Sample records for incremental feature selection

  1. A Robust Shape Reconstruction Method for Facial Feature Point Detection.

    PubMed

    Tan, Shuqiu; Chen, Dongyi; Guo, Chenggang; Huang, Zhiqi

    2017-01-01

    Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it remains a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.

  2. Embedded Incremental Feature Selection for Reinforcement Learning

    DTIC Science & Technology

    2012-05-01

    Prior to this work, feature selection for reinforcement learning has focused on linear value function approximation (Kolter and Ng, 2009; Parr et al...In Proceedings of the 23rd International Conference on Machine Learning, pages 449–456. Kolter, J. Z. and Ng, A. Y. (2009). Regularization and feature

  3. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    PubMed

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture.

    PubMed

    Chen, C L Philip; Liu, Zhulin

    2018-01-01

    A Broad Learning System (BLS) that aims to offer an alternative way of learning to deep structure is proposed in this paper. Deep structures and their learning suffer from a time-consuming training process because of the large number of connecting parameters in filters and layers. Moreover, they require a complete retraining process if the structure is not sufficient to model the system. The BLS is established in the form of a flat network, where the original inputs are transferred and placed as "mapped features" in feature nodes and the structure is expanded in the wide sense in the "enhancement nodes." Incremental learning algorithms are developed for fast remodeling in broad expansion without a retraining process when the network is deemed to need expansion. Two incremental learning algorithms are given, for the increment of the feature nodes (or filters in deep structure) and for the increment of the enhancement nodes. The designed model and algorithms are very versatile for selecting a model rapidly. In addition, another incremental learning algorithm is developed for a system that has already been modeled but encounters a new incoming input; specifically, the system can be remodeled incrementally without retraining from the beginning. A satisfactory result for model reduction using singular value decomposition is also presented to simplify the final structure. Compared with existing deep neural networks, experimental results on the Modified National Institute of Standards and Technology (MNIST) database and the NYU NORB object recognition benchmark dataset demonstrate the effectiveness of the proposed BLS.
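The flat architecture described above can be sketched in a few dozen lines: random, untrained "mapped feature" and "enhancement" layers, with only the output weights solved in closed form by ridge regression, and broad expansion by appending enhancement nodes. This is an illustrative sketch under stated assumptions, not the authors' implementation: the paper derives an incremental pseudoinverse update, whereas for brevity this sketch re-solves the ridge system after expansion; the tanh activation, layer sizes, and regularization constant are assumptions.

```python
import math, random

def matmul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(A, B):
    """Gauss-Jordan solve of A X = B for square A (partial pivoting)."""
    n = len(A)
    M = [A[i][:] + B[i][:] for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for r in range(n):
            if r != i and M[r][i]:
                f = M[r][i]
                M[r] = [v - f * w for v, w in zip(M[r], M[i])]
    return [row[n:] for row in M]

def ridge(A, Y, lam=1e-6):
    """Output weights W = (A'A + lam*I)^(-1) A'Y (ridge pseudoinverse)."""
    At = [list(c) for c in zip(*A)]
    G = matmul(At, A)
    for i in range(len(G)):
        G[i][i] += lam
    return solve(G, matmul(At, Y))

def tanh_layer(X, W):
    return [[math.tanh(v) for v in row] for row in matmul(X, W)]

def with_bias(X):
    return [row + [1.0] for row in X]

class BLS:
    """Flat broad network: random mapped-feature and enhancement nodes;
    only the output weights are trained, by ridge regression."""
    def __init__(self, n_map=4, n_enh=4, seed=1):
        self.rng = random.Random(seed)
        self.n_map, self.n_enh = n_map, n_enh

    def _rand(self, rows, cols):
        return [[self.rng.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

    def _design(self, X):
        Z = tanh_layer(with_bias(X), self.Wf)       # mapped feature nodes
        H = tanh_layer(with_bias(Z), self.We)       # enhancement nodes
        return with_bias([z + h for z, h in zip(Z, H)])

    def fit(self, X, Y):
        self.Wf = self._rand(len(X[0]) + 1, self.n_map)
        self.We = self._rand(self.n_map + 1, self.n_enh)
        self._X, self._Y = X, Y
        self.W = ridge(self._design(X), Y)
        return self

    def add_enhancement(self, k):
        # Broad expansion: append k new enhancement nodes. The paper updates
        # the pseudoinverse incrementally; this sketch refits the output
        # weights, which yields the same solution on the stored data.
        new_cols = self._rand(self.n_map + 1, k)
        for row, extra in zip(self.We, new_cols):
            row.extend(extra)
        self.n_enh += k
        self.W = ridge(self._design(self._X), self._Y)
        return self

    def predict(self, X):
        return matmul(self._design(X), self.W)
```

On a toy XOR regression the random broad features are expressive enough for the ridge solution to interpolate the targets, before and after broadening.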

  5. Reliability assessment of selected indicators of tree health

    Treesearch

    Pawel M. Lech

    2000-01-01

    The measurements of electrical resistance of near-cambium tissues, selected biometric features of needles and shoots, and the annual radial increment as well as visual estimates of crown defoliation were performed on about 100 Norway spruce trees in three 60- to 70-year-old stands located in the Western Sudety Mountains. The defoliation, electrical resistance, and...

  6. An incremental approach to genetic-algorithms-based classification.

    PubMed

    Guan, Sheng-Uei; Zhu, Fangming

    2005-04-01

    Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research work explores incremental learning with statistical algorithms or neural networks, rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multiagent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can deal with the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates as compared to the retraining GA. Possible applications for continuous incremental training and feature selection are also discussed.
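The "integration" idea above, keeping old solutions and extending chromosomes when new attributes arrive, can be illustrated with a minimal elitist GA. This is a hedged sketch, not the paper's algorithm: individuals here are weights of a simple linear threshold rule, the gene for a new attribute is initialised to zero so existing behaviour is preserved exactly, and the fitness function, operators, and population size are all assumptions.

```python
import random

def predict(w, row):
    # Last gene is a bias; zip pairs the remaining genes with attributes.
    return 1 if sum(a * b for a, b in zip(w, row)) + w[-1] > 0 else 0

def fitness(w, X, y):
    return sum(predict(w, r) == l for r, l in zip(X, y)) / len(y)

def evolve(pop, X, y, rng, gens=30):
    """Elitist GA: keep the top half, refill with crossover + mutation.
    Because the elite survive unchanged, the best fitness never drops."""
    for _ in range(gens):
        pop = sorted(pop, key=lambda w: -fitness(w, X, y))
        elite = pop[: len(pop) // 2]
        children = []
        while len(elite) + len(children) < len(pop):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(len(child))
            child[i] += rng.gauss(0.0, 0.3)     # mutation
            children.append(child)
        pop = elite + children
    return sorted(pop, key=lambda w: -fitness(w, X, y))

def integrate(pop):
    """'Integration': keep old genes; insert a gene for the new attribute,
    initialised to zero so existing behaviour is preserved."""
    return [w[:-1] + [0.0] + w[-1:] for w in pop]
```

After `integrate`, every individual classifies exactly as before the new attribute arrived, and further evolution can only improve the elitist best.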

  7. Computational Prediction of Protein Epsilon Lysine Acetylation Sites Based on a Feature Selection Method.

    PubMed

    Gao, JianZhao; Tao, Xue-Wen; Zhao, Jia; Feng, Yuan-Ming; Cai, Yu-Dong; Zhang, Ning

    2017-01-01

    Lysine acetylation, as one type of post-translational modification (PTM), plays key roles in cellular regulation and can be involved in a variety of human diseases. However, it is often costly and time-consuming to use traditional experimental approaches to identify lysine acetylation sites. Therefore, effective computational methods should be developed to predict the acetylation sites. In this study, we developed a position-specific method for epsilon lysine acetylation site prediction. Sequences of acetylated proteins were retrieved from the UniProt database. Various kinds of features, such as position-specific scoring matrix (PSSM), amino acid factors (AAF), and disorder, were incorporated. A feature selection method based on mRMR (Maximum Relevance Minimum Redundancy) and IFS (Incremental Feature Selection) was employed. Finally, 319 optimal features were selected from a total of 541 features. Using the 319 optimal features to encode peptides, a predictor was constructed based on dagging. As a result, an accuracy of 69.56% with an MCC of 0.2792 was achieved. We analyzed the optimal features, which suggested some important factors determining the lysine acetylation sites. We developed a position-specific method for epsilon lysine acetylation site prediction. A set of optimal features was selected. Analysis of the optimal features provided insights into the mechanism of lysine acetylation sites, providing guidance for experimental validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
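The mRMR-plus-IFS pipeline used in this record (and in several records below) can be sketched in miniature: rank all features greedily by relevance minus redundancy, then evaluate growing prefixes of the ranked list and keep the smallest prefix with the best accuracy. This is an illustrative sketch under stated assumptions: absolute Pearson correlation stands in for the mutual-information criteria, and a nearest-centroid classifier stands in for the paper's dagging predictor.

```python
import math

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    dy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return 0.0 if dx == 0 or dy == 0 else num / (dx * dy)

def mrmr_rank(X, y):
    """Greedily rank features: maximise relevance to the label minus
    mean redundancy with the features already selected."""
    d = len(X[0])
    cols = [[row[j] for row in X] for j in range(d)]
    rel = [abs(corr(c, y)) for c in cols]
    selected, remaining = [], list(range(d))
    while remaining:
        def score(j):
            if not selected:
                return rel[j]
            red = sum(abs(corr(cols[j], cols[k])) for k in selected) / len(selected)
            return rel[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

def centroid_accuracy(X, y, feats):
    """Resubstitution accuracy of a nearest-centroid rule on `feats`."""
    classes = sorted(set(y))
    cents = {c: [sum(r[j] for r, l in zip(X, y) if l == c) / y.count(c)
                 for j in feats] for c in classes}
    hits = sum(min(classes, key=lambda c: sum((r[j] - m) ** 2
               for j, m in zip(feats, cents[c]))) == l for r, l in zip(X, y))
    return hits / len(y)

def ifs(X, y):
    """IFS: evaluate growing prefixes of the mRMR-ranked list and keep
    the smallest prefix reaching the best accuracy."""
    order = mrmr_rank(X, y)
    best_feats, best_acc = [], -1.0
    for i in range(1, len(order) + 1):
        acc = centroid_accuracy(X, y, order[:i])
        if acc > best_acc:
            best_feats, best_acc = order[:i], acc
    return best_feats, best_acc
```

On a toy set with one informative feature, one noisy feature, and one redundant copy, the procedure keeps only the informative one.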

  8. Prediction of Protein Modification Sites of Pyrrolidone Carboxylic Acid Using mRMR Feature Selection and Analysis

    PubMed Central

    Zheng, Lu-Lu; Niu, Shen; Hao, Pei; Feng, KaiYan; Cai, Yu-Dong; Li, Yixue

    2011-01-01

    Pyrrolidone carboxylic acid (PCA) is formed during a common post-translational modification (PTM) of extracellular and multi-pass membrane proteins. In this study, we developed a new predictor to predict the modification sites of PCA based on maximum relevance minimum redundancy (mRMR) and incremental feature selection (IFS). We incorporated 727 features belonging to 7 kinds of protein properties to predict the modification sites, including sequence conservation, residual disorder, amino acid factor, secondary structure and solvent accessibility, gain/loss of amino acid during evolution, propensity of amino acid to be conserved at protein-protein interfaces and the protein surface, and deviation of side chain carbon atom number. Among these 727 features, 244 features were selected by mRMR and IFS as the optimized features for the prediction, with which the prediction model achieved a maximum MCC of 0.7812. Feature analysis showed that all feature types contributed to the modification process. Further site-specific feature analysis showed that the features derived from PCA's surrounding sites contributed more to the determination of PCA sites than other sites. The detailed feature analysis in this paper might provide important clues for understanding the mechanism of PCA formation and guide relevant experimental validations. PMID:22174779

  9. Boosted Regression Trees Outperforms Support Vector Machines in Predicting (Regional) Yields of Winter Wheat from Single and Cumulated Dekadal Spot-VGT Derived Normalized Difference Vegetation Indices

    NASA Astrophysics Data System (ADS)

    Stas, Michiel; Dong, Qinghan; Heremans, Stien; Zhang, Beier; Van Orshoven, Jos

    2016-08-01

    This paper compares two machine learning techniques to predict regional winter wheat yields. The models, based on Boosted Regression Trees (BRT) and Support Vector Machines (SVM), are constructed from Normalized Difference Vegetation Indices (NDVI) derived from low-resolution SPOT VEGETATION satellite imagery. Three types of NDVI-related predictors were used: Single NDVI, Incremental NDVI and Targeted NDVI. BRT and SVM were first used to select features with high relevance for predicting the yield. Although the exact selections differed between the prefectures, certain periods with high influence scores for multiple prefectures could be identified. The same period of high influence, stretching from March to June, was detected by both machine learning methods. After feature selection, BRT and SVM models were applied to the subset of selected features for actual yield forecasting. Whereas both machine learning methods returned very low prediction errors, BRT seems to slightly but consistently outperform SVM.
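How a boosted-regression-tree model both fits a target and scores feature influence can be sketched with depth-1 trees (stumps) fit to residuals: each chosen split's squared-error reduction is credited to the splitting feature, and the summed credits rank features much as BRT influence scores rank NDVI periods in the record above. This is a pure-Python illustration; the stump depth, squared-error loss, and learning rate are assumptions, not the paper's configuration.

```python
def best_stump(X, r):
    """Best single split (feature, threshold) minimising SSE on residuals."""
    n = len(X)
    best = None  # (sse, feature, threshold, left_mean, right_mean)
    for j in range(len(X[0])):
        vals = sorted(set(row[j] for row in X))
        for a, b in zip(vals, vals[1:]):
            thr = (a + b) / 2
            left = [r[i] for i in range(n) if X[i][j] <= thr]
            right = [r[i] for i in range(n) if X[i][j] > thr]
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - lm) ** 2 for v in left)
                   + sum((v - rm) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, thr, lm, rm)
    return best

def boost(X, y, rounds=60, lr=0.3):
    """Gradient boosting with stumps; returns base value, stumps, gains."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps, gain = [], [0.0] * len(X[0])
    for _ in range(rounds):
        r = [t - p for t, p in zip(y, pred)]
        sse0 = sum(v * v for v in r)
        sse, j, thr, lm, rm = best_stump(X, r)
        gain[j] += sse0 - sse        # influence credited to feature j
        stumps.append((j, thr, lr * lm, lr * rm))
        pred = [p + (lr * lm if row[j] <= thr else lr * rm)
                for p, row in zip(pred, X)]
    return base, stumps, gain

def boost_predict(base, stumps, row):
    return base + sum(lv if row[j] <= thr else rv for j, thr, lv, rv in stumps)
```

On a toy set where the target follows feature 0, the accumulated gains concentrate on that feature.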

  10. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection.

    PubMed

    Ma, Xin; Guo, Jing; Sun, Xiao

    2015-01-01

    The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features have important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and 0.737 Matthews correlation coefficient). High prediction accuracy and successful prediction performance suggested that our method can be a useful approach to identify RNA-binding proteins from sequence information.

  11. Prediction of active sites of enzymes by maximum relevance minimum redundancy (mRMR) feature selection.

    PubMed

    Gao, Yu-Fei; Li, Bi-Qing; Cai, Yu-Dong; Feng, Kai-Yan; Li, Zhan-Dong; Jiang, Yang

    2013-01-27

    Identification of catalytic residues plays a key role in understanding how enzymes work. Although numerous computational methods have been developed to predict catalytic residues and active sites, the prediction accuracy remains relatively low with high false positives. In this work, we developed a novel predictor based on the Random Forest algorithm (RF) aided by the maximum relevance minimum redundancy (mRMR) method and incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility to predict active sites of enzymes and achieved an overall accuracy of 0.885687 and MCC of 0.689226 on an independent test dataset. Feature analysis showed that every category of the features except disorder contributed to the identification of active sites. It was also shown via the site-specific feature analysis that the features derived from the active site itself contributed most to the active site determination. Our prediction method may become a useful tool for identifying the active sites and the key features identified by the paper may provide valuable insights into the mechanism of catalysis.

  12. Prediction of lysine glutarylation sites by maximum relevance minimum redundancy feature selection.

    PubMed

    Ju, Zhe; He, Jian-Jun

    2018-06-01

    Lysine glutarylation is a new type of protein acylation modification in both prokaryotes and eukaryotes. To better understand the molecular mechanism of glutarylation, it is important to identify glutarylated substrates and their corresponding glutarylation sites accurately. In this study, a novel bioinformatics tool named GlutPred is developed to predict glutarylation sites by using multiple feature extraction and maximum relevance minimum redundancy feature selection. On the one hand, amino acid factors, binary encoding, and the composition of k-spaced amino acid pairs are incorporated to encode glutarylation sites, and the maximum relevance minimum redundancy method and the incremental feature selection algorithm are adopted to remove redundant features. On the other hand, a biased support vector machine algorithm is used to handle the imbalance in the glutarylation sites training dataset. As illustrated by 10-fold cross-validation, GlutPred achieves satisfactory performance with a sensitivity of 64.80%, a specificity of 76.60%, an accuracy of 74.90%, and a Matthews correlation coefficient of 0.3194. Feature analysis shows that some k-spaced amino acid pair features play the most important roles in the prediction of glutarylation sites. The conclusions derived from this study might provide some clues for understanding the molecular mechanisms of glutarylation. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Neural Determinants of Task Performance during Feature-Based Attention in Human Cortex

    PubMed Central

    Gong, Mengyuan

    2018-01-01

    Studies of feature-based attention have associated activity in a dorsal frontoparietal network with putative attentional priority signals. Yet, how this neural activity mediates attentional selection and whether it guides behavior are fundamental questions that require investigation. We reasoned that endogenous fluctuations in the quality of attentional priority should influence task performance. Human subjects detected a speed increment while viewing clockwise (CW) or counterclockwise (CCW) motion (baseline task) or while attending to either direction amid distracters (attention task). In an fMRI experiment, direction-specific neural pattern similarity between the baseline task and the attention task revealed a higher level of similarity for correct than incorrect trials in frontoparietal regions. Using transcranial magnetic stimulation (TMS), we disrupted posterior parietal cortex (PPC) and found a selective deficit in the attention task, but not in the baseline task, demonstrating the necessity of this cortical area during feature-based attention. These results reveal that frontoparietal areas maintain attentional priority that facilitates successful behavioral selection. PMID:29497703

  14. Analysis of A Drug Target-based Classification System using Molecular Descriptors.

    PubMed

    Lu, Jing; Zhang, Pin; Bi, Yi; Luo, Xiaomin

    2016-01-01

    Drug-target interaction is an important topic in drug discovery and drug repositioning. The KEGG database offers a drug annotation and classification using a target-based classification system. In this study, we investigated five target-based classes: (I) G protein-coupled receptors; (II) nuclear receptors; (III) ion channels; (IV) enzymes; (V) pathogens, using molecular descriptors to represent each drug compound. Two popular feature selection methods, maximum relevance minimum redundancy and incremental feature selection, were adopted to extract the important descriptors. Meanwhile, an optimal prediction model based on the nearest neighbor algorithm was constructed, which achieved the best result in identifying drug target-based classes. Finally, some key descriptors were discussed to uncover their important roles in the identification of drug-target classes.
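The record's prediction engine is a nearest-neighbor model. A minimal sketch: 1-NN over descriptor vectors with Euclidean distance, evaluated by leave-one-out. The distance metric and the evaluation scheme are illustrative assumptions, not details taken from the paper.

```python
import math

def nn_predict(train_X, train_y, row):
    """Copy the label of the closest training sample (1-NN)."""
    i = min(range(len(train_X)), key=lambda k: math.dist(train_X[k], row))
    return train_y[i]

def loo_accuracy(X, y):
    """Leave-one-out accuracy: predict each sample from all the others."""
    hits = 0
    for i in range(len(X)):
        hits += nn_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i]) == y[i]
    return hits / len(X)
```

With well-separated descriptor clusters, every held-out sample is recovered from its neighbors.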

  15. Identifying Patients with Atrioventricular Septal Defect in Down Syndrome Populations by Using Self-Normalizing Neural Networks and Feature Selection.

    PubMed

    Pan, Xiaoyong; Hu, Xiaohua; Zhang, Yu Hang; Feng, Kaiyan; Wang, Shao Peng; Chen, Lei; Huang, Tao; Cai, Yu Dong

    2018-04-12

    Atrioventricular septal defect (AVSD) is a clinically significant subtype of congenital heart disease (CHD) that severely influences the health of babies at birth and is associated with Down syndrome (DS). Thus, exploring the differences in functional genes in DS samples with and without AVSD is a critical way to investigate the complex association between AVSD and DS. In this study, we present a computational method to distinguish DS patients with AVSD from those without AVSD using the newly proposed self-normalizing neural network (SNN). First, each patient was encoded by the copy number of probes on chromosome 21. The encoded features were ranked by the reliable Monte Carlo feature selection (MCFS) method to obtain a ranked feature list. Based on this feature list, we used a two-stage incremental feature selection to construct two series of feature subsets and applied SNNs to build classifiers to identify optimal features. Results show that 2737 optimal features were obtained, and the corresponding optimal SNN classifier constructed on these features yielded a Matthews correlation coefficient (MCC) value of 0.748. For comparison, random forest was also used to build classifiers and uncover optimal features; this method achieved an optimal MCC value of 0.582 when the top 132 features were utilized. Finally, we analyzed some key features derived from the optimal features in the SNN and found literature support that further reveals their essential roles.
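The classifier above is built on the SELU activation, whose fixed constants push layer activations toward zero mean and unit variance (the "self-normalizing" property). The two constants below are the standard published SELU values; the tiny dense layer is an illustrative sketch, not the paper's architecture.

```python
import math

ALPHA = 1.6732632423543772   # standard SELU alpha
SCALE = 1.0507009873554805   # standard SELU lambda

def selu(x):
    """SELU: SCALE*x for x > 0, SCALE*ALPHA*(exp(x) - 1) otherwise."""
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)

def selu_layer(inputs, weights):
    """One dense SELU layer; weights[u] is unit u's weight vector."""
    return [selu(sum(w * v for w, v in zip(row, inputs))) for row in weights]
```

Note that the activation is bounded below by -SCALE*ALPHA, which is part of what keeps variance from collapsing or exploding across layers.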

  16. Sclerochronology of Holocene oyster shells (Crassostrea gigas) from the West Coast of Bohai Sea, China

    NASA Astrophysics Data System (ADS)

    Fan, C.; Koeniger, P.; Wang, H.; Frechen, M.

    2009-04-01

    Sclerochronology, the study of periodic increments in skeletal organisms, can decipher the life history and environmental records preserved in fossil shells. Although there have been a number of studies that apply isotopic analyses to shells in the open ocean and fresh water, investigations of brackish environments are rare. One of the common inhabitants of estuaries is the Crassostrea oyster. Kirby et al. (1998) demonstrated a close correspondence between the ligamental increments of convex and concave bands and yearly δ18O cycles; Andrus and Crowe (2000) found a close correspondence between translucent growth bands on the cross-section of the hinge and yearly δ18O cycles. They concluded that the morphological features on the hinge and the growth bands on the cross-section are formed annually and can be used to accurately determine age and growth rate in this species. However, Surge et al. (2001) could not confirm that these morphologic features have seasonal significance in C. virginica shells; therefore, these concave ridges are not reliable independent proxies of seasonality. These studies were carried out with C. virginica shells; none used native C. gigas, which is widely distributed along the Pacific coast. C. gigas has been introduced from its native home to places all over the world, ranging from North America to Australia and Europe, and has become an important commercial harvest in many of them. Buried Holocene oyster shells of C. gigas were sampled from a huge buried oyster reef on the west coast of the Bohai Sea, China. One of these shells was selected for high-resolution micro-sampling and stable isotope analyses to test the assumption that C. gigas ligamental increments are annual in nature. We analyzed 236 consecutive samples from the shell to show, by comparison with the δ18O profiles, that the morphologic features on both the hinge and the cross-section are annual. We tested the assumption that the morphologic features of C. gigas are delineated by convex tops and concave bottoms on the hinge and corresponding translucent growth bands on the cross-section. The shell has 13.5 ligamental increments, based on 13.5 convex bands and 13 concave bottoms on the hinge. Convex tops correspond to δ18O minima (summers), whereas concave bottoms correspond to δ18O maxima, which formed during the low temperatures of winter in the study area. We demonstrate that the ligamental increments of convex tops, concave bottoms, and translucent growth bands in the studied C. gigas shell are suitable indicators of annual growth increments. The life spans, growth rates, and timing of death can be determined from the ligament increments and isotope profiles of buried oyster shells.

  17. The Personality Assessment Inventory as a Proxy for the Psychopathy Checklist-Revised: Testing the Incremental Validity and Cross-Sample Robustness of the Antisocial Features Scale

    ERIC Educational Resources Information Center

    Douglas, Kevin S.; Guy, Laura S.; Edens, John F.; Boer, Douglas P.; Hamilton, Jennine

    2007-01-01

    The Personality Assessment Inventory's (PAI's) ability to predict psychopathic personality features, as assessed by the Psychopathy Checklist-Revised (PCL-R), was examined. To investigate whether the PAI Antisocial Features (ANT) Scale and subscales possessed incremental validity beyond other theoretically relevant PAI scales, optimized regression…

  18. Analysis and Identification of Aptamer-Compound Interactions with a Maximum Relevance Minimum Redundancy and Nearest Neighbor Algorithm

    PubMed Central

    Wang, ShaoPeng; Zhang, Yu-Hang; Lu, Jing; Cui, Weiren; Hu, Jerry; Cai, Yu-Dong

    2016-01-01

    The development of biochemistry and molecular biology has revealed an increasingly important role of compounds in several biological processes. Like aptamer-protein interactions, aptamer-compound interactions attract increasing attention. However, it is time-consuming to select proper aptamers against compounds using traditional methods, such as exponential enrichment. Thus, there is an urgent need to design effective computational methods for searching for effective aptamers against compounds. This study attempted to extract important features for aptamer-compound interactions using feature selection methods, such as Maximum Relevance Minimum Redundancy, as well as incremental feature selection. Each aptamer-compound pair was represented by properties derived from the aptamer and the compound, including frequencies of single nucleotides and dinucleotides for the aptamer, as well as the constitutional, electrostatic, quantum-chemical, and space conformational descriptors of the compounds. As a result, some important features were obtained. To confirm the importance of the obtained features, we further discussed the associations between them and aptamer-compound interactions. Simultaneously, an optimal prediction model based on the nearest neighbor algorithm was built to identify aptamer-compound interactions, which has the potential to be a useful tool for the identification of novel aptamer-compound interactions. The program is available upon request. PMID:26955638

  20. DNABP: Identification of DNA-Binding Proteins Based on Feature Selection Using a Random Forest and Predicting Binding Residues.

    PubMed

    Ma, Xin; Guo, Jing; Sun, Xiao

    2016-01-01

    DNA-binding proteins are fundamentally important in cellular processes. Several computational-based methods have been developed to improve the prediction of DNA-binding proteins in previous years. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.

  1. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    NASA Astrophysics Data System (ADS)

    Mahmoud Al-Qudah, Dua'a; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Due to the limited size and excessive cost of cache storage compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. However, conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and Randomized Policy may discard important pages just before they are used. Furthermore, conventional algorithms cannot be well optimized, since intelligently evicting a page requires some decision before replacement. Hence, most researchers propose integrating intelligent classifiers with replacement algorithms to improve replacement performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant to and influence classifier prediction accuracy. The results show that wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded with NB, and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43%, and 0.22% over using NB with all features, using BFS, and using IWSS embedded with NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91%, and 0.04% over using the J48 classifier with all features, using IWSS embedded with NB, and using BFS, respectively. Meanwhile, IWSS embedded with NB speeds up the NB and J48 classifiers much more than BFS and PSO do, reducing the computation time of NB by 0.1383 and of J48 by 2.998.
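The wrapper idea in this record can be sketched as a greedy forward search that adds whichever feature most improves the wrapped classifier's accuracy, here a tiny Gaussian naive Bayes (NB). This is an illustrative sketch: BFS and IWSS are more elaborate search strategies than plain forward selection, and a real evaluation would score subsets by cross-validation rather than the resubstitution accuracy used here for brevity.

```python
import math

def gnb_fit(X, y, feats):
    """Per-class prior and per-feature (mean, variance) estimates."""
    model = {}
    for c in set(y):
        rows = [r for r, l in zip(X, y) if l == c]
        stats = []
        for j in feats:
            v = [r[j] for r in rows]
            m = sum(v) / len(v)
            var = sum((x - m) ** 2 for x in v) / len(v) + 1e-6  # smoothing
            stats.append((m, var))
        model[c] = (len(rows) / len(y), stats)
    return model

def gnb_predict(model, row, feats):
    def loglik(c):
        prior, stats = model[c]
        s = math.log(prior)
        for j, (m, var) in zip(feats, stats):
            s -= 0.5 * math.log(2 * math.pi * var) + (row[j] - m) ** 2 / (2 * var)
        return s
    return max(model, key=loglik)

def accuracy(X, y, feats):
    model = gnb_fit(X, y, feats)
    return sum(gnb_predict(model, r, feats) == l for r, l in zip(X, y)) / len(y)

def forward_wrapper(X, y):
    """Greedy forward wrapper: stop when no feature improves accuracy."""
    feats, best = [], 0.0
    remaining = list(range(len(X[0])))
    while remaining:
        j = max(remaining, key=lambda j: accuracy(X, y, feats + [j]))
        acc = accuracy(X, y, feats + [j])
        if acc <= best:
            break
        feats, best = feats + [j], acc
        remaining.remove(j)
    return feats, best
```

On a toy set with one constant and one discriminative feature, the search keeps only the discriminative one, shrinking the feature set exactly as the record describes.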

  2. The influence of uncertain map features on risk beliefs and perceived ambiguity for maps of modeled cancer risk from air pollution

    PubMed Central

    Myers, Jeffrey D.

    2012-01-01

    Maps are often used to convey information generated by models, for example, modeled cancer risk from air pollution. The concrete nature of images, such as maps, may convey more certainty than warranted for modeled information. Three map features were selected to communicate the uncertainty of modeled cancer risk: (a) map contours appeared in or out of focus, (b) one or three colors were used, and (c) a verbal-relative or numeric risk expression was used in the legend. Study aims were to assess how these features influenced risk beliefs and the ambiguity of risk beliefs at four assigned map locations that varied by risk level. We applied an integrated conceptual framework to conduct this full factorial experiment with 32 maps that varied by the three dichotomous features and four risk levels; 826 university students participated. Data was analyzed using structural equation modeling. Unfocused contours and the verbal-relative risk expression generated more ambiguity than their counterparts. Focused contours generated stronger risk beliefs for higher risk levels and weaker beliefs for lower risk levels. Number of colors had minimal influence. The magnitude of risk level, conveyed using incrementally darker shading, had a substantial dose-response influence on the strength of risk beliefs. Personal characteristics of prior beliefs and numeracy also had substantial influences. Bottom-up and top-down information processing suggest why iconic visual features of incremental shading and contour focus had the strongest visual influences on risk beliefs and ambiguity. Variations in contour focus and risk expression show promise for fostering appropriate levels of ambiguity. PMID:22985196

  3. PREAL: prediction of allergenic protein by maximum Relevance Minimum Redundancy (mRMR) feature selection

    PubMed Central

    2013-01-01

    Background Assessment of the potential allergenicity of a protein is necessary whenever transgenic proteins are introduced into the food chain. Bioinformatics approaches to allergen prediction have evolved appreciably in recent years in sophistication and performance. However, which features are critical for a protein's allergenicity has not been fully investigated yet. Results We present a more comprehensive model for allergenic protein prediction in a 128-feature space, integrating various properties of proteins such as biochemical and physicochemical properties, sequential features, and subcellular locations. The overall accuracy in cross-validation reached 93.42% to 100% with our new method. The Maximum Relevance Minimum Redundancy (mRMR) method and an Incremental Feature Selection (IFS) procedure were applied to determine which features are essential for allergenicity. Performance comparisons showed the superiority of our method over widely used existing methods. More importantly, it was observed that subcellular locations and amino acid composition played major roles in determining the allergenicity of proteins, particularly the extracellular/cell surface and vacuole subcellular locations for wheat and soybean. To facilitate allergen prediction, we implemented our computational method in a web application, available at http://gmobl.sjtu.edu.cn/PREAL/index.php. Conclusions Our new approach could improve the accuracy of allergen prediction. The findings may also provide novel insights into the mechanism of allergies. PMID:24565053
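The mRMR step named above greedily ranks features by relevance to the label minus average redundancy with features already picked. As a hedged sketch of that idea only: the simplification below uses absolute Pearson correlation for both terms (mRMR is often defined with mutual information), and the toy features are invented; the paper's 128-feature space is not reproduced.

```python
import random

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def mrmr_rank(features, label):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance(feature, label) - mean redundancy with already-picked features."""
    remaining = list(range(len(features)))
    ranked = []
    while remaining:
        def score(f):
            rel = abs(pearson(features[f], label))
            red = (sum(abs(pearson(features[f], features[s])) for s in ranked)
                   / len(ranked)) if ranked else 0.0
            return rel - red
        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

# Toy data: f0 is informative, f1 nearly duplicates f0 (redundant), f2 is noise.
random.seed(1)
label = [0.0] * 20 + [1.0] * 20
f0 = [l * 4 + random.gauss(0, 1) for l in label]
f1 = [v + random.gauss(0, 0.1) for v in f0]
f2 = [random.gauss(0, 1) for _ in label]
ranked = mrmr_rank([f0, f1, f2], label)
```

An IFS pass would then train a classifier on each prefix `ranked[:k]` and keep the best-performing `k`, which is how an optimal feature subset is typically extracted from an mRMR ranking.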

  4. Decision theory for computing variable and value ordering decisions for scheduling problems

    NASA Technical Reports Server (NTRS)

    Linden, Theodore A.

    1993-01-01

    Heuristics that guide search are critical when solving large planning and scheduling problems, but most variable and value ordering heuristics are sensitive to only one feature of the search state. One wants to combine evidence from all features of the search state into a subjective probability that a value choice is best, but there has been no solid semantics for merging evidence when it is conceived in these terms. Instead, variable and value ordering decisions should be viewed as problems in decision theory. This led to two key insights: (1) The fundamental concept that allows heuristic evidence to be merged is the net incremental utility that will be achieved by assigning a value to a variable. Probability distributions about net incremental utility can merge evidence from the utility function, binary constraints, resource constraints, and other problem features. The subjective probability that a value is the best choice is then derived from probability distributions about net incremental utility. (2) The methods used for rumor control in Bayesian Networks are the primary way to prevent cycling in the computation of probable net incremental utility. These insights lead to semantically justifiable ways to compute heuristic variable and value ordering decisions that merge evidence from all available features of the search state.

  5. Autonomous mental development with selective attention, object perception, and knowledge representation

    NASA Astrophysics Data System (ADS)

    Ban, Sang-Woo; Lee, Minho

    2008-04-01

    Knowledge-based clustering and autonomous mental development remain high-priority research topics, in which neural network learning techniques are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data, represent knowledge regarding objects, and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual "what" pathway in the brain. A stereo saliency map model selectively decides salient object areas by additionally considering a local symmetry feature. The incremental object perception model builds clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features for a selected object are used as inputs of the GFTART. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object using the learned information. Experimental results with real data demonstrate the validity of this approach.

  6. Prediction of hot regions in protein-protein interaction by combining density-based incremental clustering with feature-based classification.

    PubMed

    Hu, Jing; Zhang, Xiaolong; Liu, Xiaoming; Tang, Jinshan

    2015-06-01

    Discovering hot regions in protein-protein interaction is important for drug and protein design, while experimental identification of hot regions is a time-consuming and labor-intensive effort; thus, the development of predictive models can be very helpful. In hot region prediction research, some models are based on structure information, and others are based on a protein interaction network. However, the prediction accuracy of these methods can still be improved. In this paper, a new method is proposed for hot region prediction, which combines density-based incremental clustering with feature-based classification. The method uses density-based incremental clustering to obtain rough hot regions, and uses feature-based classification to remove the non-hot spot residues from the rough hot regions. Experimental results show that the proposed method significantly improves the prediction performance of hot regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. m-BIRCH: an online clustering approach for computer vision applications

    NASA Astrophysics Data System (ADS)

    Madan, Siddharth K.; Dana, Kristin J.

    2015-03-01

    We adapt a classic online clustering algorithm called Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) to incrementally cluster large datasets of features commonly used in multimedia and computer vision. We call the adapted version modified-BIRCH (m-BIRCH). The algorithm uses only a fraction of the dataset memory to perform clustering and updates the clustering decisions when new data come in. Modifications made in m-BIRCH enable data-driven parameter selection and effectively handle varying-density regions in the feature space. Data-driven parameter selection automatically controls the level of coarseness of the data summarization. Effective handling of varying-density regions is necessary to represent the different density regions well in the data summarization. We use m-BIRCH to cluster 840K color SIFT descriptors and 60K outlier-corrupted grayscale patches. We use the algorithm to cluster datasets consisting of challenging non-convex clustering patterns. Our implementation of the algorithm provides a useful clustering tool and is made publicly available.
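BIRCH's one-pass, bounded-memory behavior comes from summarizing each cluster as a clustering feature (CF): the point count, linear sum, and squared sum, from which centroid and radius are recoverable. The sketch below shows only this CF insertion step on toy 2-D points; the CF-tree, rebuilding, and the m-BIRCH modifications are omitted, and the threshold and data are assumptions for illustration.

```python
import math

class CF:
    """Clustering feature: count, linear sum, and squared sum of its points."""
    def __init__(self, point):
        self.n = 1
        self.ls = list(point)
        self.ss = sum(x * x for x in point)

    def centroid(self):
        return [s / self.n for s in self.ls]

    def radius_if_added(self, point):
        """Cluster radius if `point` were absorbed (computable from n, ls, ss)."""
        n = self.n + 1
        ls = [s + x for s, x in zip(self.ls, point)]
        ss = self.ss + sum(x * x for x in point)
        mean_sq = ss / n
        cent_sq = sum((s / n) ** 2 for s in ls)
        return math.sqrt(max(mean_sq - cent_sq, 0.0))

    def add(self, point):
        self.n += 1
        self.ls = [s + x for s, x in zip(self.ls, point)]
        self.ss += sum(x * x for x in point)

def insert(cfs, point, threshold):
    """Absorb `point` into the nearest CF if the merged radius stays under
    `threshold`; otherwise start a new CF. One pass, O(#clusters) memory."""
    if cfs:
        nearest = min(cfs, key=lambda cf: sum(
            (c - x) ** 2 for c, x in zip(cf.centroid(), point)))
        if nearest.radius_if_added(point) <= threshold:
            nearest.add(point)
            return
    cfs.append(CF(point))

cfs = []
for p in [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]:
    insert(cfs, p, threshold=2.0)
```

The two tight groups end up in two CFs of three points each; the threshold plays the role of the coarseness parameter that m-BIRCH selects in a data-driven way.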

  8. Incremental Bayesian Category Learning From Natural Language.

    PubMed

    Frermann, Lea; Lapata, Mirella

    2016-08-01

    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words (e.g., chair is a member of the furniture category). We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: (a) the acquisition of features that discriminate among categories, and (b) the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data compared to related models while at the same time acquiring features which characterize the learned categories. (An earlier version of this work was published in Frermann and Lapata.) Copyright © 2015 Cognitive Science Society, Inc.

  9. Invariant-feature-based adaptive automatic target recognition in obscured 3D point clouds

    NASA Astrophysics Data System (ADS)

    Khuon, Timothy; Kershner, Charles; Mattei, Enrico; Alverio, Arnel; Rand, Robert

    2014-06-01

    Target recognition and classification in a 3D point cloud is a non-trivial process due to the nature of the data collected from a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, etc. Therefore, an adaptive system with a desired tolerance is required to perform classification and recognition optimally. The feature-based pattern recognition architecture described below is devised to solve single-sensor classification non-parametrically. A feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. For instance, automatic target recognition in an urban area would require different feature sets from one in a dense foliage area. The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction of 3D point clouds, including LIDAR, RADAR, and electro-optical data. This network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised and adaptive classifier with two modes: the training mode and the performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network as shown above for the decision class output. The network consists of three sequential functional modules. The first module extracts from the input cluster a set of singular-value features, or a feature vector. The feature vector is then passed to the feature normalization module to be normalized and balanced before being fed to the neural net classifier for classification. The neural net can be trained by actual or artificial novel data until each trained output reaches the declared output within the defined tolerance.
If new novel data are added after the neural net has been trained, training resumes until the network has incrementally learned the new data. The associative memory capability of the neural net enables this incremental learning. The back-propagation algorithm or a support vector machine can be utilized for the classification and recognition.

  10. Learning with imperfectly labeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of learning in pattern recognition using imperfectly labeled patterns is considered. The performance of the Bayes and nearest neighbor classifiers with imperfect labels is discussed using a probabilistic model for the mislabeling of the training patterns. Schemes for training the classifier using both parametric and nonparametric techniques are presented. Methods for the correction of imperfect labels were developed. To gain an understanding of the learning process, expressions are derived for success probability as a function of training time for a one-dimensional increment error correction classifier with imperfect labels. Feature selection with imperfectly labeled patterns is described.

  11. Prediction of Protein-Protein Interaction Sites by Random Forest Algorithm with mRMR and IFS

    PubMed Central

    Li, Bi-Qing; Feng, Kai-Yan; Chen, Lei; Huang, Tao; Cai, Yu-Dong

    2012-01-01

    Prediction of protein-protein interaction (PPI) sites is one of the most challenging problems in computational biology. Although great progress has been made by employing various machine learning approaches with numerous characteristic features, the problem is still far from being solved. In this study, we developed a novel predictor based on Random Forest (RF) algorithm with the Minimum Redundancy Maximal Relevance (mRMR) method followed by incremental feature selection (IFS). We incorporated features of physicochemical/biochemical properties, sequence conservation, residual disorder, secondary structure and solvent accessibility. We also included five 3D structural features to predict protein-protein interaction sites and achieved an overall accuracy of 0.672997 and MCC of 0.347977. Feature analysis showed that 3D structural features such as Depth Index (DPX) and surface curvature (SC) contributed most to the prediction of protein-protein interaction sites. It was also shown via site-specific feature analysis that the features of individual residues from PPI sites contribute most to the determination of protein-protein interaction sites. It is anticipated that our prediction method will become a useful tool for identifying PPI sites, and that the feature analysis described in this paper will provide useful insights into the mechanisms of interaction. PMID:22937126

  12. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation, and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto the fitted tangent planes and using the extracted features to guide and constrain the local triangulation and surface propagation, the topological relationship among sample points can be established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods while avoiding improper propagation of normals across sharp edges, greatly improving the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps recognize insufficiently sampled areas and boundary parts, so the presented approach can reconstruct both open and closed surfaces without additional interference.

  13. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy

    PubMed Central

    Zhang, Lina; Zhang, Chengjin; Gao, Rui; Yang, Runtao; Song, Qing

    2016-01-01

    Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and offer potential therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48 with an accuracy of 0.925. A Relief method combined with IFS (Incremental Feature Selection) is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we have developed a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc. PMID:27662651
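The classifier selection strategy above — average base-classifier outputs and keep only the subset that helps — can be sketched as greedy forward selection over precomputed validation probabilities. The base classifiers "A", "B", "C", their probability vectors, and the labels below are invented for illustration; the paper selects among RF, SMO, NNA, and J48 trained on real features.

```python
def ensemble_accuracy(prob_lists, labels):
    """Accuracy of the ensemble that averages positive-class probabilities."""
    n = len(labels)
    correct = 0
    for i in range(n):
        avg = sum(p[i] for p in prob_lists) / len(prob_lists)
        correct += (avg > 0.5) == bool(labels[i])
    return correct / n

def greedy_select(base, labels):
    """Forward selection: start from the best single classifier and add a
    base classifier only while it strictly improves ensemble accuracy."""
    names = sorted(base, key=lambda k: ensemble_accuracy([base[k]], labels),
                   reverse=True)
    selected = [names[0]]
    best = ensemble_accuracy([base[n] for n in selected], labels)
    for name in names[1:]:
        acc = ensemble_accuracy([base[n] for n in selected + [name]], labels)
        if acc > best:
            selected.append(name)
            best = acc
    return selected, best

labels = [1, 1, 1, 1, 0, 0, 0, 0]
base = {  # hypothetical positive-class probabilities on a validation set
    "A": [0.9, 0.8, 0.7, 0.4, 0.2, 0.1, 0.3, 0.6],
    "B": [0.6, 0.4, 0.9, 0.8, 0.1, 0.4, 0.2, 0.3],
    "C": [0.4, 0.4, 0.4, 0.4, 0.6, 0.6, 0.6, 0.6],
}
selected, best = greedy_select(base, labels)
```

Here the two complementary classifiers are kept while the poor one is rejected, since averaging it in does not strictly improve accuracy.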

  14. Prediction of rat protein subcellular localization with pseudo amino acid composition based on multiple sequential features.

    PubMed

    Shi, Ruijia; Xu, Cunshuan

    2011-06-01

    The study of rat proteins is an indispensable task in experimental medicine and drug development. The function of a rat protein is closely related to its subcellular location. Based on this concept, we construct a benchmark rat protein dataset and develop a combined approach for predicting the subcellular localization of rat proteins. From the protein primary sequence, multiple sequential features are obtained by using discrete Fourier analysis, a position conservation scoring function, and increment of diversity, and these sequential features are selected as input parameters of a support vector machine. By the jackknife test, the overall success rate of prediction is 95.6% on the rat protein dataset. Our method is also evaluated on the apoptosis protein dataset and the Gram-negative bacterial protein dataset with the jackknife test; the overall success rates are 89.9% and 96.4%, respectively. The above results indicate that our proposed method is quite promising and may play a complementary role to the existing predictors in this area.

  15. The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection

    ERIC Educational Resources Information Center

    Lievens, Filip; Patterson, Fiona

    2011-01-01

    In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…

  16. Autonomous aircraft initiative study

    NASA Technical Reports Server (NTRS)

    Hewett, Marle D.

    1991-01-01

    The results of a consulting effort to aid NASA Ames-Dryden in defining a new initiative in aircraft automation are described. The initiative described is a multi-year, multi-center technology development and flight demonstration program. The initiative features the further development of technologies in aircraft automation already being pursued at multiple NASA centers and Department of Defense (DoD) research and development (R&D) facilities. The proposed initiative involves the development of technologies in intelligent systems, guidance, control, software development, airborne computing, navigation, communications, sensors, unmanned vehicles, and air traffic control. It involves the integration and implementation of these technologies to the extent necessary to conduct selected and incremental flight demonstrations.

  17. Predicting bacteriophage proteins located in host cell with feature selection technique.

    PubMed

    Ding, Hui; Liang, Zhi-Yong; Guo, Feng-Biao; Huang, Jian; Chen, Wei; Lin, Hao

    2016-04-01

    A bacteriophage is a virus that can infect a bacterium. The fate of an infected bacterium is determined by the bacteriophage proteins located in the host cell. Thus, reliably identifying bacteriophage proteins located in the host cell is extremely important to understand their functions and discover potential anti-bacterial drugs. Thus, in this paper, a computational method was developed to recognize bacteriophage proteins located in host cells based only on their amino acid sequences. The analysis of variance (ANOVA) combined with incremental feature selection (IFS) was proposed to optimize the feature set. Using a jackknife cross-validation, our method can discriminate between bacteriophage proteins located in a host cell and the bacteriophage proteins not located in a host cell with a maximum overall accuracy of 84.2%, and can further classify bacteriophage proteins located in host cell cytoplasm and in host cell membranes with a maximum overall accuracy of 92.4%. To enhance the value of the practical applications of the method, we built a web server called PHPred (〈http://lin.uestc.edu.cn/server/PHPred〉). We believe that the PHPred will become a powerful tool to study bacteriophage proteins located in host cells and to guide related drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
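The ANOVA-plus-IFS pipeline above can be illustrated directly: compute a one-way F statistic per feature between the classes and rank features by it; IFS then evaluates nested prefixes of that ranking. The two-feature synthetic dataset below is an assumption for the sketch, not the paper's amino acid features.

```python
import random

def anova_f(values, labels):
    """One-way ANOVA F statistic for one feature across class groups:
    between-group variance divided by within-group variance."""
    classes = sorted(set(labels))
    groups = [[v for v, l in zip(values, labels) if l == c] for c in classes]
    n, k = len(values), len(groups)
    grand = sum(values) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

random.seed(2)
labels = [0] * 15 + [1] * 15
informative = [l * 5 + random.gauss(0, 1) for l in labels]  # class-separated
noise = [random.gauss(0, 1) for _ in labels]                # uninformative
features = [informative, noise]
ranking = sorted(range(len(features)),
                 key=lambda f: anova_f(features[f], labels), reverse=True)
```

An IFS pass would then evaluate a classifier on `ranking[:k]` for k = 1, 2, ... under jackknife cross-validation and keep the k with the best overall accuracy, which is how the optimized feature set is obtained.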

  18. Mental sets in conduct problem youth with psychopathic features: entity versus incremental theories of intelligence.

    PubMed

    Salekin, Randall T; Lester, Whitney S; Sellers, Mary-Kate

    2012-08-01

    The purpose of the current study was to examine the effect of a motivational intervention on conduct problem youth with psychopathic features. Specifically, the current study examined conduct problem youths' mental set (or theory) regarding intelligence (entity vs. incremental) upon task performance. We assessed 36 juvenile offenders with psychopathic features and tested whether providing them with two different messages regarding intelligence would affect their functioning on a task related to academic performance. The study employed a MANOVA design with two motivational conditions and three outcomes including fluency, flexibility, and originality. Results showed that youth with psychopathic features who were given a message that intelligence grows over time, were more fluent and flexible than youth who were informed that intelligence is static. There were no significant differences between the groups in terms of originality. The implications of these findings are discussed including the possible benefits of interventions for adolescent offenders with conduct problems and psychopathic features. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  19. Word Order and Voice Influence the Timing of Verb Planning in German Sentence Production.

    PubMed

    Sauppe, Sebastian

    2017-01-01

    Theories of incremental sentence production make different assumptions about when speakers encode information about described events and when verbs are selected, accordingly. An eye tracking experiment on German testing the predictions from linear and hierarchical incrementality about the timing of event encoding and verb planning is reported. In the experiment, participants described depictions of two-participant events with sentences that differed in voice and word order. Verb-medial active sentences and actives and passives with sentence-final verbs were compared. Linear incrementality predicts that sentences with verbs placed early differ from verb-final sentences because verbs are assumed to only be planned shortly before they are articulated. By contrast, hierarchical incrementality assumes that speakers start planning with relational encoding of the event. A weak version of hierarchical incrementality assumes that only the action is encoded at the outset of formulation and selection of lexical verbs only occurs shortly before they are articulated, leading to the prediction of different fixation patterns for verb-medial and verb-final sentences. A strong version of hierarchical incrementality predicts no differences between verb-medial and verb-final sentences because it assumes that verbs are always lexically selected early in the formulation process. Based on growth curve analyses of fixations to agent and patient characters in the described pictures, and the influence of character humanness and the lack of an influence of the visual salience of characters on speakers' choice of active or passive voice, the current results suggest that while verb planning does not necessarily occur early during formulation, speakers of German always create an event representation early.

  20. Improving hot region prediction by parameter optimization of density clustering in PPI.

    PubMed

    Hu, Jing; Zhang, Xiaolong

    2016-11-01

    This paper proposes an optimized algorithm that combines density clustering with parameter selection and feature-based classification for hot region prediction. First, all residues are classified by an SVM to remove non-hot spot residues; then density clustering with parameter selection is used to find hot regions. For the density clustering, this paper studies how to select the input parameters. Density-based incremental clustering has two parameters, radius and density. We first fix the density and enumerate the radius to find a pair of parameters that leads to the maximum number of clusters, and then fix the radius and enumerate the density to find another such pair. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the other method; comparing the two predictive results, fixing the radius and enumerating the density gives slightly higher prediction accuracy than fixing the density and enumerating the radius. Copyright © 2016. Published by Elsevier Inc.
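The enumerate-one-parameter-while-fixing-the-other loop can be sketched with a deliberately simplified density clustering: connected components of points within a radius, keeping only components that meet a minimum size (the "density"). The 1-D points, candidate radii, and minimum size below are stand-ins; the actual method clusters residues in protein structures.

```python
def density_clusters(points, radius, min_size):
    """Connected components of points within `radius`, keeping components
    with at least `min_size` members (a simplified density clustering)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(points[i] - points[j]) <= radius:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return [s for s in sizes.values() if s >= min_size]

def pick_radius(points, candidates, min_size):
    """Fix the density (min_size) and enumerate the radius, keeping the
    radius that yields the maximum number of clusters."""
    best_radius, best_count = None, -1
    for r in candidates:
        count = len(density_clusters(points, r, min_size))
        if count > best_count:
            best_radius, best_count = r, count
    return best_radius, best_count

points = [0.0, 0.5, 1.0, 5.0, 5.5, 6.0, 10.0, 10.5, 11.0]
best_radius, best_count = pick_radius(points, [0.25, 0.6, 1.5, 3.0, 6.0], 2)
```

Too small a radius produces no qualifying clusters and too large a radius merges everything, so maximizing the cluster count lands on an intermediate value; the symmetric pass would fix the radius and enumerate the density.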

  1. Modular Representation of Luminance Polarity In the Superficial Layers Of Primary Visual Cortex

    PubMed Central

    Smith, Gordon B.; Whitney, David E.; Fitzpatrick, David

    2016-01-01

    Summary The spatial arrangement of luminance increments (ON) and decrements (OFF) falling on the retina provides a wealth of information used by central visual pathways to construct coherent representations of visual scenes. But how the polarity of luminance change is represented in the activity of cortical circuits remains unclear. Using wide-field epifluorescence and two-photon imaging we demonstrate a robust modular representation of luminance polarity (ON or OFF) in the superficial layers of ferret primary visual cortex. Polarity-specific domains are found with both uniform changes in luminance and single light/dark edges, and include neurons selective for orientation and direction of motion. The integration of orientation and polarity preference is evident in the selectivity and discrimination capabilities of most layer 2/3 neurons. We conclude that polarity selectivity is an integral feature of layer 2/3 neurons, ensuring that the distinction between light and dark stimuli is available for further processing in downstream extrastriate areas. PMID:26590348

  2. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies.

    PubMed

    Walshe, Catherine

    2011-12-01

    Complex, incrementally changing, context dependent and variable palliative care services are difficult to evaluate. Case study research strategies may have potential to contribute to evaluating such complex interventions, and to develop this field of evaluation research. This paper explores definitions of case study (as a unit of study, a process, and a product) and examines the features of case study research strategies which are thought to confer benefits for the evaluation of complex interventions in palliative care settings. Ten features of case study that are thought to be beneficial in evaluating complex interventions in palliative care are discussed, drawing from exemplars of research in this field. Important features are related to a longitudinal approach, triangulation, purposive instance selection, comprehensive approach, multiple data sources, flexibility, concurrent data collection and analysis, search for proving-disproving evidence, pattern matching techniques and an engaging narrative. The limitations of case study approaches are discussed including the potential for subjectivity and their complex, time consuming and potentially expensive nature. Case study research strategies have great potential in evaluating complex interventions in palliative care settings. Three key features need to be exploited to develop this field: case selection, longitudinal designs, and the use of rival hypotheses. In particular, case study should be used in situations where there is interplay and interdependency between the intervention and its context, such that it is difficult to define or find relevant comparisons.

  3. Improving the Selection, Classification, and Utilization of Army Enlisted Personnel. Project A

    DTIC Science & Technology

    1987-06-01

    performance measures, to determine whether the new predictors have incremental validity over and above the present system. These two components must be...critical aspect of this task is the demonstration of the incremental validity added by new predictors. Task 3. Measurement of School/Training Success...chances of incremental validity and classification efficiency. 3. Retain measures with adequate reliability. Using all accumulated information, the final

  4. Analysis and Prediction of Myristoylation Sites Using the mRMR Method, the IFS Method and an Extreme Learning Machine Algorithm.

    PubMed

    Wang, ShaoPeng; Zhang, Yu-Hang; Huang, GuoHua; Chen, Lei; Cai, Yu-Dong

    2017-01-01

    Myristoylation is an important hydrophobic post-translational modification that is covalently bound to the amino group of Gly residues at the N-terminus of proteins. The many diverse functions of myristoylation on proteins, such as membrane targeting, signal pathway regulation, and apoptosis, are largely due to the lipid modification, whereas abnormal or irregular myristoylation can lead to several pathological changes in the cell. To better understand the function of myristoylated sites and to correctly identify them in protein sequences, this study conducted a novel computational investigation of identifying myristoylation sites in protein sequences. A training dataset with 196 positive and 84 negative peptide segments was obtained. Four types of features derived from the peptide segments following the myristoylation sites were used to distinguish myristoylated and non-myristoylated sites. Then, feature selection methods including maximum relevance and minimum redundancy (mRMR) and incremental feature selection (IFS), together with a machine learning algorithm (the extreme learning machine method), were adopted to extract optimal features for identifying myristoylation sites in protein sequences, thereby building an optimal prediction model. As a result, 41 key features were extracted and used to build an optimal prediction model, whose effectiveness was further validated by its performance on a test dataset. Furthermore, detailed analyses were performed on the extracted 41 features to gain insight into the mechanism of myristoylation modification. This study provides a new computational method for identifying myristoylation sites in protein sequences, which we believe can be a useful tool to predict myristoylation sites. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
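The extreme learning machine used above trains unusually fast because the hidden layer is random and fixed; only the output weights are solved, in closed form, by least squares. The regression-style sketch below (tanh hidden layer, Moore-Penrose pseudo-inverse, XOR toy data) illustrates that mechanism only; the paper's peptide feature encoding is not reproduced, and hidden-layer size and data are assumptions.

```python
import numpy as np

def elm_fit(X, y, hidden=10, rng=None):
    """Train an extreme learning machine: random input weights stay fixed,
    and the output weights are the least-squares solution via pseudo-inverse."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = X.shape[1]
    W = rng.standard_normal((d, hidden))   # random, never trained
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    beta = np.linalg.pinv(H) @ y           # only these weights are solved
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# XOR: not linearly separable, but easy for a random nonlinear hidden layer.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])
W, b, beta = elm_fit(X, y, hidden=10)
pred = elm_predict(X, W, b, beta)
```

Because there is no iterative backpropagation, the whole fit is a single matrix solve, which is what makes ELMs attractive when IFS must retrain a model for many candidate feature subsets.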

  5. Half-Unit Insulin Pens: Disease Management in Patients With Diabetes Who Are Sensitive to Insulin.

    PubMed

    Klonoff, David C; Nayberg, Irina; Stauder, Udo; Oualali, Hamid; Domenger, Catherine

    2017-05-01

Insulin pens represent a significant technological advancement in diabetes management. While the vast majority have been designed with 1-U dosing increments, the improved accuracy and precision afforded by half-unit increments may be particularly significant for patients who are sensitive to insulin, including patients with low insulin requirements and those requiring more precise dose adjustments, such as the pediatric population. This review summarizes the functional characteristics of insulin half-unit pens (HUPs) and their effect on user experience. The literature search was restricted to articles published in English between January 1, 2000, and January 1, 2015. A total of 17 publications met the set criteria and were included in the review. Overall, the studies outlined characteristics of 4 insulin HUPs. In terms of functionality, the pens were generally similar, and all met the ISO 11608-1 criteria for accuracy. However, some had specific advantageous features in terms of size, weight, design, dialing torque, and injection force. Although limited, the currently available user-preference studies in children and adolescents with diabetes and their carers suggest that the selection of an HUP is likely to be influenced by a combination of such factors, in addition to the prescribed insulin and dosing regimen. Insulin HUPs are likely to be a key diabetes management tool for patients who are sensitive to insulin; specific pen features may further advance diabetes management in these populations.

  6. Evaluation of the Impact of AIRS Radiance and Profile Data Assimilation in Partly Cloudy Regions

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Srikishen, Jayanthi; Jedlovec, Gary

    2013-01-01

Improvements to global and regional numerical weather prediction have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but the impact on regional forecasts has been much smaller than on global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an analysis configuration identical to that used for the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional-scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) are run to examine the impact of AIRS radiances and retrieved profiles. Statistical evaluation of a long-term series of forecast runs will be compared along with preliminary results of in-depth investigations for select cases comparing the analysis increments in partly cloudy regions and short-term forecast impacts.

  7. Evaluation of the Impact of Atmospheric Infrared Sounder (AIRS) Radiance and Profile Data Assimilation in Partly Cloudy Regions

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Srikishen, Jayanthi; Jedlovec, Gary

    2013-01-01

Improvements to global and regional numerical weather prediction have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but the impact on regional forecasts has been much smaller than on global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an analysis configuration identical to that used for the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional-scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) are run to examine the impact of AIRS radiances and retrieved profiles. Statistical evaluation of 6 weeks of forecast runs will be compared along with preliminary results of in-depth investigations for select cases comparing the analysis increments in partly cloudy regions and short-term forecast impacts.

  8. Incremental Costs and Performance Benefits of Various Features of Concrete Pavements

    DOT National Transportation Integrated Search

    2004-04-01

    Various design features (such as dowel bars, tied shoulders, or drainable bases) may be added to a portland cement concrete (PCC) pavement design to improve its overall performance by maintaining a higher level of serviceability or by extending its s...

  9. MultiLIS: A Description of the System Design and Operational Features.

    ERIC Educational Resources Information Center

    Kelly, Glen J.; And Others

    1988-01-01

    Describes development, hardware requirements, and features of the MultiLIS integrated library software package. A system profile provides pricing information, operational characteristics, and technical specifications. Sidebars discuss MultiLIS integration structure, incremental architecture, and NCR Tower Computers. (4 references) (MES)

  10. Gene expression profiling gut microbiota in different races of humans

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Zhang, Yu-Hang; Huang, Tao; Cai, Yu-Dong

    2016-03-01

The gut microbiome is shaped and modified by the polymorphisms of microorganisms in the intestinal tract. Its composition shows strong individual specificity and may play a crucial role in the human digestive system and metabolism. Several factors can affect the composition of the gut microbiome, such as eating habits, living environment, and antibiotic usage. Thus, different races are characterized by different gut microbiome characteristics. In the present study, we studied the gut microbiomes of three different races, including individuals of Asian, European and American origin. The gut microbiome and the expression levels of gut microbiome genes were analyzed in these individuals. Advanced feature selection methods (minimum redundancy maximum relevance and incremental feature selection) and four machine-learning algorithms (random forest, nearest neighbor algorithm, sequential minimal optimization, Dagging) were employed to capture key differentially expressed genes. As a result, sequential minimal optimization was found to yield the best performance using 454 genes, which could effectively distinguish the gut microbiomes of the different races. Our analyses of the extracted genes support the widely accepted hypotheses that eating habits, living environments and metabolic levels in different races can influence the characteristics of the gut microbiome.
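The incremental feature selection (IFS) step mentioned above evaluates growing prefixes of a ranked feature list and keeps the prefix size with the best cross-validated performance. Below is a minimal sketch on synthetic data, assuming the features are already ranked and substituting a simple nearest-centroid classifier for the paper's SMO/random-forest learners; all names and numbers are invented for illustration.

```python
import numpy as np

def nearest_centroid_acc(X, y, folds=4):
    """Leave-chunk-out accuracy of a two-class nearest-centroid classifier."""
    n = len(y)
    idx = np.arange(n)
    hits = 0
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1) <
                np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        hits += int((pred == y[test]).sum())
    return hits / n

def incremental_feature_selection(X, y, ranked):
    """IFS: score the top-1, top-2, ... ranked features and return the
    prefix size with the best cross-validated accuracy."""
    scores = [nearest_centroid_acc(X[:, ranked[:k]], y)
              for k in range(1, len(ranked) + 1)]
    best_k = int(np.argmax(scores)) + 1
    return best_k, scores

# Synthetic data: two informative features followed by four pure-noise features.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 60)
signal = y[:, None] + 0.5 * rng.normal(size=(120, 2))
noise = rng.normal(size=(120, 4))
X = np.hstack([signal, noise])
ranked = [0, 1, 2, 3, 4, 5]  # assume features already ranked, informative first
best_k, scores = incremental_feature_selection(X, y, ranked)
print(best_k, [round(s, 2) for s in scores])
```

Accuracy typically peaks once the informative prefix is covered and degrades as noise features are appended, which is exactly the curve IFS inspects.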

  11. Gene expression profiling gut microbiota in different races of humans

    PubMed Central

    Chen, Lei; Zhang, Yu-Hang; Huang, Tao; Cai, Yu-Dong

    2016-01-01

The gut microbiome is shaped and modified by the polymorphisms of microorganisms in the intestinal tract. Its composition shows strong individual specificity and may play a crucial role in the human digestive system and metabolism. Several factors can affect the composition of the gut microbiome, such as eating habits, living environment, and antibiotic usage. Thus, different races are characterized by different gut microbiome characteristics. In the present study, we studied the gut microbiomes of three different races, including individuals of Asian, European and American origin. The gut microbiome and the expression levels of gut microbiome genes were analyzed in these individuals. Advanced feature selection methods (minimum redundancy maximum relevance and incremental feature selection) and four machine-learning algorithms (random forest, nearest neighbor algorithm, sequential minimal optimization, Dagging) were employed to capture key differentially expressed genes. As a result, sequential minimal optimization was found to yield the best performance using 454 genes, which could effectively distinguish the gut microbiomes of the different races. Our analyses of the extracted genes support the widely accepted hypotheses that eating habits, living environments and metabolic levels in different races can influence the characteristics of the gut microbiome. PMID:26975620

  12. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem

    PubMed Central

    Liu, Xunying; Zhang, Chao; Woodland, Phil; Fonteneau, Elisabeth

    2017-01-01

    There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental ‘machine states’, generated as the ASR analysis progresses over time, to the incremental ‘brain states’, measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain. PMID:28945744

  13. Identification of compound-protein interactions through the analysis of gene ontology, KEGG enrichment for proteins and molecular fragments of compounds.

    PubMed

    Chen, Lei; Zhang, Yu-Hang; Zheng, Mingyue; Huang, Tao; Cai, Yu-Dong

    2016-12-01

Compound-protein interactions play important roles in every cell via the recognition and regulation of specific functional proteins. The correct identification of compound-protein interactions can lead to a good comprehension of this complicated system and provide useful input for the investigation of various attributes of compounds and proteins. In this study, we attempted to understand this system by extracting properties from both proteins and compounds, in which proteins were represented by gene ontology and KEGG pathway enrichment scores and compounds were represented by molecular fragments. Advanced feature selection methods, including minimum redundancy maximum relevance and incremental feature selection, and the basic machine learning algorithm random forest were used to analyze these properties and extract core factors for the determination of actual compound-protein interactions. Compound-protein interactions reported in the Binding Database were used as positive samples. To improve the reliability of the results, the analytic procedure was executed five times using different negative samples. Simultaneously, five optimal prediction methods based on a random forest, yielding maximum MCCs of approximately 77.55%, were constructed and may be useful tools for the prediction of compound-protein interactions. This work provides new clues to understanding the system of compound-protein interactions by analyzing extracted core features. Our results indicate that compound-protein interactions are related to biological processes involving immune, developmental and hormone-associated pathways.
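The MCC figure quoted above is the Matthews correlation coefficient, computed from confusion-matrix counts as below. The counts in the example are hypothetical, chosen for illustration, and are not taken from the paper.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for illustration only.
print(round(mcc(tp=85, tn=90, fp=10, fn=15), 3))  # → 0.751
```

Unlike plain accuracy, MCC stays informative on the imbalanced positive/negative splits used in this kind of interaction-prediction study.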

  14. More DoD Oversight Needed for Purchases Made Through the Department of Energy

    DTIC Science & Technology

    2010-12-03

the selected servicing agency? Will significant elements of the work be contracted out or be done in-house? Is there a service fee/charge...the estimated cost and any fee of a cost-reimbursement contract. If the contract is incrementally funded, funds are obligated to cover the amount...allotted and any corresponding increment of fee. However, the FAR does not provide enough guidance on when contracts should be incrementally or fully

  15. Joint Precision Approach and Landing System Increment 1A (JPALS Inc 1A)

    DTIC Science & Technology

    2015-12-01

    Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-238 Joint Precision Approach and Landing System Increment 1A (JPALS Inc 1A) As of FY 2017...President’s Budget Defense Acquisition Management Information Retrieval (DAMIR) March 10, 2016 11:30:56 UNCLASSIFIED JPALS Inc 1A December 2015 SAR...Fiscal Year FYDP - Future Years Defense Program ICE - Independent Cost Estimate IOC - Initial Operational Capability Inc - Increment JROC - Joint

  16. Incremental learning of tasks from user demonstrations, past experiences, and vocal comments.

    PubMed

    Pardowitz, Michael; Knoop, Steffen; Dillmann, Ruediger; Zöllner, Raoul D

    2007-04-01

For many years the robotics community has envisioned robot assistants sharing the same environment with humans. It has become clear that such assistants have to interact with humans and should adapt to individual user needs. In particular, the wide variety of tasks robot assistants will face requires a highly adaptive and user-friendly programming interface. One possible solution to this programming problem is the learning-by-demonstration paradigm, in which the robot observes the execution of a task, acquires task knowledge, and reproduces it. In this paper, a system to record, interpret, and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for incremental reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the data, resulting in more general task knowledge. A measure for the assessment of the information content of task features is introduced. This measure of the relevance of certain features relies on both general background knowledge and task-specific knowledge gathered from the user demonstrations. Besides the autonomous estimation of feature relevance, speech comments made during execution that point out the relevance of features are considered as well. The results of the incremental growth of task knowledge as more task demonstrations become available, and its fusion with relevance information gained from speech comments, are demonstrated within the task of laying a table.

  17. SAMS Acceleration Measurements on Mir from May 1997 to June 1998 (NASA Increments 5, 6, and 7)

    NASA Technical Reports Server (NTRS)

    DeLombard, Richard

    1999-01-01

    During NASA Increments 5, 6, and 7 (May 1997 to June 1998), about eight gigabytes of acceleration data were collected by the Space Acceleration Measurement System (SAMS) onboard the Russian Space Station Mir. The data were recorded on twenty-seven optical disks which were returned to Earth on Orbiter missions STS-86, STS-89, and STS-91. During these increments, SAMS data were collected in the Priroda module to support various microgravity experiments. This report points out some of the salient features of the microgravity acceleration environment to which the experiments were exposed. This report presents an overview of the SAMS acceleration measurements recorded by 10 Hz and 100 Hz sensor heads. The analyses included herein complement those presented in previous Mir increment summary reports prepared by the Principal Investigator Microgravity Services project.

  18. Estimate of within population incremental selection through branch imbalance in lineage trees

    PubMed Central

    Liberman, Gilad; Benichou, Jennifer I.C.; Maman, Yaakov; Glanville, Jacob; Alter, Idan; Louzoun, Yoram

    2016-01-01

Incremental selection within a population, defined as limited fitness changes following mutation, is an important aspect of many evolutionary processes. Strongly advantageous or deleterious mutations are detected using the synonymous to non-synonymous mutation ratio. However, there are currently no precise methods to estimate incremental selection. Here we provide, for the first time, such a detailed method and show its precision in multiple cases of micro-evolution. The proposed method is a novel mixed lineage-tree/sequence-based method to detect within-population selection as defined by the effect of mutations on the average number of offspring. Specifically, we propose to measure the log of the ratio between the number of leaves in lineage tree branches following synonymous and non-synonymous mutations. The method requires a high enough number of sequences and a large enough number of independent mutations. It assumes that all mutations are independent events. It does not require a baseline model and is practically unaffected by sampling biases. We show the method's wide applicability by testing it on multiple cases of micro-evolution. We show that it can detect genes and inter-genic regions using the selection rate and detect selection pressures in viral proteins and in the immune response to pathogens. PMID:26586802
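The core statistic, the log ratio of leaf counts below branches following synonymous versus non-synonymous mutations, can be rendered as a toy calculation. The real method operates on reconstructed lineage trees; the leaf counts below are invented for illustration.

```python
import math

def incremental_selection_score(leaves_after_syn, leaves_after_nonsyn):
    """Log of the ratio of mean leaf counts below branches that follow
    synonymous vs. non-synonymous mutations. A score > 0 suggests
    non-synonymous changes leave fewer descendants (purifying selection);
    a score < 0 suggests positive selection."""
    mean_syn = sum(leaves_after_syn) / len(leaves_after_syn)
    mean_nonsyn = sum(leaves_after_nonsyn) / len(leaves_after_nonsyn)
    return math.log(mean_syn / mean_nonsyn)

# Hypothetical counts: branches after non-synonymous mutations leave fewer leaves.
score = incremental_selection_score([8, 6, 10, 7], [3, 2, 4, 3])
print(round(score, 2))  # → 0.95
```

Because the statistic is a ratio of subtree sizes within the same tree, it needs no baseline model, matching the abstract's claim.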

  19. Clarifying Interpersonal Heterogeneity in Borderline Personality Disorder Using Latent Mixture Modeling

    PubMed Central

    Wright, Aidan G.C.; Hallquist, Michael N.; Morse, Jennifer Q.; Scott, Lori N.; Stepp, Stephanie D.; Nolf, Kimberly A.; Pilkonis, Paul A.

    2013-01-01

Significant interpersonal impairment is a cardinal feature of borderline personality disorder (BPD). However, past research has demonstrated that the interpersonal profile associated with BPD varies across samples, providing evidence for considerable interpersonal heterogeneity. The current study used Inventory of Interpersonal Problems – Circumplex (IIP-C; Alden, Wiggins, & Pincus, 1990) scale scores to investigate interpersonal inhibitions and excesses in a large sample (N = 255) selected for significant borderline pathology. Results indicated that BPD symptom counts were unrelated to the primary dimensions of the IIP-C, but were related to generalized interpersonal distress. A latent class analysis clarified this finding by revealing six homogeneous interpersonal classes with prototypical profiles associated with Intrusive, Vindictive, Avoidant, Nonassertive, and moderate and severe Exploitable interpersonal problems. These classes differed in clinically relevant features (e.g., antisocial behaviors, self-injury, past suicide attempts). Findings are discussed in terms of the incremental clinical utility of the interpersonal circumplex model and the implications for developmental and nosological models of BPD. PMID:23514179

  20. Process Features in Writing: Internal Structure and Incremental Value over Product Features. Research Report. ETS RR-15-27

    ERIC Educational Resources Information Center

    Zhang, Mo; Deane, Paul

    2015-01-01

    In educational measurement contexts, essays have been evaluated and formative feedback has been given based on the end product. In this study, we used a large sample collected from middle school students in the United States to investigate the factor structure of the writing process features gathered from keystroke logs and the association of that…

  1. Incremental Upgrade of Legacy Systems (IULS)

    DTIC Science & Technology

    2001-04-01

analysis task employed SEI's Feature-Oriented Domain Analysis methodology (see FODA reference) and included several phases: • Context Analysis • Establish...Legacy, new Host and upgrade system and software. The Feature-Oriented Domain Analysis approach (FODA, see SUM References) was used for this step...Feature-Oriented Domain Analysis (FODA) Feasibility Study (CMU/SEI-90-TR-21, ESD-90-TR-222); Software Engineering Institute, Carnegie Mellon University

  2. Dynamic Constraint Satisfaction with Reasonable Global Constraints

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy

    2003-01-01

    Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.

  3. How Configural Is the Configural Superiority Effect? A Neuroimaging Investigation of Emergent Features in Visual Cortex

    PubMed Central

    Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.

    2017-01-01

    The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924

  4. Taking the Next Step: Combining Incrementally Valid Indicators to Improve Recidivism Prediction

    ERIC Educational Resources Information Center

    Walters, Glenn D.

    2011-01-01

    The possibility of combining indicators to improve recidivism prediction was evaluated in a sample of released federal prisoners randomly divided into a derivation subsample (n = 550) and a cross-validation subsample (n = 551). Five incrementally valid indicators were selected from five domains: demographic (age), historical (prior convictions),…

  5. Motor Controller System For Large Dynamic Range of Motor Operation

    NASA Technical Reports Server (NTRS)

    Howard, David E. (Inventor); Alhorn, Dean C. (Inventor); Smith, Dennis A. (Inventor); Dutton, Kenneth R. (Inventor); Paulson, Mitchell Scott (Inventor)

    2006-01-01

A motor controller system uses a rotary sensor with a plurality of signal conditioning units coupled to the rotary sensor. Each of these units, which is associated with a particular range of motor output shaft rotation rates, generates a feedback signal indicative of the position of the motor's output shaft. A controller (i) converts a selected motor output shaft rotation rate to a corresponding incremental amount of rotational movement for a selected fixed time period, (ii) selects, at periodic completions of the selected fixed time period, the feedback signal from one of the signal conditioning units for which the particular range of motor output shaft rotation rates associated therewith encompasses the selected motor output shaft rotation rate, and (iii) generates a motor drive signal based on a difference between the incremental amount of rotational movement and the feedback signal from the selected one of the signal conditioning units.
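The control loop described in this record can be paraphrased in code. This is a schematic sketch, not the patented implementation; the rate ranges, units, and function names are invented for illustration.

```python
def select_conditioner(rate, ranges):
    """Pick the signal-conditioning unit whose rate range covers the
    commanded output-shaft rotation rate (ranges as (low, high, unit_id))."""
    for low, high, unit in ranges:
        if low <= rate < high:
            return unit
    raise ValueError("no conditioning unit covers this rate")

def drive_signal(commanded_rate_rpm, period_s, feedback_deg):
    """Commanded increment (degrees of rotation per fixed period) minus the
    increment actually measured by the selected conditioning unit."""
    commanded_increment = commanded_rate_rpm * 360.0 / 60.0 * period_s
    return commanded_increment - feedback_deg

# Hypothetical conditioning units for low, mid, and high rate ranges (rpm).
ranges = [(0.0, 10.0, "low"), (10.0, 1000.0, "mid"), (1000.0, 1e6, "high")]
print(select_conditioner(120.0, ranges))    # "mid" unit covers 120 rpm
print(drive_signal(60.0, 0.01, 3.0))        # 3.6 deg commanded - 3.0 deg measured
```

Switching conditioning units per rate range is what gives the controller its large dynamic range: each unit only has to resolve increments within its own band.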

  6. Multiscale Characterization of the Probability Density Functions of Velocity and Temperature Increment Fields

    NASA Astrophysics Data System (ADS)

    DeMarco, Adam Ward

The turbulent motions within the atmospheric boundary layer exist over a wide range of spatial and temporal scales and are very difficult to characterize. Thus, to explore the behavior of such complex flow environments, it is customary to examine their properties from a statistical perspective. Utilizing the probability density functions of velocity and temperature increments, δu and δT, respectively, this work investigates their multiscale behavior to uncover unique traits that have yet to be thoroughly studied. Drawing on diverse datasets, including idealized wind tunnel experiments, atmospheric turbulence field measurements, multi-year ABL tower observations, and mesoscale model simulations, this study reveals remarkable similarities (and some differences) between the small- and larger-scale components of the increment probability density functions. This comprehensive analysis also utilizes a set of statistical distributions to showcase their ability to capture features of the probability density functions (pdfs) of the velocity and temperature increments across multiscale atmospheric motions. An approach is proposed for estimating these pdfs utilizing the maximum likelihood estimation (MLE) technique, which has not previously been applied to atmospheric data of this kind. Using this technique, we reveal the ability to estimate higher-order moments accurately with a limited sample size, which has been a persistent concern for atmospheric turbulence research. With the use of robust goodness-of-fit (GoF) metrics, we quantitatively assess the accuracy of the distributions against the diverse datasets. Through this analysis, it is shown that the normal inverse Gaussian (NIG) distribution is a prime candidate for estimating the increment pdf fields. Therefore, using the NIG model and its parameters, we display the variations in the increments over a range of scales, revealing some unique scale-dependent qualities under various stability and flow conditions.
This novel approach provides a method of characterizing increment fields with only four pdf parameters. We also investigate the capability of current state-of-the-art mesoscale atmospheric models to predict these features and highlight the potential for use in future model development. With the knowledge gained in this study, a number of applications can benefit from our methodology, including wind energy and optical wave propagation.
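The MLE fitting step can be illustrated on synthetic increment data. The sketch below fits a Laplace distribution, whose MLE has a closed form, as a heavy-tailed stand-in; fitting the NIG distribution itself would typically use a numerical optimizer such as scipy.stats.norminvgauss.fit. All data here are synthetic with known parameters.

```python
import numpy as np

def laplace_mle(x):
    """Closed-form maximum-likelihood estimates for a Laplace distribution:
    location = sample median, scale = mean absolute deviation from it."""
    loc = np.median(x)
    scale = np.mean(np.abs(x - loc))
    return loc, scale

# Synthetic heavy-tailed "increments" drawn with known loc=0.5, scale=2.0.
rng = np.random.default_rng(2)
increments = rng.laplace(loc=0.5, scale=2.0, size=50000)
loc, scale = laplace_mle(increments)
print(round(float(loc), 2), round(float(scale), 2))
```

The same recipe, maximize the log-likelihood of the candidate distribution over the observed increments, carries over to the four-parameter NIG model, where the optimization is numerical rather than closed-form.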

  7. Conversion of type of quantum well structure

    NASA Technical Reports Server (NTRS)

    Ning, Cun-Zheng (Inventor)

    2007-01-01

    A method for converting a Type 2 quantum well semiconductor material to a Type 1 material. A second layer of undoped material is placed between first and third layers of selectively doped material, which are separated from the second layer by undoped layers having small widths. Doping profiles are chosen so that a first electrical potential increment across a first layer-second layer interface is equal to a first selected value and/or a second electrical potential increment across a second layer-third layer interface is equal to a second selected value. The semiconductor structure thus produced is useful as a laser material and as an incident light detector material in various wavelength regions, such as a mid-infrared region.

  8. Conversion of Type of Quantum Well Structure

    NASA Technical Reports Server (NTRS)

    Ning, Cun-Zheng (Inventor)

    2007-01-01

    A method for converting a Type 2 quantum well semiconductor material to a Type 1 material. A second layer of undoped material is placed between first and third layers of selectively doped material, which are separated from the second layer by undoped layers having small widths. Doping profiles are chosen so that a first electrical potential increment across a first layer-second layer interface is equal to a first selected value and/or a second electrical potential increment across a second layer-third layer interface is equal to a second selected value. The semiconductor structure thus produced is useful as a laser material and as an incident light detector material in various wavelength regions, such as a mid-infrared region.

  9. Energy cost of wheel running in house mice: implications for coadaptation of locomotion and energy budgets.

    PubMed

    Koteja, P; Swallow, J G; Carter, P A; Garland, T

    1999-01-01

    Laboratory house mice (Mus domesticus) that had experienced 10 generations of artificial selection for high levels of voluntary wheel running ran about 70% more total revolutions per day than did mice from random-bred control lines. The difference resulted primarily from increased average velocities rather than from increased time spent running. Within all eight lines (four selected, four control), females ran more than males. Average daily running distances ranged from 4.4 km in control males to 11.6 km in selected females. Whole-animal food consumption was statistically indistinguishable in the selected and control lines. However, mice from selected lines averaged approximately 10% smaller in body mass, and mass-adjusted food consumption was 4% higher in selected lines than in controls. The incremental cost of locomotion (grams food/revolution), computed as the partial regression slope of food consumption on revolutions run per day, did not differ between selected and control mice. On a 24-h basis, the total incremental cost of running (covering a distance) amounted to only 4.4% of food consumption in the control lines and 7.5% in the selected ones. However, the daily incremental cost of time active is higher (15.4% and 13.1% of total food consumption in selected and control lines, respectively). If wheel running in the selected lines continues to increase mainly by increases in velocity, then constraints related to energy acquisition are unlikely to be an important factor limiting further selective gain. More generally, our results suggest that, in small mammals, a substantial evolutionary increase in daily movement distances can be achieved by increasing running speed, without remarkable increases in total energy expenditure.
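The incremental cost of locomotion in this record is defined as the partial regression slope of food consumption on revolutions per day. A minimal sketch of that estimator on synthetic data follows; the masses, distances, and cost value are invented, not the paper's measurements.

```python
import numpy as np

def partial_slope(food, mass, revs):
    """Partial regression slope of food consumption on revolutions/day,
    controlling for body mass, via ordinary least squares."""
    X = np.column_stack([np.ones_like(food), mass, revs])
    coef, *_ = np.linalg.lstsq(X, food, rcond=None)
    return coef[2]  # grams of food per revolution

# Synthetic mice: food intake driven by body mass plus a per-revolution cost.
rng = np.random.default_rng(3)
mass = rng.normal(25.0, 2.0, 200)        # body mass (g), hypothetical
revs = rng.normal(8000.0, 2000.0, 200)   # wheel revolutions per day, hypothetical
true_cost = 2.0e-4                       # assumed g food per revolution
food = 0.15 * mass + true_cost * revs + rng.normal(0.0, 0.2, 200)
print(round(float(partial_slope(food, mass, revs)), 5))
```

Including body mass in the design matrix is what makes the slope "incremental": it isolates the food cost of running from the maintenance cost of being a larger animal.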

  10. Don't Want to Look Dumb? The Role of Theories of Intelligence and Humanlike Features in Online Help Seeking.

    PubMed

    Kim, Sara; Zhang, Ke; Park, Daeun

    2018-02-01

    Numerous studies have shown that individuals' help-seeking behavior increases when a computerized helper is endowed with humanlike features in nonachievement contexts. In contrast, the current research suggests that anthropomorphic helpers are not universally conducive to help-seeking behavior in contexts of achievement, particularly among individuals who construe help seeking as a display of incompetence (i.e., entity theorists). Study 1 demonstrated that when entity theorists received help from an anthropomorphized (vs. a nonanthropomorphized) helper, they were more concerned about negative judgments from other people, whereas incremental theorists were not affected by anthropomorphic features. Study 2 showed that when help was provided by an anthropomorphized (vs. a nonanthropomorphized) helper, entity theorists were less likely to seek help, even at the cost of lower performance. In contrast, incremental theorists' help-seeking behavior and task performance were not affected by anthropomorphism. This research deepens the current understanding of the role of anthropomorphic computerized helpers in online learning contexts.

  11. Coated graphite articles useful in metallurgical processes and method for making same

    DOEpatents

    Holcombe, Cressie E.; Bird, Eugene L.

    1995-01-01

    Graphite articles, including crucibles and molds used in metallurgical processes involving the melting and handling of molten metals and alloys that are reactive with carbon when in a molten state, at process temperatures up to about 2000 °C, are provided with a multiple-layer coating for inhibiting carbon diffusion from the graphite into the molten metal or alloy. The coating is provided by a first coating increment of a carbide-forming metal on selected surfaces of the graphite, a second coating increment of a carbide-forming metal and a refractory metal oxide, and a third coating increment of a refractory metal oxide. The second coating increment provides thermal-shock-absorbing characteristics to prevent delamination of the coating during temperature cycling. A wash coat of unstabilized zirconia or titanium nitride can be applied onto the third coating increment to facilitate release of melts from the coating.

  12. Complexity of the heart rhythm after heart transplantation by entropy of transition network for RR-increments of RR time intervals between heartbeats.

    PubMed

    Makowiec, Danuta; Struzik, Zbigniew; Graff, Beata; Wdowczyk-Szulc, Joanna; Zarczynska-Buchnowiecka, Marta; Gruchala, Marcin; Rynkiewicz, Andrzej

    2013-01-01

    Network models have been used to capture, represent and analyse characteristics of living organisms and general properties of complex systems. The use of network representations in the characterization of time series complexity is a relatively new but quickly developing branch of time series analysis. In particular, beat-to-beat heart rate variability can be mapped out in a network of RR-increments, which is a directed and weighted graph with vertices representing RR-increments and the edges of which correspond to subsequent increments. We evaluate entropy measures selected from these network representations in records of healthy subjects and heart transplant patients, and provide an interpretation of the results.
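    The RR-increment network construction can be sketched as follows; the binning width and the use of Shannon entropy over edge weights are illustrative assumptions, not the authors' exact definitions:

```python
# Minimal sketch of the transition network described above: vertices are
# (binned) RR-increments, directed weighted edges count transitions between
# subsequent increments, and a network entropy is the Shannon entropy of the
# edge-weight distribution.
from collections import Counter
from math import log2

def transition_entropy(rr_ms, bin_ms=8):
    """Entropy (bits) of the transition distribution of binned RR-increments."""
    increments = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    binned = [round(d / bin_ms) for d in increments]       # vertex labels
    edges = Counter(zip(binned, binned[1:]))               # directed, weighted edges
    total = sum(edges.values())
    return -sum((w / total) * log2(w / total) for w in edges.values())

rr = [800, 810, 805, 820, 815, 825, 810, 800, 805, 815]   # synthetic RR intervals (ms)
print(transition_entropy(rr))
```

    A more regular rhythm concentrates weight on few edges and lowers the entropy; a more variable rhythm spreads it out and raises it.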

  13. 36 CFR 220.7 - Environmental assessment and decision notice.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... modifications and incremental design features developed through the analysis process to develop the alternatives... proposed action and any alternatives together in a comparative description or describe the impacts of each...

  14. 36 CFR 220.7 - Environmental assessment and decision notice.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... modifications and incremental design features developed through the analysis process to develop the alternatives... proposed action and any alternatives together in a comparative description or describe the impacts of each...

  15. 36 CFR 220.7 - Environmental assessment and decision notice.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... modifications and incremental design features developed through the analysis process to develop the alternatives... proposed action and any alternatives together in a comparative description or describe the impacts of each...

  16. 36 CFR 220.7 - Environmental assessment and decision notice.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... modifications and incremental design features developed through the analysis process to develop the alternatives... proposed action and any alternatives together in a comparative description or describe the impacts of each...

  17. 36 CFR 220.7 - Environmental assessment and decision notice.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... modifications and incremental design features developed through the analysis process to develop the alternatives... proposed action and any alternatives together in a comparative description or describe the impacts of each...

  18. Support vector machine incremental learning triggered by wrongly predicted samples

    NASA Astrophysics Data System (ADS)

    Tang, Ting-long; Guan, Qiu; Wu, Yi-rong

    2018-05-01

    According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may cause old samples to migrate between the SV set and the non-support-vector (NSV) set, at which point the learning model should be updated from the SVs. However, it is not known in advance which of the old samples will move between the two sets. Moreover, updating the model after every sample is often unnecessary: it does little for accuracy but slows down training. Consequently, how new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, strongly influence the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance, combining high speed with good accuracy.
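    The triggering idea, retraining only when a newly arriving sample is wrongly predicted, can be sketched as below. A simple perceptron stands in for the SVM solver here; the trigger logic, not the learner, is the point:

```python
# Illustrative sketch of error-triggered incremental learning: keep the
# current model and only run a (re)training step when a newly arriving
# sample is wrongly predicted. The perceptron is a stand-in for SVM fitting.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def train(samples, epochs=50, lr=0.1):
    """Batch retraining on all retained samples (stand-in for an SVM solve)."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in samples:
            if predict(w, b, x) != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def incremental_fit(stream):
    retained, updates = [], 0
    w, b = None, None
    for x, y in stream:
        if w is not None and predict(w, b, x) == y:
            continue                      # correctly predicted: skip the update
        retained.append((x, y))           # wrongly predicted: retain and retrain
        w, b = train(retained)
        updates += 1
    return (w, b), updates

# Linearly separable toy stream: label is the sign of x0 - x1.
stream = [((2.0, 1.0), 1), ((1.0, 2.0), -1), ((3.0, 0.5), 1),
          ((0.5, 3.0), -1), ((4.0, 1.0), 1), ((1.0, 4.0), -1)]
model, updates = incremental_fit(stream)
print("retraining steps:", updates, "of", len(stream), "samples")
```

    On this toy stream only the first two samples trigger retraining; the remaining four are already predicted correctly, so the model is left untouched, which is exactly the efficiency argument made in the abstract.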

  19. Safeguarding End-User Military Software

    DTIC Science & Technology

    2014-12-04

    product lines using compositional symbolic execution [17] Software product lines are families of products defined by feature commonality and variability, with a well-managed asset base. Recent work in testing of software product lines has exploited similarities across development phases to reuse...feature dependence graph to extract the set of possible interaction trees in a product family. It composes these to incrementally and symbolically

  20. Relative Pose Estimation Using Image Feature Triplets

    NASA Astrophysics Data System (ADS)

    Chuang, T. Y.; Rottensteiner, F.; Heipke, C.

    2015-03-01

    A fully automated reconstruction of the trajectory of image sequences using point correspondences is turning into routine practice. However, there are cases in which point features are hardly detectable or cannot be localized in a stable distribution, and consequently lead to an insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effect of the feature integration on the relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method first establishes hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one, and then quantifies the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. To be able to work with image sequences as well, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.

  1. Dark Flows in Newton Crater Extending During Summer Six-Image Sequence

    NASA Image and Video Library

    2011-08-04

    This image comes from observations of Newton crater by the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter, where features appear and incrementally grow during warm seasons and fade in cold seasons.

  2. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    PubMed

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), the larger and smaller of background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia, Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract).], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145.] determines the appropriateness of one or other contrast metric for reaction time.
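    The two metrics under discussion are easy to compare numerically; the luminance values below are arbitrary illustrations, not the paper's data:

```python
# Weber contrast dL/L versus the spatial luminance ratio Lmax/Lmin.
# For an increment and a decrement of the same |Weber contrast|, the
# luminance ratios differ, which is why the two metrics can order
# stimuli differently when plotting reaction time.

def weber_contrast(L_stim, L_bg):
    return (L_stim - L_bg) / L_bg

def luminance_ratio(L_stim, L_bg):
    return max(L_stim, L_bg) / min(L_stim, L_bg)

L_bg = 100.0
increment, decrement = 120.0, 80.0   # same |Weber contrast| = 0.2

print(weber_contrast(increment, L_bg), weber_contrast(decrement, L_bg))    # 0.2 -0.2
print(luminance_ratio(increment, L_bg), luminance_ratio(decrement, L_bg))  # 1.2 1.25
```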

  3. Determination of Azimuth Angle at Burnout for Placing a Satellite Over a Selected Earth Position

    NASA Technical Reports Server (NTRS)

    Skopinski, T. H.; Johnson, Katherine G.

    1960-01-01

    Expressions are presented for relating the satellite position in the orbital plane with the projected latitude and longitude on a rotating earth surface. An expression is also presented for determining the azimuth angle at a given burnout position on the basis of a selected passage position on the earth's surface. Examples are presented of a satellite launched eastward and one launched westward, each passing over a selected position sometime after having completed three orbits. Incremental changes from the desired latitude and longitude due to the earth's oblateness are included in the iteration for obtaining the azimuth angles of the two examples. The results for both cases are then compared with those obtained from a computing program using an oblate rotating earth. Changes from the selected latitude and longitude resulting from incremental changes from the burn-out azimuth angle and latitude are also analyzed.

  4. Features of Scots pine radial growth in conditions of provenance trial.

    NASA Astrophysics Data System (ADS)

    Kuzmin, Sergey; Kuzmina, Nina

    2013-04-01

    A provenance trial of Scots pine in the Boguchany forestry of Krasnoyarsk krai is conducted on two different soils: dark-grey loam forest soil and sod-podzol sandy soil. In the dry conditions of the sandy soil, a complex of factors unfavourable to plant growth and development appears, which can reduce resistance to disease. Sandy soils in different climatic zones share such common traits as low absorbing capacity, poor mineral nutrition, low microbiological activity and moisture capacity, and very high water permeability. Nevertheless, Scots pine trees growing in such conditions may have certain advantages and prospects for use. Given climate change (global warming) and the more frequent occurrence of dry seasons, the study of Scots pine growth on sandy soil has become urgent. The purpose of this work is to reveal the radial growth features of Scots pine provenances of different origin in the dry conditions of sandy soil and to assess the influence of external factors. The main radial-growth feature of most of the studied pine provenances on sandy soil is a large variation in increment, with a distinct decline at age 25 and, in a number of cases, missing tree rings. The cause is a complex of factors: a deficit of June precipitation followed by an outbreak of fungal disease. The "frost rings" found in 1992 in all trees of the studied climatypes are the consequence of a temperature drop from 23 down to 2 degrees Celsius between May 21 and June 2. Promising climatypes with the largest radial increments and the least sensitivity to fungal disease were identified. The Eniseysk and Vikhorevka provenances (from Krasnoyarsk krai and Irkutsk oblast) have the largest radial increments, the least sensitivity to Cenangium dieback, and the smallest increment declines. These climatypes belong to the group of promising provenances and are currently recommended for wide trial in the region for future use in plantation forestry.
    The Kandalaksha (Murmansk oblast) climatype from the northern taiga, with good resistance to the pathogen, showed a nonsignificant decline in radial increment during the epiphytoty in comparison with local and southern climatypes. The southern Chemal provenance (Altai), being nonresistant to Cenangium dieback, lost more tree rings after the disease than the others.

  5. TargetM6A: Identifying N6-Methyladenosine Sites From RNA Sequences via Position-Specific Nucleotide Propensities and a Support Vector Machine.

    PubMed

    Li, Guang-Qing; Liu, Zi; Shen, Hong-Bin; Yu, Dong-Jun

    2016-10-01

    As one of the most ubiquitous post-transcriptional modifications of RNA, N6-methyladenosine (m6A) plays an essential role in many vital biological processes. The identification of m6A sites in RNAs is significantly important for both basic biomedical research and practical drug development. In this study, we designed a computational method, called TargetM6A, to rapidly and accurately target m6A sites solely from primary RNA sequences. Two new features, i.e., position-specific nucleotide/dinucleotide propensities (PSNP/PSDP), are introduced and combined with the traditional nucleotide composition (NC) feature to formulate RNA sequences. The extracted features are further optimized to obtain a much more compact and discriminative feature subset by applying an incremental feature selection (IFS) procedure. Based on the optimized feature subset, we trained TargetM6A on the training dataset with a support vector machine (SVM) as the prediction engine. We compared the proposed TargetM6A method with existing methods for predicting m6A sites by performing stringent jackknife tests and independent validation tests on benchmark datasets. The experimental results show that the proposed TargetM6A method outperformed the existing methods for predicting m6A sites and remarkably improved the prediction performances, with MCC = 0.526 and AUC = 0.818. We also provide a user-friendly web server for TargetM6A, which is publicly accessible for academic use at http://csbio.njust.edu.cn/bioinf/TargetM6A.
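    The incremental feature selection (IFS) step can be sketched generically: evaluate growing prefixes of a feature ranking and keep the best-scoring prefix. The scoring function below is a synthetic stand-in; in the paper it would be SVM cross-validated accuracy:

```python
# Generic sketch of an IFS procedure: features are first ranked, then
# evaluated as growing prefixes of the ranking, and the prefix with the
# best score is kept as the optimized feature subset.

def incremental_feature_selection(ranked_features, evaluate):
    best_score, best_subset = float("-inf"), []
    subset = []
    for f in ranked_features:
        subset.append(f)
        score = evaluate(subset)
        if score > best_score:
            best_score, best_subset = score, subset[:]
    return best_subset, best_score

# Synthetic evaluator: the first three features help, the rest hurt, so
# the optimal prefix has length 3. Feature names are hypothetical.
gains = {"f1": 0.30, "f2": 0.20, "f3": 0.10, "f4": -0.05, "f5": -0.05}
evaluate = lambda s: 0.5 + sum(gains[f] for f in s)

subset, score = incremental_feature_selection(["f1", "f2", "f3", "f4", "f5"], evaluate)
print(subset, round(score, 2))
```

    The procedure is O(n) evaluations for n ranked features, which is what makes IFS cheap enough to run on top of an expensive learner such as an SVM.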

  6. Defense Acquisitions: Assessments of Selected Weapon Programs

    DTIC Science & Technology

    2016-03-01

    Increment 3; Indirect Fire Protection Capability Increment 2-Intercept Block 1 (IFPC Inc 2-I Block 1); Improved Turbine Engine Program (ITEP...ITEP: Improved Turbine Engine Program; JAGM: Joint Air-to-Ground Missile; JLTV: Joint Light Tactical Vehicle; JSTARS Recap: Joint Surveillance Target...Attack Radar System Recap 09/2017; Improved Turbine Engine Program 06/2018; Amphibious Ship Replacement 09/2018; Advanced Pilot

  7. Defense Acquisitions: Assessments of Selected Weapon Programs

    DTIC Science & Technology

    2017-03-01

    PAC-3 MSE); Warfighter Information Network-Tactical (WIN-T) Increment 2; Improved Turbine Engine Program (ITEP); Long Range Precision Fires...Unmanned Air System 05/2018; Joint Surveillance Target Attack Radar System Recapitalization 10/2017; Improved Turbine Engine Program TBD...Network-Tactical (WIN-T) Increment 2; 1-page assessments: Improved Turbine Engine Program (ITEP); Long Range Precision Fires (LRPF)

  8. Lack of genetic variation in tree ring delta13C suggests a uniform, stomatally-driven response to drought stress across Pinus radiata genotypes.

    PubMed

    Rowell, Douglas M; Ades, Peter K; Tausz, Michael; Arndt, Stefan K; Adams, Mark A

    2009-02-01

    We assessed the variation in delta(13)C signatures of Pinus radiata D. Don stemwood taken from three genetic trials in southern Australia. We sought to determine the potential of using delta(13)C signatures as selection criteria for drought tolerance. Increment cores were taken from P. radiata and were used to determine the basal area increment and the delta(13)C signature of extracted cellulose. Both growth increment and cellulose delta(13)C were affected by water availability. Growth increment and delta(13)C were negatively correlated suggesting that growth was water-limited. While there was significant genetic variation in growth, there was no significant genetic variation in cellulose delta(13)C of tree rings. This suggests that different genotypes of P. radiata display significant differences in growth and yet respond similarly to drought stress. The delta(13)C response to drought stress was more due to changes in stomatal conductance than to the variation in photosynthetic capacity, and this may explain the lack of genetic variation in delta(13)C. The lack of genetic variation in cellulose delta(13)C of tree rings precludes its use as a selection criterion for drought tolerance among P. radiata genotypes.

  9. Esthetic evaluation of incisor inclination in smiling profiles with respect to mandibular position.

    PubMed

    Zarif Najafi, Hooman; Oshagh, Morteza; Khalili, Mohammad Hassan; Torkan, Sepideh

    2015-09-01

    The smile is a key facial expression, and a careful assessment of the facial profile in smiling is an essential part of a complete orthodontic diagnosis. The aim of this study was to determine the preferred maxillary incisor inclination in the smile profile with regard to different mandibular positions. A smiling profile photograph of a man with normal facial profile features was altered digitally to obtain 3 different mandibular sagittal positions in 4-mm decrements or increments from -4 to +4 mm. In each mandibular position, the inclination of the maxillary incisors was changed from -10° to +10° in 5° increments. A total of 234 raters (72 senior dental students, 24 orthodontists, 21 maxillofacial surgeons, 25 prosthodontists, and 92 laypeople) were asked to score each photograph using a Likert-type rating scale. Mann-Whitney, Kruskal-Wallis, and intraclass correlation coefficient tests were used to analyze the data. In retruded and protruded mandibles, normal incisor inclination and the most retroclined incisors were selected as the most and the least attractive images, respectively, by almost all groups. With an orthognathic mandible, the image with the most retroclined incisors was selected as the least attractive, but the raters were not unanimous regarding the most attractive image. The intraclass correlation coefficient was 0.82 (high level of agreement). Also, the sex of the raters had no effect on the rating of the photographs. It is crucial to establish a normal incisor inclination, especially in patients with a mandibular deficiency or excess. An excessive maxillary incisor lingual inclination should be avoided regardless of the mandibular position. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  10. The prognostic value of CT radiomic features for patients with pulmonary adenocarcinoma treated with EGFR tyrosine kinase inhibitors

    PubMed Central

    Kim, Hyungjin; Park, Sang Joon; Kim, Miso; Kim, Tae Min; Kim, Dong-Wan; Heo, Dae Seog; Goo, Jin Mo

    2017-01-01

    Purpose To determine if the radiomic features on CT can predict progression-free survival (PFS) in epidermal growth factor receptor (EGFR) mutant adenocarcinoma patients treated with first-line EGFR tyrosine kinase inhibitors (TKIs) and to identify the incremental value of radiomic features over conventional clinical factors in PFS prediction. Methods In this institutional review board–approved retrospective study, pretreatment contrast-enhanced CT and first follow-up CT after initiation of TKIs were analyzed in 48 patients (M:F = 23:25; median age: 61 years). Radiomic features at baseline, at 1st first follow-up, and the percentage change between the two were determined. A Cox regression model was used to predict PFS with nonredundant radiomic features and clinical factors, respectively. The incremental value of radiomic features over the clinical factors in PFS prediction was also assessed by way of a concordance index. Results Roundness (HR: 3.91; 95% CI: 1.72, 8.90; P = 0.001) and grey-level nonuniformity (HR: 3.60; 95% CI: 1.80, 7.18; P<0.001) were independent predictors of PFS. For clinical factors, patient age (HR: 2.11; 95% CI: 1.01, 4.39; P = 0.046), baseline tumor diameter (HR: 1.03; 95% CI: 1.01, 1.05; P = 0.002), and treatment response (HR: 0.46; 95% CI: 0.24, 0.87; P = 0.017) were independent predictors. The addition of radiomic features to clinical factors significantly improved predictive performance (concordance index; combined model = 0.77, clinical-only model = 0.69, P<0.001). Conclusions Radiomic features enable PFS estimation in EGFR mutant adenocarcinoma patients treated with first-line EGFR TKIs. Radiomic features combined with clinical factors provide significant improvement in prognostic performance compared with using only clinical factors. PMID:29099855

  11. Dental caries increments and related factors in children with type 1 diabetes mellitus.

    PubMed

    Siudikiene, J; Machiulskiene, V; Nyvad, B; Tenovuo, J; Nedzelskiene, I

    2008-01-01

    The aim of this study was to analyse possible associations between caries increments and selected caries determinants in children with type 1 diabetes mellitus and their age- and sex-matched non-diabetic controls, over 2 years. A total of 63 (10-15 years old) diabetic and non-diabetic pairs were examined for dental caries, oral hygiene and salivary factors. Salivary flow rates, buffer effect, concentrations of mutans streptococci, lactobacilli, yeasts, total IgA and IgG, protein, albumin, amylase and glucose were analysed. Means of 2-year decayed/missing/filled surface (DMFS) increments were similar in diabetics and their controls. Over the study period, both unstimulated and stimulated salivary flow rates remained significantly lower in diabetic children compared to controls. No differences were observed in the counts of lactobacilli, mutans streptococci or yeast growth during follow-up, whereas salivary IgA, protein and glucose concentrations were higher in diabetics than in controls throughout the 2-year period. Multivariable linear regression analysis showed that children with higher 2-year DMFS increments were older at baseline and had higher salivary glucose concentrations than children with lower 2-year DMFS increments. Likewise, higher 2-year DMFS increments in diabetics versus controls were associated with greater increments in salivary glucose concentrations in diabetics. Higher increments in active caries lesions in diabetics versus controls were associated with greater increments of dental plaque and greater increments of salivary albumin. Our results suggest that, in addition to dental plaque as a common caries risk factor, diabetes-induced changes in salivary glucose and albumin concentrations are indicative of caries development among diabetics. Copyright 2008 S. Karger AG, Basel.

  12. Cascaded face alignment via intimacy definition feature

    NASA Astrophysics Data System (ADS)

    Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin

    2017-09-01

    Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment by using a locally lightweight feature, namely intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients feature and the scale-invariant feature transform feature, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when testing on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed, 20% improvement in terms of alignment accuracy and saves an order of magnitude on memory requirement.
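    The cascaded-regression idea behind such aligners can be caricatured per coordinate: each stage predicts a shape increment that removes part of the remaining residual. The fixed-fraction "regressor" below is a toy stand-in, not the paper's random forest:

```python
# Toy sketch of cascaded regression for face alignment: the shape estimate
# is refined stage by stage, each stage adding a predicted shape increment.
# Here every stage "predicts" a fixed fraction of the residual, the simplest
# possible model of a trained increment regressor.

def cascade(initial_shape, target_shape, stages=5, learn_rate=0.5):
    shape = list(initial_shape)
    history = [max(abs(s - t) for s, t in zip(shape, target_shape))]
    for _ in range(stages):
        # Stage "regressor": predict an increment toward the target shape.
        increment = [learn_rate * (t - s) for s, t in zip(shape, target_shape)]
        shape = [s + d for s, d in zip(shape, increment)]
        history.append(max(abs(s - t) for s, t in zip(shape, target_shape)))
    return shape, history

# Hypothetical 3-coordinate "shape"; the alignment error shrinks geometrically.
shape, err = cascade([0.0, 0.0, 0.0], [10.0, -4.0, 6.0])
print([round(e, 4) for e in err])
```

    The geometric error decay is why a handful of cascade stages suffices in practice, provided each stage regressor captures even a fraction of the residual.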

  13. Temporal Surface Reconstruction

    DTIC Science & Technology

    1991-05-03

    and the convergence cannot be guaranteed. Maybank [68] investigated alternative incremental schemes for the estimation of feature locations from a...depth from image sequences. International Journal of Computer Vision, 3, 1989. [68] S. J. Maybank . Filter based estimates of depth. In Proceedings of the

  14. The Incremental Value of Subjective and Quantitative Assessment of 18F-FDG PET for the Prediction of Pathologic Complete Response to Preoperative Chemoradiotherapy in Esophageal Cancer.

    PubMed

    van Rossum, Peter S N; Fried, David V; Zhang, Lifei; Hofstetter, Wayne L; van Vulpen, Marco; Meijer, Gert J; Court, Laurence E; Lin, Steven H

    2016-05-01

    A reliable prediction of a pathologic complete response (pathCR) to chemoradiotherapy before surgery for esophageal cancer would enable investigators to study the feasibility and outcome of an organ-preserving strategy after chemoradiotherapy. So far no clinical parameters or diagnostic studies are able to accurately predict which patients will achieve a pathCR. The aim of this study was to determine whether subjective and quantitative assessment of baseline and postchemoradiation (18)F-FDG PET can improve the accuracy of predicting pathCR to preoperative chemoradiotherapy in esophageal cancer beyond clinical predictors. This retrospective study was approved by the institutional review board, and the need for written informed consent was waived. Clinical parameters along with subjective and quantitative parameters from baseline and postchemoradiation (18)F-FDG PET were derived from 217 esophageal adenocarcinoma patients who underwent chemoradiotherapy followed by surgery. The associations between these parameters and pathCR were studied in univariable and multivariable logistic regression analysis. Four prediction models were constructed and internally validated using bootstrapping to study the incremental predictive values of subjective assessment of (18)F-FDG PET, conventional quantitative metabolic features, and comprehensive (18)F-FDG PET texture/geometry features, respectively. The clinical benefit of (18)F-FDG PET was determined using decision-curve analysis. A pathCR was found in 59 (27%) patients. A clinical prediction model (corrected c-index, 0.67) was improved by adding (18)F-FDG PET-based subjective assessment of response (corrected c-index, 0.72). This latter model was slightly improved by the addition of 1 conventional quantitative metabolic feature only (i.e., postchemoradiation total lesion glycolysis; corrected c-index, 0.73), and even more by subsequently adding 4 comprehensive (18)F-FDG PET texture/geometry features (corrected c-index, 0.77). 
However, at a decision threshold of 0.9 or higher, representing a clinically relevant predictive value for pathCR at which one may be willing to omit surgery, there was no clear incremental value. Subjective and quantitative assessment of (18)F-FDG PET provides statistical incremental value for predicting pathCR after preoperative chemoradiotherapy in esophageal cancer. However, the discriminatory improvement beyond clinical predictors does not translate into a clinically relevant benefit that could change decision making. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  15. Incremental benefit of three-dimensional transesophageal echocardiography in the assessment of a primary pericardial hemangioma.

    PubMed

    Arisha, Mohammed J; Hsiung, Ming C; Nanda, Navin C; ElKaryoni, Ahmed; Mohamed, Ahmed H; Wei, Jeng

    2017-08-01

    Hemangiomas are rarely found in the heart and pericardial involvement is even more rare. We report a case of primary pericardial hemangioma, in which three-dimensional transesophageal echocardiography (3DTEE) provided incremental benefit over standard two-dimensional images. Our case also highlights the importance of systematic cropping of the 3D datasets in making a diagnosis of pericardial hemangioma with a greater degree of certainty. In addition, we also provide a literature review of the features of cardiac/pericardial hemangiomas in a tabular form. © 2017, Wiley Periodicals, Inc.

  16. Frequency-selective augmenting responses by short-term synaptic depression in cat neocortex

    PubMed Central

    Houweling, Arthur R; Bazhenov, Maxim; Timofeev, Igor; Grenier, François; Steriade, Mircea; Sejnowski, Terrence J

    2002-01-01

    Thalamic stimulation at frequencies between 5 and 15 Hz elicits incremental or ‘augmenting’ cortical responses. Augmenting responses can also be evoked in cortical slices and isolated cortical slabs in vivo. Here we show that a realistic network model of cortical pyramidal cells and interneurones including short-term plasticity of inhibitory and excitatory synapses replicates the main features of augmenting responses as obtained in isolated slabs in vivo. Repetitive stimulation of synaptic inputs at frequencies around 10 Hz produced postsynaptic potentials that grew in size and carried an increasing number of action potentials resulting from the depression of inhibitory synaptic currents. Frequency selectivity was obtained through the relatively weak depression of inhibitory synapses at low frequencies, and strong depression of excitatory synapses together with activation of a calcium-activated potassium current at high frequencies. This network resonance is a consequence of short-term synaptic plasticity in a network of neurones without intrinsic resonances. These results suggest that short-term plasticity of cortical synapses could shape the dynamics of synchronized oscillations in the brain. PMID:12122156
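    The depression mechanism can be sketched with a Tsodyks-Markram-style resource model (an assumption here; the paper uses a full conductance-based network): each presynaptic spike consumes a fraction U of the synaptic resource, which recovers with time constant tau_rec, so the steady-state PSP amplitude falls as stimulation frequency rises:

```python
# Minimal short-term synaptic depression model: a resource R in [0, 1] is
# depleted by a fraction U at each spike and recovers exponentially between
# spikes. The steady-state amplitude (relative to the first PSP) is small at
# high stimulation frequency and close to 1 at low frequency, the substrate
# of the frequency selectivity described above. Parameters are illustrative.
from math import exp

def steady_state_amplitude(freq_hz, U=0.5, tau_rec=0.8):
    """Relative PSP amplitude for a long spike train at freq_hz (Hz)."""
    dt = 1.0 / freq_hz
    R = 1.0
    for _ in range(100):                              # iterate to steady state
        R = R * (1 - U) * exp(-dt / tau_rec) + (1 - exp(-dt / tau_rec))
    return R                                          # amplitude is U*R, so relative amplitude = R

for f in (1, 10, 40):
    print(f, "Hz:", round(steady_state_amplitude(f), 3))
```

    Note that this toy model only reproduces depression; the frequency *selectivity* in the abstract additionally requires the weaker depression of inhibitory synapses at low rates and the calcium-activated potassium current at high rates.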

  17. Numerical Simulation of Current Artillery Charges Using the TDNOVA Code.

    DTIC Science & Technology

    1986-06-01

    behavior was occasionally observed, particularly near the ends of the charge and particularly at increment-to-increment interfaces. Rather than expanding...between the charge sidewalls and the tube, had been observed at an early date by Kent.3 The influence of axial ullage, or spaces between the ends of...subsided to within a user-selectable tolerance, the model is converted to a quasi-two-dimensional representation based on coupled regions of coaxial one

  18. Combining Accuracy and Efficiency: An Incremental Focal-Point Method Based on Pair Natural Orbitals.

    PubMed

    Fiedler, Benjamin; Schmitz, Gunnar; Hättig, Christof; Friedrich, Joachim

    2017-12-12

    In this work, we present a new pair natural orbital (PNO)-based incremental scheme to calculate CCSD(T) and CCSD(T0) reaction, interaction, and binding energies. We perform an extensive analysis, which shows small incremental errors similar to previous non-PNO calculations. Furthermore, slight PNO errors are obtained by using T_PNO = T_TNO with appropriate values of 10^-7 to 10^-8 for reactions and 10^-8 for interaction or binding energies. The combination with the efficient MP2 focal-point approach yields chemical accuracy relative to the complete basis-set (CBS) limit. In this method, small basis sets (cc-pVDZ, def2-TZVP) for the CCSD(T) part are sufficient in the case of reactions or interactions, while somewhat larger ones (e.g., (aug)-cc-pVTZ) are necessary for molecular clusters. For these larger basis sets, we show the very high efficiency of our scheme. We obtain not only tremendous decreases in wall times (i.e., factors >10^2) due to the parallelization of the increment calculations, and in total times due to the application of PNOs (i.e., compared to the normal incremental scheme), but also smaller total times with respect to the standard PNO method. In this way, our new method combines excellent accuracy with very high efficiency and, by separating the full computation into several small increments, extends applicability to larger systems.

  19. Real-time on-line space research laboratory environment monitoring with off-line trend and prediction analysis

    NASA Astrophysics Data System (ADS)

    Jules, Kenol; Lin, Paul P.

    2007-06-01

    With the International Space Station currently operational, a significant amount of acceleration data is down-linked, processed and analyzed on the ground daily for characterization of the space station's reduced-gravity environment, verification of vehicle design requirements and science data collection. To help understand the impact of the unique spacecraft environment on the science data, an artificial intelligence monitoring system was developed that detects, in near real time, any change in the reduced-gravity environment that could affect ongoing experiments. Using a dynamic graphical display, the monitoring system allows science teams, at any time and any location, to see the active vibration disturbances, such as pumps, fans, compressors, crew exercise, re-boost and extra-vehicular activities, that might impact the reduced-gravity environment the experiments are exposed to. The monitoring system can detect both known and unknown vibratory disturbance activities. It can also perform trend analysis and prediction by analyzing past data for selected disturbances collected onboard the station over many increments (an increment usually lasts 6 months). This feature can be used to monitor the health of onboard mechanical systems to detect and prevent potential system failures. The monitoring system has two operating modes, online and offline. Both near real-time on-line vibratory disturbance detection and off-line detection and trend analysis are discussed in this paper.

  20. Regulating recognition decisions through incremental reinforcement learning.

    PubMed

    Han, Sanghoon; Dobbins, Ian G

    2009-06-01

    Does incremental reinforcement learning influence recognition memory judgments? We examined this question by subtly altering the relative validity or availability of feedback in order to differentially reinforce old or new recognition judgments. Experiment 1 probabilistically and incorrectly indicated that either misses or false alarms were correct in the context of feedback that was otherwise accurate. Experiment 2 selectively withheld feedback for either misses or false alarms in the context of feedback that was otherwise present. Both manipulations caused prominent shifts of recognition memory decision criteria that remained for considerable periods even after feedback had been altogether removed. Overall, these data demonstrate that incremental reinforcement-learning mechanisms influence the degree of caution subjects exercise when evaluating explicit memories.
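    A toy simulation (not the authors' model) can show how incremental, reward-driven value learning shifts a signal-detection criterion when misses are sometimes falsely labelled correct; all parameter values here are illustrative assumptions.

```python
import random

def mean_criterion(p_false_reward_miss, trials=20000, seed=7):
    """Delta-rule learning of the reward value of 'old' vs 'new'
    responses; the criterion tracks the value difference.

    Falsely rewarding misses inflates the learned value of 'new'
    responses, pushing the criterion upward (more conservative).
    """
    rng = random.Random(seed)
    q_old = q_new = 0.5          # learned reward value per response
    alpha, k = 0.05, 1.0         # learning rate, criterion gain
    c_sum, c_n = 0.0, 0
    for t in range(trials):
        c = k * (q_new - q_old)  # decision criterion
        is_old = rng.random() < 0.5
        ev = rng.gauss(1.0 if is_old else 0.0, 1.0)
        resp_old = ev > c
        correct = (resp_old == is_old)
        if is_old and not resp_old and rng.random() < p_false_reward_miss:
            correct = True       # miss incorrectly labelled "correct"
        r = 1.0 if correct else 0.0
        if resp_old:
            q_old += alpha * (r - q_old)
        else:
            q_new += alpha * (r - q_new)
        if t >= trials // 2:     # average criterion over second half
            c_sum, c_n = c_sum + c, c_n + 1
    return c_sum / c_n
```

    Running the simulation with and without the feedback manipulation shows the criterion settling at a more conservative value when misses are probabilistically rewarded, analogous to the shifts reported above.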

  1. International Space Station Increment Operations Services

    NASA Astrophysics Data System (ADS)

    Michaelis, Horst; Sielaff, Christian

    2002-01-01

    The Industrial Operator (IO) has defined End-to-End services to perform efficiently all required operations tasks for the Manned Space Program (MSP), as agreed during the Ministerial Council in Edinburgh in November 2001. Those services are the result of a detailed task analysis based on the operations processes derived from the Space Station Program Implementation Plans (SPIP) and defined in the Operations Processes Documents (OPD), and they relate to ISS Increment Operations and ATV Mission Operations. Each of these End-to-End services is typically characterised by the following properties: It has a clearly defined starting point, where all requirements on the end product are fixed and the associated performance metrics of the customer are well defined. It has a clearly defined ending point, when the product or service is delivered to the customer and accepted by him according to the performance metrics defined at the starting point. The implementation of the process might be restricted by external boundary conditions and constraints mutually agreed with the customer; as far as those are respected, the IO has the free choice to select methods and means of implementation. The ISS Increment Operations Service (IOS) activities required for the MSP Exploitation program cover the complete increment-specific cycle, starting with support to strategic planning and ending with the post-increment evaluation. These activities are divided into sub-services including the following tasks: ISS Planning Support, covering the support to strategic and tactical planning up to the generation; Development & Payload Integration Support; ISS Increment Preparation; and ISS Increment Execution. These processes are tied together by the Increment Integration Management, which provides the planning and scheduling of all activities as well as the technical management of the overall process.
The paper describes the entire End-to-End ISS Increment Operations service and its implementation to support the Columbus Flight 1E related increment and subsequent ISS increments. Special attention is paid to the implications of long-term operations for hardware, software and operations personnel.

  2. Projection methods for incompressible flow problems with WENO finite difference schemes

    NASA Astrophysics Data System (ADS)

    de Frutos, Javier; John, Volker; Novo, Julia

    2016-03-01

    Weighted essentially non-oscillatory (WENO) finite difference schemes have been recommended in a competitive study of discretizations for scalar evolutionary convection-diffusion equations [20]. This paper explores the applicability of these schemes for the simulation of incompressible flows. To this end, WENO schemes are used in several non-incremental and incremental projection methods for the incompressible Navier-Stokes equations. Velocity and pressure are discretized on the same grid. A pressure stabilization Petrov-Galerkin (PSPG) type of stabilization is introduced in the incremental schemes to account for the violation of the discrete inf-sup condition. Algorithmic aspects of the proposed schemes are discussed. The schemes are studied on several examples with different features. It is shown that the WENO finite difference idea can be transferred to the simulation of incompressible flows. Some shortcomings of the methods, which are due to the splitting in projection schemes, also become obvious.
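    As an illustration of the reconstruction underlying such schemes, the following is a minimal fifth-order WENO (WENO5) point reconstruction from cell averages in the classic Jiang-Shu form; it is a generic textbook sketch, not the particular discretization used in the paper.

```python
def weno5_right(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """WENO5 reconstruction of the value at the right cell face
    x_{i+1/2} from five cell averages v_{i-2}..v_{i+2} (Jiang-Shu form)."""
    # three third-order candidate stencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # smoothness indicators penalize oscillatory stencils
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built from the ideal weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps+b0)**2, 0.6/(eps+b1)**2, 0.3/(eps+b2)**2
    s = a0 + a1 + a2
    return (a0*p0 + a1*p1 + a2*p2) / s

# For a linear profile the reconstruction is exact: cell averages
# 0, 1, 2, 3, 4 give exactly 2.5 at the face between cells 2 and 3.
face = weno5_right(0.0, 1.0, 2.0, 3.0, 4.0)
```

    In a projection method this reconstruction would be applied componentwise to the convective fluxes; the pressure solve and the PSPG stabilization mentioned above are separate ingredients.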

  3. Hurst Estimation of Scale Invariant Processes with Stationary Increments and Piecewise Linear Drift

    NASA Astrophysics Data System (ADS)

    Modarresi, N.; Rezakhah, S.

    The characteristic feature of discrete scale invariant (DSI) processes is the invariance of their finite-dimensional distributions under dilation by a certain scaling factor. A DSI process with piecewise linear drift and stationary increments inside prescribed scale intervals is introduced and studied. To identify the structure of the process, we first determine the scale intervals and their linear drifts, and eliminate the drifts. Then, a new method for the estimation of the Hurst parameter of such DSI processes is presented and applied to some period of the Dow Jones indices. This method is based on a fixed number of equally spaced samples inside successive scale intervals. We also present an efficient method for estimating the Hurst parameter of self-similar processes with stationary increments, and compare its performance with the celebrated FA, DFA and DMA methods on simulated fractional Brownian motion (fBm) data.
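    The increment-variance idea behind such estimators can be sketched directly: for a self-similar process with stationary increments, E[(X(t+τ) − X(t))²] ∝ τ^(2H), so H can be read off a log-log regression. The sketch below is a generic estimator applied to an ordinary random walk (H = 0.5), not the authors' DSI-specific method.

```python
import numpy as np

def hurst_increment_variance(x, lags=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate H from the scaling of the mean squared increments:
    E[(x[t+lag] - x[t])^2] ~ lag^(2H)."""
    v = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]
    slope = np.polyfit(np.log(lags), np.log(v), 1)[0]
    return slope / 2.0

# A plain random walk is self-similar with H = 0.5.
rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(50000))
H = hurst_increment_variance(walk)
```

    For a DSI process with piecewise linear drift, the drift would first have to be estimated and removed inside each scale interval, as described above, before such a regression is meaningful.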

  4. Search for Minimal and Semi-Minimal Rule Sets in Incremental Learning of Context-Free and Definite Clause Grammars

    NASA Astrophysics Data System (ADS)

    Imada, Keita; Nakamura, Katsuhiko

    This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, realized by a rule generation mechanism called "bridging", based on bottom-up parsing of positive samples combined with a search for rule sets. The sizes of the rule sets and the computation time depend on the search strategy. In addition to global search, which synthesizes minimal rule sets, and serial search, a method for synthesizing semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper presents several experimental results on learning CFGs and DCGs, and analyzes the sizes of the rule sets and the computation time.

  5. Small Diameter Bomb Increment II (SDB II)

    DTIC Science & Technology

    2015-12-01

    Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-439 Small Diameter Bomb Increment II (SDB II) As of FY 2017 President’s Budget Defense...Acquisition Management Information Retrieval (DAMIR) March 23, 2016 16:19:13 UNCLASSIFIED SDB II December 2015 SAR March 23, 2016 16:19:13 UNCLASSIFIED...Document OSD - Office of the Secretary of Defense O&S - Operating and Support PAUC - Program Acquisition Unit Cost SDB II December 2015 SAR March 23

  6. Changes of brain response induced by simulated weightlessness

    NASA Astrophysics Data System (ADS)

    Wei, Jinhe; Yan, Gongdong; Guan, Zhiqiang

    The changes in brain response during 15° head-down tilt (HDT) were studied in comparison with 45° head-up tilt (HUT). The brain responses evaluated included the change in EEG power spectra at rest and during mental arithmetic, and the event-related potentials (ERPs) of somatosensory, selective attention and mental arithmetic activities. The prominent feature of the brain response change during HDT was that brain function was inhibited to some extent: the significant increment of "40 Hz" activity during HUT arithmetic almost disappeared during HDT arithmetic, and the positive-potential effect induced by HDT was present in all kinds of ERPs measured, while the slow negative wave reflecting mental arithmetic and memory processes was elongated. These data suggest that brain function is profoundly affected by simulated weightlessness and, therefore, that brain function changes during space flight should be studied systematically.

  7. The use of fatigue tests in the manufacture of automotive steel wheels.

    NASA Astrophysics Data System (ADS)

    Drozyner, P.; Rychlik, A.

    2016-08-01

    Production for the automotive industry must be particularly sensitive to the safety and reliability of manufactured components. One such component is the rim, whose durability significantly affects transport safety. Customer complaints regarding this component are particularly painful for the manufacturer because they are almost always associated with an accident or near-accident. The authors propose an original, comprehensive method of quality control at selected stages of rim production: supply of materials, production, and pre-shipment inspection. Tests by the proposed method are carried out on an originally designed inertial fatigue machine. The machine allows bending fatigue tests in the frequency range of 0 to 50 Hz with controlled increments of vibration amplitude. The method has been positively verified in a rim factory in Poland; its implementation resulted in an almost complete elimination of complaints caused by manufacturing and material errors.

  8. IGOS improvements for Seasat

    NASA Technical Reports Server (NTRS)

    Warmke, J. M.

    1979-01-01

    Modifications to Battelle's Interactive Graphics Orbit Selection (IGOS) computer program to assist in the planning and evaluation of the Seasat-A Scatterometer System (SASS) flight program were studied. To meet the planning needs of the LaRC Seasat-A Scatterometer team, the following features/modifications were implemented in IGOS: (1) display and specification of time increments in orbital passes represented by the cross-hatching of ground swaths; (2) addition of pass number annotations on the horizontal axis of the STPLNG and STPTOD plots; (3) modification of the sensor model to include more than two swaths associated with a single sensor to approximate the SASS cell pattern; (4) inclusion of down range and cross-track swath geometry to display the characteristic skewed SASS pattern; (5) addition of a swath schedule to allow the display of the SASS mode changes and to calibrate gaps; and (6) development of a set of commands to generate the detailed swath data from sensor characteristics and orbit/earth motion.

  9. Incremental validity of the episode size criterion in binge-eating definitions: An examination in women with purging syndromes.

    PubMed

    Forney, K Jean; Bodell, Lindsay P; Haedt-Matt, Alissa A; Keel, Pamela K

    2016-07-01

    Of the two primary features of binge eating, loss of control (LOC) eating is well validated while the role of eating episode size is less clear. Given the ICD-11 proposal to eliminate episode size from the binge-eating definition, the present study examined the incremental validity of the size criterion, controlling for LOC. Interview and questionnaire data come from four studies of 243 women with bulimia nervosa (n = 141) or purging disorder (n = 102). Hierarchical linear regression tested if the largest reported episode size, coded in kilocalories, explained additional variance in eating disorder features, psychopathology, personality traits, and impairment, holding constant LOC eating frequency, age, and body mass index (BMI). Analyses also tested if episode size moderated the association between LOC eating and these variables. Holding LOC constant, episode size explained significant variance in disinhibition, trait anxiety, and eating disorder-related impairment. Episode size moderated the association of LOC eating with purging frequency and depressive symptoms, such that in the presence of larger eating episodes, LOC eating was more closely associated with these features. Neither episode size nor its interaction with LOC explained additional variance in BMI, hunger, restraint, shape concerns, state anxiety, negative urgency, or global functioning. Taken together, results support the incremental validity of the size criterion, in addition to and in combination with LOC eating, for defining binge-eating episodes in purging syndromes. Future research should examine the predictive validity of episode size in both purging and nonpurging eating disorders (e.g., binge eating disorder) to inform nosological schemes. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:651-662).

  10. SAIL: Summation-bAsed Incremental Learning for Information-Theoretic Text Clustering.

    PubMed

    Cao, Jie; Wu, Zhiang; Wu, Junjie; Xiong, Hui

    2013-04-01

    Information-theoretic clustering aims to exploit information-theoretic measures as the clustering criteria. A common practice on this topic is the so-called Info-Kmeans, which performs K-means clustering with KL-divergence as the proximity function. While expert efforts on Info-Kmeans have shown promising results, a remaining challenge is to deal with high-dimensional sparse data such as text corpora. Indeed, it is possible that the centroids contain many zero-value features for high-dimensional text vectors, which leads to infinite KL-divergence values and creates a dilemma in assigning objects to centroids during the iteration process of Info-Kmeans. To meet this challenge, in this paper, we propose a Summation-bAsed Incremental Learning (SAIL) algorithm for Info-Kmeans clustering. Specifically, by using an equivalent objective function, SAIL replaces the computation of KL-divergence by the incremental computation of Shannon entropy. This can avoid the zero-feature dilemma caused by the use of KL-divergence. To improve the clustering quality, we further introduce the variable neighborhood search scheme and propose the V-SAIL algorithm, which is then accelerated by a multithreaded scheme in PV-SAIL. Our experimental results on various real-world text collections have shown that, with SAIL as a booster, the clustering performance of Info-Kmeans can be significantly improved. Also, V-SAIL and PV-SAIL indeed help improve the clustering quality at a lower cost of computation.
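    The key identity behind the summation-based trick can be verified numerically: for L1-normalized vectors in a cluster with centroid m (their mean), Σ_x KL(x‖m) = |C|·H(m) − Σ_x H(x), so the objective can be evaluated from Shannon entropies alone, where the convention 0·log 0 = 0 avoids infinite values. This sketch paraphrases the idea and is not the SAIL implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy with the convention 0 * log 0 = 0."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
X = rng.random((6, 5))
X /= X.sum(axis=1, keepdims=True)   # L1-normalized "documents"
m = X.mean(axis=0)                  # cluster centroid

# Direct objective: sum of KL(x || m) over cluster members; this
# blows up whenever a centroid entry is zero where x is not.
kl_sum = sum(float((x[x > 0] * np.log(x[x > 0] / m[x > 0])).sum())
             for x in X)

# Entropy form: |C| * H(m) - sum of H(x). No division by centroid
# entries, hence no infinities for zero-valued features.
ent_form = len(X) * entropy(m) - sum(entropy(x) for x in X)
```

    Since H(x) is fixed per object, minimizing the objective reduces to minimizing Σ_j |C_j|·H(m_j) over cluster assignments, which is where the incremental entropy computation comes in.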

  11. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    NASA Astrophysics Data System (ADS)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    With the current popularity of virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of matching is one of the main factors restricting the wider use of this method. To make the whole matching process more efficient, we propose a preprocessing method applied before matching: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform (SIFT) features in the images and combine this with the pHash method to obtain a relatedness value, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence for adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points and only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
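    The ordering step can be sketched with Prim's algorithm over a toy relatedness matrix (edge weight = 1 − relatedness, so the spanning tree prefers the most related image pairs). The scores below are invented for illustration; the paper derives them from k-d forest matching and pHash.

```python
# Hypothetical pairwise relatedness scores for five images (symmetric).
rel = [
    [1.0, 0.9, 0.2, 0.1, 0.3],
    [0.9, 1.0, 0.8, 0.2, 0.1],
    [0.2, 0.8, 1.0, 0.7, 0.2],
    [0.1, 0.2, 0.7, 1.0, 0.6],
    [0.3, 0.1, 0.2, 0.6, 1.0],
]

def mst_add_order(rel, start=0):
    """Prim's algorithm on weights (1 - relatedness): returns the
    order in which images would be added to the reconstruction."""
    n = len(rel)
    in_tree, order = {start}, [start]
    while len(in_tree) < n:
        # pick the outside image most related to any image in the tree
        j = min((j for j in range(n) if j not in in_tree),
                key=lambda j: min(1.0 - rel[i][j] for i in in_tree))
        in_tree.add(j)
        order.append(j)
    return order

order = mst_add_order(rel)   # chains through the most related pairs
```

    With this matrix the images are added along the chain 0-1-2-3-4, since each consecutive pair has the highest relatedness available; thinning the tree would then prune matchings along low-weight branches.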

  12. How Can Evolution Learn?

    PubMed

    Watson, Richard A; Szathmáry, Eörs

    2016-02-01

    The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the 'uninformed' process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles - the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Metric integration architecture for product development

    NASA Astrophysics Data System (ADS)

    Sieger, David B.

    1997-06-01

    Present-day product development endeavors utilize the concurrent engineering philosophy as a logical means for incorporating a variety of viewpoints into the design of products. Since this approach provides no explicit procedural provisions, it is necessary to establish at least a mental coupling with a known design process model. The central feature of all such models is the management and transformation of information. While these models assist in structuring the design process, characterizing the basic flow of operations involved, they provide no guidance facilities. The significance of this feature, and the role it plays in the time required to develop products, is increasing due to the inherent process dynamics, system/component complexities, and competitive forces. The methodology presented in this paper involves the use of a hierarchical system structure, discrete event system specification (DEVS), and multidimensional state-variable-based metrics. This approach is unique in its capability to quantify designers' actions throughout product development, provide recommendations about subsequent activity selection, and coordinate distributed activities of designers and/or design teams across all design stages. Conceptual design tool implementation results are used to demonstrate the utility of this technique in improving the incremental decision-making process.

  14. ARIES: Acquisition of Requirements and Incremental Evolution of Specifications

    NASA Technical Reports Server (NTRS)

    Roberts, Nancy A.

    1993-01-01

    This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts for developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES provides guidance to the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge. This leads to much less duplication of effort. ARIES combines all of its features in a single environment that makes the process of capturing a formal specification quicker and easier.

  15. Constituency and origins of cyclic growth layers in pelecypod shells, part 1

    NASA Technical Reports Server (NTRS)

    Berry, W. B. N.

    1972-01-01

    Growth layers occurring in shells of 98 species of pelecypods were examined microscopically in thin section and as natural and etched surfaces. Study began with shells of eleven species known from life history investigations to have annual cycles of growth. Internal microstructural features of the annual layers in these shells provided criteria for recognition of similar, apparently annual shell increments in eighty-six of eighty-seven other species. All of the specimens feature growth laminae, commonly on the order of 50 microns in thickness. The specimens from shallow marine environments show either a clustering of growth laminae related to the formation of concentric ridges or minor growth bands on the external shell surface. Based on observations of the number of growth laminae and clusters per annual-growth layer, it was hypothesised that the subannual increments may be related to daily and fortnightly (and in some cases monthly) cycles in the environment. Possible applications of the paleogrowth method in the fields of paleoecology and paleoclimatology are discussed.

  16. Effects of frequency and duration on psychometric functions for detection of increments and decrements in sinusoids in noise.

    PubMed

    Moore, B C; Peters, R W; Glasberg, B R

    1999-12-01

    Psychometric functions for detecting increments or decrements in level of sinusoidal pedestals were measured for increment and decrement durations of 5, 10, 20, 50, 100, and 200 ms and for frequencies of 250, 1000, and 4000 Hz. The sinusoids were presented in background noise intended to mask spectral splatter. A three-interval, three-alternative procedure was used. The results indicated that, for increments, the detectability index d' was approximately proportional to delta I/I. For decrements, d' was approximately proportional to delta L. The slopes of the psychometric functions increased (indicating better performance) with increasing frequency for both increments and decrements. For increments, the slopes increased with increasing increment duration up to 200 ms at 250 and 1000 Hz, but at 4000 Hz they increased only up to 50 ms. For decrements, the slopes increased for durations up to 50 ms, and then remained roughly constant, for all frequencies. For a center frequency of 250 Hz, the slopes of the psychometric functions for increment detection increased with duration more rapidly than predicted by a "multiple-looks" hypothesis, i.e., more rapidly than the square root of duration, for durations up to 50 ms. For center frequencies of 1000 and 4000 Hz, the slopes increased less rapidly than predicted by a multiple-looks hypothesis, for durations greater than about 20 ms. The slopes of the psychometric functions for decrement detection increased with decrement duration at a rate slightly greater than the square root of duration, for durations up to 50 ms, at all three frequencies. For greater durations, the increase in slope was less than proportional to the square root of duration. The results were analyzed using a model incorporating a simulated auditory filter, a compressive nonlinearity, a sliding temporal integrator, and a decision device based on a template mechanism. 
The model took into account the effects of both the external noise and an assumed internal noise. The model was able to account for the major features of the data for both increment and decrement detection.
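    The two scaling findings in this abstract can be written down directly: d' ≈ k·ΔI/I for increment detection, and under a multiple-looks account combining n independent observations grows d' only by √n. A short sketch, where k is an arbitrary illustrative sensitivity constant rather than a value fitted to these data:

```python
import math

def dprime_increment(delta_i, i, k=5.0):
    """Weber-like relation reported for increments: d' ~ k * dI/I
    (k is an arbitrary illustrative constant)."""
    return k * delta_i / i

def multiple_looks(d_single, n_looks):
    """Optimal combination of n independent 'looks' at the same
    change multiplies d' by sqrt(n)."""
    return math.sqrt(n_looks) * d_single

d1 = dprime_increment(0.1, 1.0)   # a 10% increment gives d' = 0.5 here
d4 = multiple_looks(d1, 4)        # four independent looks: d' = 1.0
```

    Growth of the psychometric-function slope faster than √duration, as observed at 250 Hz for durations up to 50 ms, is exactly what this multiple-looks prediction cannot accommodate.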

  17. Age effects on discrimination of timing in auditory sequences

    NASA Astrophysics Data System (ADS)

    Fitzgibbons, Peter J.; Gordon-Salant, Sandra

    2004-08-01

    The experiments examined age-related changes in temporal sensitivity to increments in the interonset intervals (IOI) of components in tonal sequences. Discrimination was examined using reference sequences consisting of five 50-ms tones separated by silent intervals; tone frequencies were either fixed at 4 kHz or varied within a 2-4-kHz range to produce spectrally complex patterns. The tonal IOIs within the reference sequences were either equal (200 or 600 ms) or varied individually with an average value of 200 or 600 ms to produce temporally complex patterns. The difference limen (DL) for increments of IOI was measured. Comparison sequences featured either equal increments in all tonal IOIs or increments in a single target IOI, with the sequential location of the target changing randomly across trials. Four groups of younger and older adults with and without sensorineural hearing loss participated. Results indicated that DLs for uniform changes of sequence rate were smaller than DLs for single target intervals, with the largest DLs observed for single targets embedded within temporally complex sequences. Older listeners performed more poorly than younger listeners in all conditions, but the largest age-related differences were observed for temporally complex stimulus conditions. No systematic effects of hearing loss were observed.

  18. Joint Precision Approach and Landing System Increment 1A (JPALS Inc 1A)

    DTIC Science & Technology

    2015-12-01

    Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-238 Joint Precision Approach and Landing System Increment 1A (JPALS Inc 1A) As of FY 2017...President’s Budget Defense Acquisition Management Information Retrieval (DAMIR) March 10, 2016 11:30:56 UNCLASSIFIED JPALS Inc 1A December 2015 SAR ...Cost JPALS Inc 1A December 2015 SAR March 10, 2016 11:30:56 UNCLASSIFIED 3 PB - President’s Budget PE - Program Element PEO - Program Executive Officer

  19. Analysis of Marine Corps Renewable Energy Planning to Meet Installation Energy Security Requirements

    DTIC Science & Technology

    2013-12-03

    experimentation eventually gives way to the era of ferment . A few new technologies break through in the industry and are applied to a growing number of niche...experimentation and ferment eventually give way to an era of incremental change, where the industry down-selects to the most successful and efficient...clearly in the later part of the era of incremental change. Most renewables, however, are only just now moving into the second phase, the era of ferment

  20. Dynamic and scalable audio classification by collective network of binary classifiers framework: an evolutionary approach.

    PubMed

    Kiranyaz, Serkan; Mäkinen, Toni; Gabbouj, Moncef

    2012-10-01

    In this paper, we propose a novel framework based on a collective network of evolutionary binary classifiers (CNBC) to address the problems of feature and class scalability. The main goal of the proposed framework is to achieve a high classification performance over dynamic audio and video repositories. The proposed framework adopts a "divide and conquer" approach in which an individual network of binary classifiers (NBC) is allocated to discriminate each audio class. An evolutionary search is applied to find the best binary classifier in each NBC with respect to a given criterion. Through the incremental evolution sessions, the CNBC framework can dynamically adapt to each new incoming class or feature set without resorting to a full-scale re-training or re-configuration. The CNBC framework is therefore particularly suited to dynamically varying databases, to which conventional static classifiers cannot adapt. In short, it is a novel topology and an unprecedented approach to dynamic, content/data-adaptive and scalable audio classification. A large set of audio features can be effectively used in the framework, where the CNBCs make appropriate selections and combinations so as to achieve the highest discrimination among individual audio classes. Experiments demonstrate a high classification accuracy (above 90%) and the efficiency of the proposed framework over large and dynamic audio databases. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Incremental Reactivity Effects on Secondary Organic Aerosol Formation in Urban Atmospheres with and without Biogenic Influence

    NASA Astrophysics Data System (ADS)

    Kacarab, Mary; Li, Lijie; Carter, William P. L.; Cocker, David R., III

    2016-04-01

    Two different surrogate mixtures of anthropogenic and biogenic volatile organic compounds (VOCs) were developed to study secondary organic aerosol (SOA) formation at atmospheric reactivities similar to those of urban regions with varying levels of biogenic influence. Environmental chamber simulations were designed to enable the study of incremental aerosol formation from select anthropogenic (m-xylene, 1,2,4-trimethylbenzene, and 1-methylnaphthalene) and biogenic (α-pinene) precursors under the chemical reactivity set by the two different surrogate mixtures. The surrogate reactive organic gas (ROG) mixtures were based on those used to develop the maximum incremental reactivity (MIR) factors for evaluating O3-forming potential. Multiple incremental aerosol formation experiments were performed in the University of California Riverside (UCR) College of Engineering Center for Environmental Research and Technology (CE-CERT) dual 90 m³ environmental chambers. Incremental aerosol yields were determined for each of the VOCs studied and compared to yields found from single-precursor studies. Aerosol physical properties of density, volatility, and hygroscopicity were monitored throughout the experiments. Bulk elemental chemical composition from high-resolution time-of-flight aerosol mass spectrometer (HR-ToF-AMS) data will also be presented. Incremental yields and SOA chemical and physical characteristics will be compared with data from previous single-VOC studies conducted for these aerosol precursors following traditional VOC/NOx chamber experiments. Evaluation of the incremental effects of VOCs on SOA formation and properties is paramount in determining how best to extrapolate environmental chamber observations to the ambient atmosphere, and provides useful insights into current SOA formation models. Further, the comparison of incremental SOA from VOCs in varying surrogate urban atmospheres (with and without strong biogenic influence) allows a unique perspective on the impacts different compounds have on aerosol formation in different urban regions.
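    The incremental-yield bookkeeping implied by this record can be sketched as follows: the incremental SOA yield of a test VOC is the extra aerosol mass formed when the VOC is added to the reacting surrogate mixture, divided by the mass of that VOC reacted. The numbers below are illustrative, not taken from the experiments.

```python
# Minimal sketch of an incremental SOA yield calculation (illustrative values).
def incremental_soa_yield(soa_with_voc, soa_base, voc_reacted):
    """All quantities in micrograms per cubic meter.

    soa_with_voc: SOA mass formed from surrogate mixture + test VOC
    soa_base:     SOA mass formed from the surrogate mixture alone
    voc_reacted:  mass of the test VOC consumed during the experiment
    """
    return (soa_with_voc - soa_base) / voc_reacted

y = incremental_soa_yield(soa_with_voc=42.0, soa_base=30.0, voc_reacted=100.0)
print(y)  # 0.12
```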

  2. Design Principles for a Comprehensive Library System.

    ERIC Educational Resources Information Center

    Uluakar, Tamer; And Others

    1981-01-01

    Describes an online design featuring circulation control, catalog access, and serial holdings that uses an incremental approach to system development. Utilizing a dedicated computer, this second of three releases pays particular attention to present and predicted computing capabilities as well as trends in library automation. (Author/RAA)

  3. Laser goniometer

    DOEpatents

    Fairer, George M.; Boernge, James M.; Harris, David W.; Campbell, DeWayne A.; Tuttle, Gene E.; McKeown, Mark H.; Beason, Steven C.

    1993-01-01

    The laser goniometer is an apparatus which permits an operator to sight along a geologic feature and orient a collimated laser beam to match the attitude of the feature directly. The horizontal orientation (strike) and the angle from horizontal (dip) are detected by rotary incremental encoders attached to the laser goniometer, which provide a digital readout of the azimuth and tilt of the collimated laser beam. A microprocessor then translates the square-wave signal encoder outputs into an ASCII signal for use by data recording equipment.
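    The square-wave outputs of a rotary incremental encoder are typically decoded as quadrature signals. The following is a hypothetical sketch (not the patent's implementation) of how two channels, A and B, are turned into a signed count that a microprocessor could scale to an azimuth or tilt readout; the counts-per-revolution value is illustrative.

```python
# Hypothetical quadrature decoding of a rotary incremental encoder.
# States are (A, B) bit pairs in Gray-code order; each valid transition
# advances the count by +1 (forward) or -1 (reverse).
TRANSITIONS = {
    ((0, 0), (0, 1)): +1, ((0, 1), (1, 1)): +1,
    ((1, 1), (1, 0)): +1, ((1, 0), (0, 0)): +1,
    ((0, 0), (1, 0)): -1, ((1, 0), (1, 1)): -1,
    ((1, 1), (0, 1)): -1, ((0, 1), (0, 0)): -1,
}

def decode(states, counts_per_rev=3600):
    """Convert a sequence of sampled (A, B) states to an angle in degrees."""
    count = 0
    for prev, new in zip(states, states[1:]):
        count += TRANSITIONS.get((prev, new), 0)  # invalid transitions ignored
    return count * 360.0 / counts_per_rev

# One full forward Gray-code cycle plus one step = 5 increments = 0.5 degrees.
seq = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1)]
print(decode(seq))  # 0.5
```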

  4. Residual stresses investigations in composite samples by speckle interferometry and specimen repositioning

    NASA Astrophysics Data System (ADS)

    Baldi, Alfonso; Jacquot, Pierre

    2003-05-01

    Graphite-epoxy laminates are subjected to the "incremental hole-drilling" technique in order to investigate the residual stresses acting within each layer of the composite samples. In-plane speckle interferometry is used to measure the displacement field created by each drilling increment around the hole. Our approach features two particularities: (1) we rely on the precise repositioning of the samples in the optical set-up after each new boring step, performed by means of a high-precision, numerically controlled milling machine in the workshop; (2) for each increment, we acquire three displacement fields, along the length, the width of the samples, and at 45°, using a single symmetrical double-beam illumination and a rotary stage holding the specimens. The experimental protocol is described in detail and the experimental results are presented, including a comparison with strain gages. Speckle interferometry appears to be a suitable method for responding to the increasing demand for residual stress determination in composite samples.

  5. Predicting Relapse in Patients With Medulloblastoma by Integrating Evidence From Clinical and Genomic Features

    PubMed Central

    Tamayo, Pablo; Cho, Yoon-Jae; Tsherniak, Aviad; Greulich, Heidi; Ambrogio, Lauren; Schouten-van Meeteren, Netteke; Zhou, Tianni; Buxton, Allen; Kool, Marcel; Meyerson, Matthew; Pomeroy, Scott L.; Mesirov, Jill P.

    2011-01-01

    Purpose Despite significant progress in the molecular understanding of medulloblastoma, stratification of risk in patients remains a challenge. Focus has shifted from clinical parameters to molecular markers, such as expression of specific genes and selected genomic abnormalities, to improve accuracy of treatment outcome prediction. Here, we show how integration of high-level clinical and genomic features or risk factors, including disease subtype, can yield more comprehensive, accurate, and biologically interpretable prediction models for relapse versus no-relapse classification. We also introduce a novel Bayesian nomogram indicating the amount of evidence that each feature contributes on a patient-by-patient basis. Patients and Methods A Bayesian cumulative log-odds model of outcome was developed from a training cohort of 96 children treated for medulloblastoma, starting with the evidence provided by clinical features of metastasis and histology (model A) and incrementally adding the evidence from gene-expression–derived features representing disease subtype–independent (model B) and disease subtype–dependent (model C) pathways, and finally high-level copy-number genomic abnormalities (model D). The models were validated on an independent test cohort (n = 78). Results On an independent multi-institutional test data set, models A to D attain an area under receiver operating characteristic (au-ROC) curve of 0.73 (95% CI, 0.60 to 0.84), 0.75 (95% CI, 0.64 to 0.86), 0.80 (95% CI, 0.70 to 0.90), and 0.78 (95% CI, 0.68 to 0.88), respectively, for predicting relapse versus no relapse. Conclusion The proposed models C and D outperform the current clinical classification schema (au-ROC, 0.68), our previously published eight-gene outcome signature (au-ROC, 0.71), and several new schemas recently proposed in the literature for medulloblastoma risk stratification. PMID:21357789
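    The incremental evidence structure of models A through D can be sketched as a Bayesian cumulative log-odds calculation: each feature contributes a log likelihood ratio that is added to the prior log-odds, and the running total converts back to a relapse probability. The per-feature contributions are what a nomogram of this kind displays. The feature weights and prior below are illustrative, not taken from the paper.

```python
# Sketch of a cumulative log-odds model with per-feature evidence
# contributions (illustrative weights, not the published model).
import math

def cumulative_log_odds(prior_prob, evidence):
    """Accumulate log likelihood ratios on top of a prior probability."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    contributions = {}
    for feature, llr in evidence.items():
        log_odds += llr
        contributions[feature] = llr  # per-feature evidence, as in a nomogram
    prob = 1 / (1 + math.exp(-log_odds))
    return prob, contributions

# Hypothetical per-feature log likelihood ratios for one patient.
evidence = {"metastasis": 1.2, "histology": 0.3, "subtype_pathway": -0.5}
p, contrib = cumulative_log_odds(prior_prob=0.3, evidence=evidence)
print(round(p, 3))  # 0.538
```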

  6. iNuc-PhysChem: A Sequence-Based Predictor for Identifying Nucleosomes via Physicochemical Properties

    PubMed Central

    Feng, Peng-Mian; Ding, Chen; Zuo, Yong-Chun; Chou, Kuo-Chen

    2012-01-01

    Nucleosome positioning has important roles in key cellular processes. Although intensive efforts have been made in this area, the rules defining nucleosome positioning are still elusive and debated. In this study, we carried out a systematic comparison among the profiles of twelve DNA physicochemical features between the nucleosomal and linker sequences in the Saccharomyces cerevisiae genome. We found that nucleosomal sequences have some position-specific physicochemical features, which can be used for in-depth studies of nucleosomes. Meanwhile, a new predictor, called iNuc-PhysChem, was developed for identification of nucleosomal sequences by incorporating these physicochemical properties into a 1788-D (dimensional) feature vector, which was further reduced to an 884-D vector via the IFS (incremental feature selection) procedure to optimize the feature set. It was observed in a cross-validation test on a benchmark dataset that the overall success rate achieved by iNuc-PhysChem was over 96% in identifying nucleosomal or linker sequences. As a web server, iNuc-PhysChem is freely accessible to the public at http://lin.uestc.edu.cn/server/iNuc-PhysChem. For the convenience of the vast majority of experimental scientists, a step-by-step guide is provided on how to use the web server to get the desired results without the need to follow the complicated mathematics that was presented just for the integrity of developing the predictor. Meanwhile, for those who prefer to run predictions on their own computers, the predictor's code can be easily downloaded from the web server. It is anticipated that iNuc-PhysChem may become a useful high-throughput tool for both basic research and drug design. PMID:23144709
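    The IFS procedure used here follows a common pattern: features are first ranked, then prefixes of the ranking are evaluated one feature at a time, and the prefix that maximizes a validation score is kept. A minimal sketch, with a stand-in scoring function in place of a real cross-validation run:

```python
# Minimal sketch of an incremental feature selection (IFS) loop.
# score_fn would normally be a cross-validated classifier accuracy;
# here it is a hypothetical stand-in for illustration.

def incremental_feature_selection(ranked_features, score_fn):
    """Return the prefix of ranked_features with the best score."""
    best_score, best_k = float("-inf"), 0
    for k in range(1, len(ranked_features) + 1):
        s = score_fn(ranked_features[:k])
        if s > best_score:
            best_score, best_k = s, k
    return ranked_features[:best_k], best_score

# Toy score: peaks when exactly the three informative features are included,
# with a small penalty for each noise feature added.
def toy_score(subset):
    informative = {"f1", "f2", "f3"}
    hits = len(informative & set(subset))
    noise_penalty = 0.05 * (len(subset) - hits)
    return hits - noise_penalty

features = ["f1", "f2", "f3", "f4", "f5"]
best_subset, best = incremental_feature_selection(features, toy_score)
print(best_subset)  # ['f1', 'f2', 'f3']
```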

  7. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper will summarize key features of the method and provide a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.
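    The PVI statistic, as commonly defined in the literature this review surveys, is the magnitude of a field increment at lag τ normalized by the rms increment over the interval: PVI(t) = |b(t + τ) − b(t)| / √⟨|b(t + τ) − b(t)|²⟩. A minimal sketch on a synthetic one-component signal:

```python
# Sketch of the PVI (Partial Variance of Increments) statistic for a
# one-component time series; sharp gradients show up as large PVI values.
import math

def pvi_series(b, tau):
    increments = [abs(b[i + tau] - b[i]) for i in range(len(b) - tau)]
    rms = math.sqrt(sum(d * d for d in increments) / len(increments))
    return [d / rms for d in increments]

# A flat signal with one sharp jump: the jump dominates the PVI series.
signal = [0.0] * 10 + [5.0] * 10
pvi = pvi_series(signal, tau=1)
# Only one of the 19 increments is nonzero, so its PVI is sqrt(19).
print(max(pvi))
```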

  8. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems

    PubMed Central

    Liao, Yangzhe; Cai, Qing; Ai, Qingsong; Liu, Quan

    2018-01-01

    Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented where several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of a system model is investigated in terms of channel path loss, energy consumption, and the outage probability from the network perspective. Secondly, only when the MI value becomes smaller than the predetermined threshold is data transmission allowed. The communication path selection can be either from the implanted sensor to the on-body relay then forwards to the coordinator or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related subjective functions. The results show that the MI-based incremental relaying technique achieves better performance in comparison to our previous proposed protocol techniques regarding several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design. PMID:29419784

  9. Mutual-Information-Based Incremental Relaying Communications for Wireless Biomedical Implant Systems.

    PubMed

    Liao, Yangzhe; Leeson, Mark S; Cai, Qing; Ai, Qingsong; Liu, Quan

    2018-02-08

    Network lifetime maximization of wireless biomedical implant systems is one of the major research challenges of wireless body area networks (WBANs). In this paper, a mutual information (MI)-based incremental relaying communication protocol is presented where several on-body relay nodes and one coordinator are attached to the clothes of a patient. Firstly, a comprehensive analysis of a system model is investigated in terms of channel path loss, energy consumption, and the outage probability from the network perspective. Secondly, only when the MI value becomes smaller than the predetermined threshold is data transmission allowed. The communication path selection can be either from the implanted sensor to the on-body relay then forwards to the coordinator or from the implanted sensor to the coordinator directly, depending on the communication distance. Moreover, mathematical models of quality of service (QoS) metrics are derived along with the related subjective functions. The results show that the MI-based incremental relaying technique achieves better performance in comparison to our previous proposed protocol techniques regarding several selected performance metrics. The outcome of this paper can be applied to intra-body continuous physiological signal monitoring, artificial biofeedback-oriented WBANs, and telemedicine system design.
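    The decision logic described in these two records can be sketched at a high level: transmission is triggered only when the mutual information between successive measurements drops below a threshold (i.e., the new data are informative), and the path is chosen by communication distance. This is a hypothetical sketch of that logic only; the thresholds and distances are illustrative, not from the paper.

```python
# Hypothetical sketch of MI-gated transmission and relay path selection.

def should_transmit(mi_value, mi_threshold):
    """Transmit only when MI with previous data is below the threshold."""
    return mi_value < mi_threshold

def select_path(distance_to_coordinator_m, direct_range_m):
    """Choose direct transmission when in range; otherwise use the relay."""
    return "direct" if distance_to_coordinator_m <= direct_range_m else "relay"

assert should_transmit(0.2, mi_threshold=0.5)      # low MI: new information
assert not should_transmit(0.8, mi_threshold=0.5)  # high MI: redundant sample
print(select_path(0.3, direct_range_m=0.5))  # direct
print(select_path(0.9, direct_range_m=0.5))  # relay
```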

  10. Directional emittance surface measurement system and process

    NASA Technical Reports Server (NTRS)

    Puram, Chith K. (Inventor); Daryabeigi, Kamran (Inventor); Wright, Robert (Inventor); Alderfer, David W. (Inventor)

    1994-01-01

    Apparatus and process for measuring the variation of directional emittance of surfaces at various temperatures using a radiometric infrared imaging system. A surface test sample is coated onto a copper target plate provided with selective heating within the desired incremental temperature range to be tested and positioned on a precision rotator to present selected inclination angles of the sample relative to the fixed, optically aligned infrared imager. A thermal insulator holder maintains the target plate on the precision rotator. A screen display of the temperature obtained by the infrared imager and of the inclination readings is provided, with computer calculations of directional emittance performed automatically according to equations provided to convert selected incremental target temperatures and inclination angles to relative target directional emittance values. The directional emittance measurements obtained for flat black lacquer and an epoxy resin are in agreement with the predictions of electromagnetic theory and with directional emittance data inferred from directional reflectance measurements made on a spectrophotometer.

  11. The dark side of incremental learning: a model of cumulative semantic interference during lexical access in speech production.

    PubMed

    Oppenheim, Gary M; Dell, Gary S; Schwartz, Myrna F

    2010-02-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have claimed that the findings are only understandable by positing a competitive mechanism for lexical selection. We present a simple model of lexical retrieval in speech production that applies error-driven learning to its lexical activation network. This model naturally produces repetition priming and semantic interference effects. It predicts the major findings from several published experiments, demonstrating that these effects may arise from incremental learning. Furthermore, analysis of the model suggests that competition during lexical selection is not necessary for semantic interference if the learning process is itself competitive. Copyright 2009 Elsevier B.V. All rights reserved.
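    The mechanism described here can be illustrated in a few lines. This is a minimal sketch (not the authors' implementation) of how error-driven learning yields both effects: naming "dog" raises the weight from the shared semantic feature to "dog" and lowers it to the competitor "cat", so "dog" is later easier to retrieve (repetition priming) and "cat" harder (semantic interference).

```python
# Minimal delta-rule sketch of cumulative semantic interference.
lr = 0.5
# weights[word] = connection strength from the shared feature "animal"
weights = {"dog": 0.5, "cat": 0.5}

def name_picture(target):
    # Error-driven update: the named word is pushed toward 1, competitors toward 0.
    for word in weights:
        desired = 1.0 if word == target else 0.0
        weights[word] += lr * (desired - weights[word])

before = dict(weights)
name_picture("dog")
assert weights["dog"] > before["dog"]  # repetition priming for "dog"
assert weights["cat"] < before["cat"]  # semantic interference for "cat"
print(round(weights["dog"], 2), round(weights["cat"], 2))  # 0.75 0.25
```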

  12. Economics and the 1995 National Assessment of United States Oil and Gas Resources

    USGS Publications Warehouse

    Attanasi, E.D.

    1998-01-01

    This report summarizes the economic component of the 1995 National Assessment of Oil and Gas Resources prepared by the U.S. Geological Survey for onshore and State offshore areas of the United States. Province and regional incremental cost functions were prepared for conventional undiscovered oil and gas fields and selected unconventional oil and gas accumulations, allowing areas to be ranked by the incremental costs of finding, developing, and producing these resources. Regional projections of additions to reserves from previously discovered fields to 2015 are also presented.

  13. Intelligent power consumption with two-way shiftable feature and its implementation

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Youwei

    2017-10-01

    This paper proposes an intelligent power consumption system with a two-way shiftable feature and describes its implementation. Based on the power consumption information of loads in standby and working states, a dispatching system decomposes the load regulation demand top-down to smart appliances and makes them respond in an orderly manner as required. The system uses a code-based representation of power consumption information that accounts for standby load, which lays the information foundation for load increments. It also presents a shiftability index, which comprehensively reflects the characteristics of electrical equipment and users and provides a basis for load prioritization.

  14. Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.

    PubMed

    Gijsberts, Arjan; Metta, Giorgio

    2013-05-01

    Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
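    The core idea can be sketched briefly: an RBF kernel is approximated by a fixed, finite random Fourier feature map, after which the GP mean reduces to incremental (ridge) linear regression in that feature space, giving bounded cost per update. The sketch below, with illustrative hyperparameters, accumulates sufficient statistics one sample at a time; for brevity it re-solves the linear system at prediction time, whereas the actual algorithm maintains a rank-one-updated Cholesky factor.

```python
# Sketch of sparse-spectrum GP regression via random Fourier features,
# with incremental (per-sample) updates of the sufficient statistics.
import numpy as np

rng = np.random.default_rng(0)
D, dim, lengthscale, noise = 100, 1, 1.0, 0.1

W = rng.normal(0.0, 1.0 / lengthscale, size=(D, dim))  # spectral frequencies
b = rng.uniform(0.0, 2 * np.pi, size=D)                # random phases

def features(x):
    """Random Fourier feature map approximating an RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

# Sufficient statistics, updated one sample at a time (rank-1 updates).
A = noise ** 2 * np.eye(D)
rhs = np.zeros(D)

def update(x, y):
    global A, rhs
    phi = features(x)
    A = A + np.outer(phi, phi)  # constant cost per update (D is fixed)
    rhs = rhs + phi * y

def predict(x):
    w = np.linalg.solve(A, rhs)  # real algorithm: incremental Cholesky instead
    return features(x) @ w

for x in np.linspace(-3, 3, 200):
    update(np.array([x]), np.sin(x))

print(abs(predict(np.array([0.5])) - np.sin(0.5)) < 0.1)
```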

  15. A new scoring system in Cystic Fibrosis: statistical tools for database analysis - a preliminary report.

    PubMed

    Hafen, G M; Hurst, C; Yearwood, J; Smith, J; Dzalilov, Z; Robinson, P J

    2008-10-05

    Cystic fibrosis is the most common fatal genetic disorder in the Caucasian population. Scoring systems for assessing cystic fibrosis disease severity have been used for almost 50 years without being adapted to the milder phenotype of the disease in the 21st century. The aim of the current project is to develop a new scoring system using a database and various statistical tools. This study protocol reports the development of the statistical tools needed to create such a scoring system. The evaluation is based on the cystic fibrosis database of the cohort at the Royal Children's Hospital in Melbourne. Initially, unsupervised clustering of all data records was performed using a range of clustering algorithms, in particular incremental clustering algorithms. The clusters obtained were characterised using rules from decision trees, and the results were examined by clinicians. To obtain a clearer definition of classes, expert opinion on each individual's clinical severity was sought. After data preparation, including expert opinion of an individual's clinical severity on a 3-point scale (mild, moderate, and severe disease), two multivariate techniques were used throughout the analysis to establish a method with better success in feature selection and model derivation: Canonical Analysis of Principal Coordinates (CAP) and Linear Discriminant Analysis (DA). A 3-step procedure was performed: (1) selection of features, (2) extraction of 5 severity classes from the 3 severity classes defined by expert opinion, and (3) establishment of calibration datasets. 
(1) Feature selection: CAP has a more effective "modelling" focus than DA. (2) Extraction of 5 severity classes: after variables were identified as important in discriminating contiguous CF severity groups on the 3-point scale (mild/moderate and moderate/severe), Discriminant Functions (DF) were used to determine the new groups: mild, intermediate moderate, moderate, intermediate severe, and severe disease. (3) The generated confusion tables showed a misclassification rate of 19.1% for males and 16.5% for females, with the majority of misallocations into adjacent severity classes, particularly for males. Our preliminary data show that using CAP for feature selection and linear DA to derive the actual model in a CF database may be helpful in developing a scoring system. However, there are several limitations; in particular, more data entry points are needed to finalise a score, and the statistical tools still need to be refined and validated by re-running the statistical methods on the larger dataset.

  16. Beyond Correlations: Usefulness of High School GPA and Test Scores in Making College Admissions Decisions

    ERIC Educational Resources Information Center

    Sawyer, Richard

    2013-01-01

    Correlational evidence suggests that high school GPA is better than admission test scores in predicting first-year college GPA, although test scores have incremental predictive validity. The usefulness of a selection variable in making admission decisions depends in part on its predictive validity, but also on institutions' selectivity and…

  17. Line roughness improvements on self-aligned quadruple patterning by wafer stress engineering

    NASA Astrophysics Data System (ADS)

    Liu, Eric; Ko, Akiteru; Biolsi, Peter; Chae, Soo Doo; Hsieh, Chia-Yun; Kagaya, Munehito; Lee, Choongman; Moriya, Tsuyoshi; Tsujikawa, Shimpei; Suzuki, Yusuke; Okubo, Kazuya; Imai, Kiyotaka

    2018-04-01

    In integrated circuit and memory devices, size shrinkage has been the most effective way to reduce production cost and enable the steady increase in the number of transistors per unit area over the past few decades. To reduce die size and feature size, it is necessary to minimize pattern dimensions in advanced-node development. At sub-10 nm nodes, extreme ultraviolet (EUV) lithography and multi-patterning solutions based on 193 nm immersion lithography are the two most common options for achieving the size requirement. In such small line-and-space patterns, line width roughness (LWR) and line edge roughness (LER) contribute a significant amount of process variation that impacts both physical and electrical performance. In this paper, we focus on optimizing line roughness by using wafer stress engineering on a 30 nm pitch line-and-space pattern. This pattern is generated by a self-aligned quadruple patterning (SAQP) technique for potential application to fin formation. Our investigation starts by comparing film materials and stress levels across various processing steps and material selections in the SAQP integration scheme. From the cross-matrix comparison, we determine the best film stack and stress combination to achieve the lowest line roughness while preserving pattern validity after fin etch. This stack is also used to study step-by-step line roughness from SAQP through fin etch. Finally, we show successful patterning of a 30 nm pitch line-and-space SAQP scheme with 1 nm line roughness.

  18. Saxon Math. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2013

    2013-01-01

    "Saxon Math," published by Houghton Mifflin Harcourt, is a core curriculum for students in grades K-5. A distinguishing feature of the curriculum is its use of an incremental approach for instruction and assessment. This approach limits the amount of new math content delivered to students each day and allows time for daily practice. New…

  19. Saxon Math. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2013

    2013-01-01

    "Saxon Math", published by Houghton Mifflin Harcourt, is a core curriculum for students in grades K-12. This report includes studies that investigate the potential impact of "Saxon Math" for students in grades 6-8. A distinguishing feature of the curriculum is its use of an incremental approach for instruction and assessment.…

  20. Longitudinal associations between dental caries increment and risk factors in late childhood and adolescence.

    PubMed

    Curtis, Alexandra M; VanBuren, John; Cavanaugh, Joseph E; Warren, John J; Marshall, Teresa A; Levy, Steven M

    2018-05-12

    To assess longitudinal associations between permanent tooth caries increment and both modifiable and non-modifiable risk factors, using best subsets model selection. The Iowa Fluoride Study has followed a birth cohort with standardized caries exams without radiographs of the permanent dentition conducted at about ages 9, 13, and 17 years. Questionnaires were sent semi-annually to assess fluoride exposures and intakes, select food and beverage intakes, and tooth brushing frequency. Exposure variables were averaged over ages 7-9, 11-13, and 15-17, reflecting exposure 2 years prior to the caries exam. Longitudinal models were used to relate period-specific averaged exposures and demographic variables to adjusted decayed and filled surface increments (ADJCI) (n = 392). The Akaike Information Criterion (AIC) was used to assess optimal explanatory variable combinations. From birth to age 9, 9-13, and 13-17 years, 24, 30, and 55 percent of subjects had positive permanent ADJCI, respectively. Ten models had AIC values within two units of the lowest AIC model and were deemed optimal based on AIC. Younger age, being male, higher mother's education, and higher brushing frequency were associated with lower caries increment in all 10 models, while milk intake was included in 3 of 10 models. Higher milk intakes were slightly associated with lower ADJCI. With the exception of brushing frequency, modifiable risk factors under study were not significantly associated with ADJCI. When possible, researchers should consider presenting multiple models if fit criteria cannot discern among a group of optimal models. © 2018 American Association of Public Health Dentistry.
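    The model-screening rule used above (models within two AIC units of the minimum deemed comparable) can be sketched directly. For Gaussian (least-squares) models, AIC can be written, up to an additive constant, as n·ln(RSS/n) + 2k. The candidate models and residual sums of squares below are synthetic, purely for illustration.

```python
# Sketch of AIC-based best-subsets screening with the "within 2 units" rule.
import math

def aic_ls(rss, n, k):
    """AIC for a least-squares model (up to an additive constant)."""
    return n * math.log(rss / n) + 2 * k

n = 100
candidates = {               # model name -> (residual sum of squares, #params)
    "age+sex": (52.0, 3),
    "age+sex+brushing": (48.0, 4),
    "age+sex+brushing+milk": (47.5, 5),
    "full": (47.4, 8),
}

aics = {name: aic_ls(rss, n, k) for name, (rss, k) in candidates.items()}
best = min(aics.values())
# Every model within 2 AIC units of the minimum is retained as "optimal".
supported = sorted(name for name, a in aics.items() if a - best <= 2.0)
print(supported)  # ['age+sex+brushing', 'age+sex+brushing+milk']
```

Note how a model with one extra parameter and a marginal fit improvement stays in the supported set, which is why the abstract reports multiple optimal models rather than a single winner.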

  1. Incremental Learning With Selective Memory (ILSM): Towards Fast Prostate Localization for Image Guided Radiotherapy

    PubMed Central

    Gao, Yaozong; Zhan, Yiqiang

    2015-01-01

    Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence, allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT. PMID:24495983

  2. A computational procedure for multibody systems including flexible beam dynamics

    NASA Technical Reports Server (NTRS)

    Downer, J. D.; Park, K. C.; Chiou, J. C.

    1990-01-01

    A computational procedure suitable for the solution of equations of motions for flexible multibody systems has been developed. The flexible beams are modeled using a fully nonlinear theory which accounts for both finite rotations and large deformations. The present formulation incorporates physical measures of conjugate Cauchy stress and covariant strain increments. As a consequence, the beam model can easily be interfaced with real-time strain measurements and feedback control systems. A distinct feature of the present work is the computational preservation of total energy for undamped systems; this is obtained via an objective strain increment/stress update procedure combined with an energy-conserving time integration algorithm which contains an accurate update of angular orientations. The procedure is demonstrated via several example problems.

  3. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546
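    The accelerating-nonlinearity account mentioned above can be illustrated with a Naka-Rushton-style transducer, R(c) = c^p / (c^q + z): the contrast increment needed to produce a fixed response change first falls below the no-pedestal detection threshold (the classic "dipper") and then rises as pedestal contrast grows. The parameter values below are illustrative, not fitted to the study's data.

```python
# Sketch of increment thresholds under a Naka-Rushton transducer
# (illustrative parameters), showing facilitation then masking.

def response(c, p=2.4, q=2.0, z=0.01):
    return c ** p / (c ** q + z)

def increment_threshold(pedestal, delta_r=0.05, step=1e-5):
    """Smallest dc with response(pedestal + dc) - response(pedestal) >= delta_r."""
    dc = 0.0
    while response(pedestal + dc) - response(pedestal) < delta_r:
        dc += step
    return dc

t0 = increment_threshold(0.0)      # detection threshold (no pedestal)
t_low = increment_threshold(0.1)   # low-contrast pedestal: facilitation
t_high = increment_threshold(0.6)  # high-contrast pedestal: masking
assert t_low < t0 < t_high         # the classic dipper shape
print(round(t0, 3), round(t_low, 3), round(t_high, 3))
```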

  4. International Space Station Increment-4/5 Microgravity Environment Summary Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy

    2003-01-01

    This summary report presents the results of some of the processed acceleration data measured aboard the International Space Station during the period of December 2001 to December 2002. Unlike the past two ISS Increment reports, which were increment specific, this summary report covers two increments: Increments 4 and 5, hereafter referred to as Increment-4/5. Two accelerometer systems were used to measure the acceleration levels for the activities that took place during Increment-4/5. Due to time constraints and a lack of precise timeline information regarding some payload operations and station activities, not all of the activities were analyzed for this report. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System supports science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit supports experiments requiring vibratory acceleration measurement. The International Space Station Increment-4/5 reduced gravity environment analysis presented in this report uses acceleration data collected by both sets of accelerometer systems. The Microgravity Acceleration Measurement System consists of two sensors: the low-frequency Orbital Acceleration Research Experiment Sensor Subsystem and the higher frequency High Resolution Accelerometer Package. The low frequency sensor measures up to 1 Hz, but is routinely trimmean filtered to yield much lower frequency acceleration data up to 0.01 Hz. This filtered data can be mapped to arbitrary locations for characterizing the quasi-steady environment for payloads and the vehicle. The high frequency sensor is used to characterize the vibratory environment up to 100 Hz at a single measurement location. The Space Acceleration Measurement System, which deploys high frequency sensors, measures vibratory acceleration data in the range of 0.01 to 400 Hz at multiple measurement locations. This summary report presents analysis of some selected quasi-steady and vibratory activities measured by these accelerometers during Increment-4/5 from December 2001 to December 2002.

  5. Modular Polyethylene Inserts for Total Knee Arthroplasty: Can Surgeons Detect 1-mm Thickness Increments?

    PubMed

    Yoo, Joanne Y; Cai, Jenny; Chen, Antonia F; Austin, Matthew S; Sharkey, Peter F

    2016-05-01

    Some manufacturers have introduced polyethylene (PE) inserts in 1-mm increment thickness options to allow for finer adjustments in total knee arthroplasty kinematics. Two surgeons with extensive experience performed 88 total knee arthroplasties using implants with 1-mm PE inserts. After trial components were inserted and the optimal PE thickness was selected, the insert was removed and a trial insert size was randomly chosen from opaque envelopes (1-mm smaller, same size, and 1-mm larger). The knee was re-examined and the surgeon determined which size PE had been placed. Surgeons reliably determined insert thicknesses in 62.5% (55 of 88; P = .050) of trials. Surgeons were not able to accurately detect 1-mm incremental changes of trial PE implants on a consistent basis. The potential clinical usefulness of this concept should be further evaluated. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Modern traffic control devices to improve safety at rural intersections.

    DOT National Transportation Integrated Search

    2011-12-01

    "Engineers with the Texas Department of Transportation (TxDOT) frequently make changes to traffic control devices (TCDs) to improve intersection safety. To use available funds judiciously, engineers make incremental changes in order to select the...

  7. Powerful Electromechanical Linear Actuator

    NASA Technical Reports Server (NTRS)

    Cowan, John R.; Myers, William N.

    1994-01-01

    Powerful electromechanical linear actuator designed to replace hydraulic actuator that provides incremental linear movements to large object and holds its position against heavy loads. Electromechanical actuator cleaner and simpler, and needs less maintenance. Two principal innovative features that distinguish new actuator are use of shaft-angle resolver as source of position feedback to electronic control subsystem and antibacklash gearing arrangement.

  8. Projecting Grammatical Features in Nominals: Cognitive Processing Theory & Computational Implementation

    DTIC Science & Technology

    2010-03-01

    functionality and plausibility distinguishes this research from most research in computational linguistics and computational psycholinguistics. The... Psycholinguistic Theory There is extensive psycholinguistic evidence that human language processing is essentially incremental and interactive...challenges of psycholinguistic research is to explain how humans can process language effortlessly and accurately given the complexity and ambiguity that is

  9. Increased Activation in Superior Temporal Gyri as a Function of Increment in Phonetic Features

    ERIC Educational Resources Information Center

    Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten

    2011-01-01

    A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as music instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in…

  10. Characterization of air profiles impeded by plant canopies for a variable-rate air-assisted sprayer

    USDA-ARS?s Scientific Manuscript database

    The preferential design for variable-rate orchard and nursery sprayers relies on tree structure to control liquid and air flow rates. Demand for this advanced feature has grown incrementally along with public demand for reduced pesticide use. A variable-rate, air assisted, five-port sprayer had been in...

  11. Space station (modular) mission analysis. Volume 1: Mission analysis

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The mission analysis on the modular space station considers experimental requirements and options characterized by low initial cost and incremental manning. Features that affect initial development and early operating costs are identified and their impacts on the program are assessed. Considered are the areas of experiment, mission, operations, information management, and long life and safety analyses.

  12. Cobweb/3: A portable implementation

    NASA Technical Reports Server (NTRS)

    Mckusick, Kathleen; Thompson, Kevin

    1990-01-01

    An algorithm is examined for data clustering and incremental concept formation. An overview is given of the Cobweb/3 system and the algorithm on which it is based, as well as the practical details of obtaining and running the system code. The implementation features a flexible user interface which includes a graphical display of the concept hierarchies that the system constructs.

  13. Warfighter Information Network-Tactical Increment 3 (WIN-T Inc 3)

    DTIC Science & Technology

    2013-12-01

    T vehicles employed at BCT, Fires, AVN, BfSB, and select force...passengers and crew from small arms fire, mines, IED and other anti-vehicle/personnel threats. AVN, BfSB, and select force pooled assets operating within the

  14. Reducing workpieces to their base geometry for multi-step incremental forming using manifold harmonics

    NASA Astrophysics Data System (ADS)

    Carette, Yannick; Vanhove, Hans; Duflou, Joost

    2018-05-01

    Single Point Incremental Forming is a flexible process that is well-suited for small batch production and rapid prototyping of complex sheet metal parts. The distributed nature of the deformation process and the unsupported sheet imply that controlling the final accuracy of the workpiece is challenging. To improve the process limits and the accuracy of SPIF, the use of multiple forming passes has been proposed and discussed by a number of authors. Most methods use multiple intermediate models, where the previous one is strictly smaller than the next one, while gradually increasing the workpieces' wall angles. Another method that can be used is the manufacture of a smoothed-out "base geometry" in the first pass, after which more detailed features can be added in subsequent passes. In both methods, the selection of these intermediate shapes is freely decided by the user. However, their practical implementation in the production of complex freeform parts is not straightforward. The original CAD model can be manually adjusted or completely new CAD models can be created. This paper discusses an automatic method that is able to extract the base geometry from a full STL-based CAD model in an analytical way. Harmonic decomposition is used to express the final geometry as the sum of individual surface harmonics. It is then possible to filter these harmonic contributions to obtain a new CAD model with a desired level of geometric detail. This paper explains the technique and its implementation, as well as its use in the automatic generation of multi-step geometries.
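
    The filtering step described above can be illustrated in one dimension, where harmonic decomposition reduces to a discrete Fourier transform: keeping only the lowest-frequency coefficients yields a smoothed "base geometry" of a profile, with fine detail removed. A minimal sketch (the paper's manifold harmonics operate on 3D STL meshes, not 1D signals; all values here are illustrative):

```python
import cmath
import math

def dft(signal):
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(coeffs):
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def base_geometry(profile, keep):
    """Low-pass filter: retain the `keep` lowest harmonics (plus their
    conjugate mirror coefficients), discarding fine geometric detail."""
    coeffs = dft(profile)
    n = len(coeffs)
    return idft([c if k <= keep or k >= n - keep else 0.0
                 for k, c in enumerate(coeffs)])

# A coarse bump (harmonic 1) plus a fine ripple (harmonic 12): filtering
# with keep=3 leaves only the bump, i.e. the base geometry.
n = 64
profile = [math.sin(2 * math.pi * t / n) + 0.2 * math.sin(2 * math.pi * 12 * t / n)
           for t in range(n)]
smooth = base_geometry(profile, keep=3)
```

    The returned profile retains only the coarse bump, playing the role of the first-pass base geometry; the ripple would be restored in later, more detailed passes.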

  15. Incremental online learning in high dimensions.

    PubMed

    Vijayakumar, Sethu; D'Souza, Aaron; Schaal, Stefan

    2005-12-01

    Locally weighted projection regression (LWPR) is a new algorithm for incremental nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its core, it employs nonparametric regression with locally linear models. In order to stay computationally efficient and numerically robust, each local model performs the regression analysis with a small number of univariate regressions in selected directions in input space in the spirit of partial least squares regression. We discuss when and how local learning techniques can successfully work in high-dimensional spaces and review the various techniques for local dimensionality reduction before finally deriving the LWPR algorithm. The properties of LWPR are that it (1) learns rapidly with second-order learning methods based on incremental training, (2) uses statistically sound stochastic leave-one-out cross validation for learning without the need to memorize training data, (3) adjusts its weighting kernels based on only local information in order to minimize the danger of negative interference of incremental learning, (4) has a computational complexity that is linear in the number of inputs, and (5) can deal with a large number of-possibly redundant-inputs, as shown in various empirical evaluations with up to 90 dimensional data sets. For a probabilistic interpretation, predictive variance and confidence intervals are derived. To our knowledge, LWPR is the first truly incremental spatially localized learning method that can successfully and efficiently operate in very high-dimensional spaces.
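
    The local models at the core of LWPR are linear fits weighted by Gaussian receptive fields. A much-reduced one-dimensional sketch of such locally weighted regression (omitting LWPR's incremental PLS updates, receptive-field adaptation and confidence bounds; the bandwidth and data are illustrative):

```python
import math

def loess_predict(x_query, xs, ys, bandwidth=0.15):
    """Fit a linear model weighted by a Gaussian receptive field centred
    on the query point, then evaluate it at the query."""
    w = [math.exp(-0.5 * ((x - x_query) / bandwidth) ** 2) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    var = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    slope = cov / var if var > 1e-12 else 0.0
    return my + slope * (x_query - mx)

xs = [i / 20 for i in range(41)]        # inputs 0.0 .. 2.0
ys = [math.sin(2 * x) for x in xs]      # smooth target function
pred = loess_predict(1.0, xs, ys)       # should approximate sin(2.0)
```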

  16. Comparative study on growth performance of two shade trees in tea agroforestry system.

    PubMed

    Kalita, Rinku Moni; Das, Ashesh Kumar; Nath, Arun Jyoti

    2014-07-01

    An attempt was made to study the stem growth of two native dominant shade tree species, in terms of annual girth increment in three dominant girth size categories, over two years in a tea agroforestry system of Barak Valley, Assam. Fifty two sampling plots of 0.1 ha were established and all trees exceeding 10 cm girth over bark at breast height (1.37 m) were uniquely identified, tagged, and annually measured for girth increment using a metal tape during December 2010-12. Albizia lebbeck and A. odoratissima were the dominant shade tree species, together accounting for 82% of the individuals studied. Girth was divided into six classes, of which the 30-50 cm, 50-70 cm and 70-90 cm classes were dominant and were selected for the increment study. Mean annual girth increment ranged from 1.41 cm in Albizia odoratissima (50-70 cm girth class) to 2.97 cm in Albizia lebbeck (70-90 cm girth class) in the first year, and from 1.70 cm in Albizia odoratissima (50-70 cm girth class) to 3.09 cm in Albizia lebbeck (70-90 cm girth class) in the second year. Albizia lebbeck exhibited better growth in all prominent girth classes compared with Albizia odoratissima during the observation period. The two shade tree species showed a similar trend of growth in both years of observation and a significant difference in girth increment.

  17. The Theory-based Influence of Map Features on Risk Beliefs: Self-reports of What is Seen and Understood for Maps Depicting an Environmental Health Hazard

    PubMed Central

    Vatovec, Christine

    2013-01-01

    Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. We report results from thirteen cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed three formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (pre-attentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared to abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: pre-attentive “incremental risk” meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals. PMID:22715919

  18. The theory-based influence of map features on risk beliefs: self-reports of what is seen and understood for maps depicting an environmental health hazard.

    PubMed

    Severtson, Dolores J; Vatovec, Christine

    2012-08-01

    Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. The authors report results from 13 cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed 3 formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (preattentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared with abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: preattentive "incremental risk" meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals.

  19. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    PubMed

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

    The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three, incremental intensity, cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm, P < 0.005; UF 1961 ± 1779 mm, P = 0.002; LF 1608 ± 1404 mm, P = 0.002; HM 849 ± 642 mm, P < 0.001). UF movement was greater than LF movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm; P < 0.001). Significant medium to large non-linear relationships were found between facial movement and power output (r² = 0.24-0.31), HR (r² = 0.26-0.33), [La⁻] (r² = 0.33-0.44) and RPE (r² = 0.38-0.45). The findings demonstrate the potential utility of facial feature tracking as a non-invasive, psychophysiological measure of exercise intensity.

  20. Online Bimanual Manipulation Using Surface Electromyography and Incremental Learning.

    PubMed

    Strazzulla, Ilaria; Nowak, Markus; Controzzi, Marco; Cipriani, Christian; Castellini, Claudio

    2017-03-01

    The paradigm of simultaneous and proportional myocontrol of hand prostheses is gaining momentum in the rehabilitation robotics community. As opposed to the traditional surface electromyography classification schema, in simultaneous and proportional control the desired force/torque at each degree of freedom of the hand/wrist is predicted in real-time, giving the individual a more natural experience, reducing cognitive effort and improving dexterity in daily-life activities. In this study we apply such an approach in a realistic manipulation scenario, using 10 non-linear incremental regression machines to predict the desired torques for each motor of two robotic hands. The prediction is enforced using two sets of surface electromyography electrodes and an incremental, non-linear machine learning technique called Incremental Ridge Regression with Random Fourier Features. Nine able-bodied subjects were engaged in a functional test with the aim of evaluating the performance of the system. The robotic hands were mounted on two hand/wrist orthopedic splints worn by healthy subjects and controlled online. An average completion rate of more than 95% was achieved in single-handed tasks and 84% in bimanual tasks. On average, 5 min of retraining were necessary in a total session duration of about 1 h and 40 min. This work marks a first step in the study of bimanual manipulation with prostheses and will be extended through experiments with unilateral and bilateral upper limb amputees, thus increasing its scientific value.
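
    The regression scheme named above, ridge regression on a random Fourier feature expansion trained incrementally, can be sketched in scalar form. This is a generic illustration, not the authors' implementation: "incremental" here means per-sample gradient updates on streamed pairs, and all constants are illustrative.

```python
import math
import random

random.seed(0)
D = 100        # number of random Fourier features (illustrative)
LAMBDA = 1e-4  # ridge penalty
LR = 0.1       # learning rate for the incremental updates

# Random projection defining z(x) = sqrt(2/D) * cos(w*x + b), which
# approximates a Gaussian kernel (Rahimi-Recht random Fourier features).
ws = [random.gauss(0.0, 1.0) for _ in range(D)]
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def features(x):
    return [math.sqrt(2.0 / D) * math.cos(w * x + b) for w, b in zip(ws, bs)]

theta = [0.0] * D

def predict(x):
    return sum(t * zi for t, zi in zip(theta, features(x)))

def update(x, y):
    """One incremental step: gradient of the regularized squared error
    for a single streamed sample (x, y)."""
    z = features(x)
    err = sum(t * zi for t, zi in zip(theta, z)) - y
    for i in range(D):
        theta[i] -= LR * (err * z[i] + LAMBDA * theta[i])

# Stream samples of a smooth target, standing in for sEMG-to-torque pairs.
data = [(k / 10.0, math.sin(k / 10.0)) for k in range(-30, 31)]
for _ in range(300):
    random.shuffle(data)
    for x, y in data:
        update(x, y)
```

    Because each update touches one sample, the model can keep learning during use, which is what makes the short retraining sessions reported above possible.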

  1. Quantum-enhanced feature selection with forward selection and backward elimination

    NASA Astrophysics Data System (ADS)

    He, Zhimin; Li, Lvzhou; Huang, Zhiming; Situ, Haozhen

    2018-07-01

    Feature selection is a well-known preprocessing technique in machine learning, which can remove irrelevant features to improve the generalization capability of a classifier and reduce training and inference time. However, feature selection is time-consuming, particularly for applications that have thousands of features, such as image retrieval, text mining and microarray data analysis. It is crucial to accelerate the feature selection process. We propose a quantum version of wrapper-based feature selection, which converts a classical feature selection algorithm to its quantum counterpart. It is valuable for machine learning on quantum computers. In this paper, we focus on two popular kinds of feature selection methods, i.e., wrapper-based forward selection and backward elimination. The proposed feature selection algorithm can quadratically accelerate the classical one.
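
    The classical wrapper-based forward selection that the quantum algorithm accelerates is easy to state: greedily add the feature whose inclusion most improves a classifier's validation score, stopping when no candidate helps. A minimal sketch using leave-one-out accuracy of a nearest-centroid classifier as the wrapper score (the choice of classifier and the toy data are illustrative):

```python
def loo_accuracy(rows, labels, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier
    restricted to the feature indices in `feats`."""
    correct = 0
    for i in range(len(rows)):
        centroids, counts = {}, {}
        for j, (r, lab) in enumerate(zip(rows, labels)):
            if j == i:
                continue
            acc = centroids.setdefault(lab, [0.0] * len(feats))
            for k, f in enumerate(feats):
                acc[k] += r[f]
            counts[lab] = counts.get(lab, 0) + 1
        best, best_d = None, float("inf")
        for lab, acc in centroids.items():
            d = sum((rows[i][f] - acc[k] / counts[lab]) ** 2
                    for k, f in enumerate(feats))
            if d < best_d:
                best, best_d = lab, d
        correct += best == labels[i]
    return correct / len(rows)

def forward_selection(rows, labels, n_features):
    selected, score = [], 0.0
    remaining = list(range(n_features))
    while remaining:
        cand = max(remaining,
                   key=lambda f: loo_accuracy(rows, labels, selected + [f]))
        cand_score = loo_accuracy(rows, labels, selected + [cand])
        if cand_score <= score:          # stop when no feature helps
            break
        selected.append(cand)
        remaining.remove(cand)
        score = cand_score
    return selected, score

# Toy data: feature 0 separates the classes, features 1-2 are noise.
rows = [[0.0, 5.0, 1.0], [0.1, 1.0, 9.0], [0.2, 7.0, 3.0],
        [1.0, 6.0, 2.0], [1.1, 2.0, 8.0], [1.2, 7.5, 2.5]]
labels = [0, 0, 0, 1, 1, 1]
selected, score = forward_selection(rows, labels, 3)
```

    The cost the quantum version attacks is visible here: every round re-trains the wrapper classifier once per remaining candidate feature.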

  2. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  3. Assortativity and leadership emerge from anti-preferential attachment in heterogeneous networks.

    PubMed

    Sendiña-Nadal, I; Danziger, M M; Wang, Z; Havlin, S; Boccaletti, S

    2016-02-18

    Real-world networks have distinct topologies, with marked deviations from purely random networks. Many of them exhibit degree-assortativity, with nodes of similar degree more likely to link to one another. Though microscopic mechanisms have been suggested for the emergence of other topological features, assortativity has proven elusive. Assortativity can be artificially implanted in a network via degree-preserving link permutations, however this destroys the graph's hierarchical clustering and does not correspond to any microscopic mechanism. Here, we propose the first generative model which creates heterogeneous networks with scale-free-like properties in degree and clustering distributions and tunable realistic assortativity. Two distinct populations of nodes are incrementally added to an initial network by selecting a subgraph to connect to at random. One population (the followers) follows preferential attachment, while the other population (the potential leaders) connects via anti-preferential attachment: they link to lower degree nodes when added to the network. By selecting the lower degree nodes, the potential leader nodes maintain high visibility during the growth process, eventually growing into hubs. The evolution of links in Facebook empirically validates the connection between the initial anti-preferential attachment and long term high degree. In this way, our work sheds new light on the structure and evolution of social networks.

  4. Assortativity and leadership emerge from anti-preferential attachment in heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Sendiña-Nadal, I.; Danziger, M. M.; Wang, Z.; Havlin, S.; Boccaletti, S.

    2016-02-01

    Real-world networks have distinct topologies, with marked deviations from purely random networks. Many of them exhibit degree-assortativity, with nodes of similar degree more likely to link to one another. Though microscopic mechanisms have been suggested for the emergence of other topological features, assortativity has proven elusive. Assortativity can be artificially implanted in a network via degree-preserving link permutations, however this destroys the graph’s hierarchical clustering and does not correspond to any microscopic mechanism. Here, we propose the first generative model which creates heterogeneous networks with scale-free-like properties in degree and clustering distributions and tunable realistic assortativity. Two distinct populations of nodes are incrementally added to an initial network by selecting a subgraph to connect to at random. One population (the followers) follows preferential attachment, while the other population (the potential leaders) connects via anti-preferential attachment: they link to lower degree nodes when added to the network. By selecting the lower degree nodes, the potential leader nodes maintain high visibility during the growth process, eventually growing into hubs. The evolution of links in Facebook empirically validates the connection between the initial anti-preferential attachment and long term high degree. In this way, our work sheds new light on the structure and evolution of social networks.
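
    The growth rule described above is straightforward to simulate: each new node attaches to an existing node drawn with probability proportional to degree (a follower) or to inverse degree (a potential leader). A toy sketch that ignores the paper's subgraph-selection and clustering machinery (the leader fraction and seed size are illustrative):

```python
import random

random.seed(1)

def grow(n_nodes, leader_fraction=0.1, seed_size=3):
    """Grow a network one node at a time. Followers attach preferentially
    (probability ~ degree); potential leaders attach anti-preferentially
    (probability ~ 1/degree)."""
    degree = [seed_size - 1] * seed_size            # start from a small clique
    edges = [(i, j) for i in range(seed_size) for j in range(i + 1, seed_size)]
    leaders = []
    for new in range(seed_size, n_nodes):
        is_leader = random.random() < leader_fraction
        weights = ([1.0 / d for d in degree] if is_leader
                   else [float(d) for d in degree])
        target = random.choices(range(len(degree)), weights=weights)[0]
        edges.append((target, new))
        degree[target] += 1
        degree.append(1)                            # the newcomer's single link
        if is_leader:
            leaders.append(new)
    return degree, edges, leaders

degree, edges, leaders = grow(2000)
```

    Inspecting `degree` at the indices in `leaders` shows the mechanism the paper describes: leader nodes added early, having attached to low-degree neighbours, stay visible and can grow into hubs.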

  5. Perceptually Guided Photo Retargeting.

    PubMed

    Xia, Yingjie; Zhang, Luming; Hong, Richang; Nie, Liqiang; Yan, Yan; Shao, Ling

    2016-04-22

    We propose perceptually guided photo retargeting, which shrinks a photo by simulating a human's process of sequentially perceiving visually/semantically important regions in a photo. In particular, we first project the local features (graphlets in this paper) onto a semantic space, wherein visual cues such as global spatial layout and rough geometric context are exploited. Thereafter, a sparsity-constrained learning algorithm is derived to select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path which simulates how a human actively perceives semantics in a photo. Furthermore, we learn the prior distribution of such active graphlet paths (AGPs) from training photos that are marked as esthetically pleasing by multiple users. The learned priors enforce the corresponding AGP of a retargeted photo to be maximally similar to those from the training photos. On top of the retargeting model, we further design an online learning scheme to incrementally update the model with new photos that are esthetically pleasing. The online update module makes the algorithm less dependent on the number and contents of the initial training data. Experimental results show that: 1) the proposed AGP is over 90% consistent with human gaze shifting path, as verified by the eye-tracking data, and 2) the retargeting algorithm outperforms its competitors significantly, as AGP is more indicative of photo esthetics than conventional saliency maps.

  6. Feature Selection for Chemical Sensor Arrays Using Mutual Information

    PubMed Central

    Wang, X. Rosalind; Lizier, Joseph T.; Nowotny, Thomas; Berna, Amalia Z.; Prokopenko, Mikhail; Trowell, Stephen C.

    2014-01-01

    We address the problem of feature selection for classifying a diverse set of chemicals using an array of metal oxide sensors. Our aim is to evaluate a filter approach to feature selection with reference to previous work, which used a wrapper approach on the same data set, and established best features and upper bounds on classification performance. We selected feature sets that exhibit the maximal mutual information with the identity of the chemicals. The selected features closely match those found to perform well in the previous study using a wrapper approach to conduct an exhaustive search of all permitted feature combinations. By comparing the classification performance of support vector machines (using features selected by mutual information) with the performance observed in the previous study, we found that while our approach does not always give the maximum possible classification performance, it always selects features that achieve classification performance approaching the optimum obtained by exhaustive search. We performed further classification using the selected feature set with some common classifiers and found that, for the selected features, Bayesian Networks gave the best performance. Finally, we compared the observed classification performances with the performance of classifiers using randomly selected features. We found that the selected features consistently outperformed randomly selected features for all tested classifiers. The mutual information filter approach is therefore a computationally efficient method for selecting near optimal features for chemical sensor arrays. PMID:24595058
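
    A filter of this kind ranks features by the empirical mutual information between each (discrete or discretized) feature and the class label, then keeps the top-ranked subset. A minimal sketch on toy discrete data (the paper's estimator and sensor-array specifics are not reproduced):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits for two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_top_k(columns, labels, k):
    """Rank feature columns by MI with the labels; return the top-k indices."""
    scores = [(mutual_information(col, labels), i)
              for i, col in enumerate(columns)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

labels      = [0, 0, 0, 0, 1, 1, 1, 1]
informative = [0, 0, 0, 0, 1, 1, 1, 1]   # perfectly predicts the label
partial     = [0, 0, 1, 1, 1, 1, 1, 1]   # partly predictive
noise       = [0, 1, 0, 1, 0, 1, 0, 1]   # independent of the label
columns = [noise, partial, informative]
```

    Unlike a wrapper, this filter never trains a classifier, which is the source of its computational efficiency.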

  7. Physical aspects of heat generation/absorption in the second grade fluid flow due to Riga plate: Application of Cattaneo-Christov approach

    NASA Astrophysics Data System (ADS)

    Anjum, Aisha; Mir, N. A.; Farooq, M.; Javed, M.; Ahmad, S.; Malik, M. Y.; Alshomrani, A. S.

    2018-06-01

    The present article concentrates on thermal stratification in the flow of a second grade fluid past a Riga plate with linear stretching towards a stagnation region. The heat transfer phenomenon is disclosed with heat generation/absorption. A Riga plate is an electromagnetic actuator which comprises permanent magnets and alternating electrodes placed on a plane surface. The Cattaneo-Christov heat flux model is implemented to analyze the features of heat transfer. This heat flux model is a generalization of the classical Fourier's law that incorporates the contribution of thermal relaxation time. For the first time, the heat generation/absorption effect is computed with the non-Fourier law of heat conduction (i.e., the Cattaneo-Christov heat flux model). Transformations are used to obtain the governing non-linear ordinary differential equations. Approximate convergent solutions are developed for the non-dimensionalized governing problems. Physical features of the velocity and temperature distributions are graphically analyzed for various parameters in 2D and 3D. It is noted that the velocity field is enhanced with an increase in the modified Hartmann number, while it is reduced with an increasing variable thickness parameter. An increase in the modified heat generation parameter reduces the temperature field.

  8. Rough sets and Laplacian score based cost-sensitive feature selection

    PubMed Central

    Yu, Shenglong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884

  9. Rough sets and Laplacian score based cost-sensitive feature selection.

    PubMed

    Yu, Shenglong; Zhao, Hong

    2018-01-01

    Cost-sensitive feature selection is an important preprocessing step in machine learning and data mining. Most existing cost-sensitive feature selection algorithms are heuristic: they evaluate the importance of each feature individually and select features one by one, and thus do not consider the relationships among features. In this paper, we propose a new algorithm for minimal-cost feature selection, called rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and the Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes the relationships among features into consideration through the locality preservation of the Laplacian score. We select a feature subset with maximal feature importance and minimal cost, considering the two criteria in parallel; the cost is given by three different distributions to simulate different applications. Unlike existing cost-sensitive feature selection algorithms, our algorithm selects a predetermined number of "good" features simultaneously. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum-cost subset. In addition, the results of our method are more promising than those of other cost-sensitive feature selection algorithms.

  10. Driving electromechanically assisted Gait Trainer for people with stroke.

    PubMed

    Iosa, Marco; Morone, Giovanni; Bragoni, Maura; De Angelis, Domenico; Venturiero, Vincenzo; Coiro, Paola; Pratesi, Luca; Paolucci, Stefano

    2011-01-01

    Electromechanically assisted gait training is a promising task-oriented approach for gait restoration, especially for people with subacute stroke. However, few guidelines are available for selecting the parameter values of the electromechanical Gait Trainer (GT) (Reha-Stim; Berlin, Germany) and none is tailored to a patient's motor capacity. We assessed 342 GT sessions performed by 20 people with stroke who were stratified by Functional Ambulatory Category. In the first GT session of all patients, the body-weight support (BWS) required was higher than that reported in the literature. In further sessions, we noted a slow reduction of BWS and a fast increment of walking speed for the most-affected patients. Inverse trends were observed for the less-affected patients. In all the patients, the heart rate increment was about 20 beats per minute, even for sessions in which the number of strides performed was up to 500. In addition, the effective BWS measured during GT sessions was different from that initially selected by the physiotherapist. This difference depended mainly on the position of the GT platforms during selection. Finally, harness acceleration in the anteroposterior direction proved to be higher in patients with stroke than in nondisabled subjects. Our findings are an initial step toward scientifically selecting parameters in electromechanically assisted gait training.

  11. The Dynamics of Perceptual Learning: An Incremental Reweighting Model

    ERIC Educational Resources Information Center

    Petrov, Alexander A.; Dosher, Barbara Anne; Lu, Zhong-Lin

    2005-01-01

    The mechanisms of perceptual learning are analyzed theoretically, probed in an orientation-discrimination experiment involving a novel nonstationary context manipulation, and instantiated in a detailed computational model. Two hypotheses are examined: modification of early cortical representations versus task-specific selective reweighting.…

  12. Tractable Goal Selection with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2009-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
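
The "just-in-time" greedy selection idea can be illustrated with a toy sketch. The abstract does not specify the algorithm's internals, so the value-density heuristic and the single-resource model below are assumptions for illustration only.

```python
def select_goals(requests, capacity):
    """Greedily admit goal requests under a single oversubscribed resource.

    requests: list of (name, value, demand) tuples with demand > 0.
    Sorting by value density gives a fast, near-optimal knapsack-style
    selection that can simply be rerun whenever requests change.
    """
    selected, used = [], 0.0
    for name, value, demand in sorted(requests,
                                      key=lambda r: r[1] / r[2],
                                      reverse=True):
        if used + demand <= capacity:   # admit only if resource remains
            selected.append(name)
            used += demand
    return selected
```

Because the whole pass is a single sort plus a linear scan, re-running it at the last minute on an updated request list is cheap, which is the property the "just-in-time" postponement depends on.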

  13. Integrating Climate Change Resilience Features into the Incremental Refinement of an Existing Marine Park

    PubMed Central

    Beckley, Lynnath E.; Kobryn, Halina T.; Lombard, Amanda T.; Radford, Ben; Heyward, Andrew

    2016-01-01

    Marine protected area (MPA) designs are likely to require iterative refinement as new knowledge is gained. In particular, there is an increasing need to consider the effects of climate change, especially the ability of ecosystems to resist and/or recover from climate-related disturbances, within the MPA planning process. However, there has been limited research addressing the incorporation of climate change resilience into MPA design. This study used Marxan conservation planning software with fine-scale shallow water (<20 m) bathymetry and habitat maps, models of major benthic communities for deeper water, and comprehensive human use information from Ningaloo Marine Park in Western Australia to identify climate change resilience features to integrate into the incremental refinement of the marine park. The study assessed the representation of benthic habitats within the current marine park zones, identified priority areas of high resilience for inclusion within no-take zones and examined if any iterative refinements to the current no-take zones are necessary. Of the 65 habitat classes, 16 did not meet representation targets within the current no-take zones, most of which were in deeper offshore waters. These deeper areas also demonstrated the highest resilience values and, as such, Marxan outputs suggested minor increases to the current no-take zones in the deeper offshore areas. This work demonstrates that inclusion of fine-scale climate change resilience features within the design process for MPAs is feasible, and can be applied to future marine spatial planning practices globally. PMID:27529820

  14. Active learning: a step towards automating medical concept extraction.

    PubMed

    Kholghi, Mahnoosh; Sitbon, Laurianne; Zuccon, Guido; Nguyen, Anthony

    2016-03-01

    This paper presents an automatic, active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, (1) the contribution of active learning in reducing the annotation effort and (2) the robustness of an incremental active learning framework across different selection criteria and data sets are determined. The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional random fields were used as the supervised method, with least confidence and information density as the 2 selection criteria for the active learning framework. The effect of incremental learning vs standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. The following 2 clinical data sets were used for evaluation: the Informatics for Integrating Biology and the Bedside/Veteran Affairs (i2b2/VA) 2010 natural language processing challenge and the Shared Annotated Resources/Conference and Labs of the Evaluation Forum (ShARe/CLEF) 2013 eHealth Evaluation Lab. The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57%, and 46% of the total number of sequences, tokens, and concepts, respectively. Compared with the random sampling baseline, the saving is at least doubled. Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
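
The least-confidence criterion mentioned above is simple to sketch. The code below is a generic pool-based active learning loop, not the paper's CRF system; `predict_proba` and `oracle` are placeholder callables standing in for a (re)trained model and a human annotator.

```python
def least_confidence(prob_dists):
    """Index of the pool item whose top class probability is lowest."""
    return min(range(len(prob_dists)), key=lambda i: max(prob_dists[i]))

def active_learning_loop(pool, oracle, predict_proba, budget):
    """Pool-based loop: query the least-confident item, label it, repeat.

    oracle(x) plays the annotator; predict_proba(labeled, x) plays a model
    retrained (incrementally, in the paper's setting) on the labeled set.
    """
    labeled = []
    for _ in range(budget):
        if not pool:
            break
        probs = [predict_proba(labeled, x) for x in pool]
        x = pool.pop(least_confidence(probs))   # most uncertain item
        labeled.append((x, oracle(x)))
    return labeled
```

The annotation saving the paper reports comes from this loop spending the labeling budget on uncertain items first, rather than sampling the pool at random.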

  15. CASPER: computer-aided segmentation of imperceptible motion-a learning-based tracking of an invisible needle in ultrasound.

    PubMed

    Beigi, Parmida; Rohling, Robert; Salcudean, Septimiu E; Ng, Gary C

    2017-11-01

    This paper presents a new micro-motion-based approach to track a needle in ultrasound images captured by a handheld transducer. We propose a novel learning-based framework to track a handheld needle by detecting microscale variations of motion dynamics over time. The current state of the art in using motion analysis for needle detection relies on absolute motion and hence works well only when the transducer is static. We have introduced and evaluated novel spatiotemporal and spectral features, obtained from the phase image, in a self-supervised tracking framework to improve the detection accuracy in subsequent frames using incremental training. Our proposed tracking method involves volumetric feature selection and differential flow analysis to incorporate the neighboring pixels and mitigate the effects of the subtle tremor motion of a handheld transducer. To evaluate the detection accuracy, the method is tested on porcine tissue in vivo, during needle insertion in the biceps femoris muscle. Experimental results show mean, standard deviation, and root-mean-square errors of [Formula: see text], [Formula: see text] and [Formula: see text] in the insertion angle, and 0.82, 1.21, and 1.47 mm in the needle tip position, respectively. Compared to appearance-based detection approaches, the proposed method is especially suitable for needles with ultrasonic characteristics that are imperceptible in the static image and to the naked eye.

  16. The compositional transition of vertebrate genomes: an analysis of the secondary structure of the proteins encoded by human genes.

    PubMed

    D'Onofrio, Giuseppe; Ghosh, Tapash Chandra

    2005-01-17

    Fluctuations and increments of both C(3) and G(3) levels along the human coding sequences were investigated by comparing two sets of Xenopus/human orthologous genes. The first set of genes shows minor differences of the GC(3) levels, the second shows considerable increments of the GC(3) levels in the human genes. In both data sets, the fluctuations of C(3) and G(3) levels along the coding sequences correlated with the secondary structures of the encoded proteins. The human genes that underwent the compositional transition showed a different increment of the C(3) and G(3) levels within and among the structural units of the proteins. The relative synonymous codon usage (RSCU) of several amino acids was also affected during the compositional transition, showing that there exists a correlation between RSCU and protein secondary structures in human genes. The importance of natural selection for the formation of isochore organization of the human genome has been discussed on the basis of these results.

  17. Functional data analysis on ground reaction force of military load carriage increment

    NASA Astrophysics Data System (ADS)

    Din, Wan Rozita Wan; Rambely, Azmin Sham

    2014-06-01

    The ground reaction force (GRF) during military load carriage is analyzed using the functional data analysis (FDA) statistical technique. The main objective of the research is to investigate the effect of 10% load increments and to find the maximum suitable load for the Malaysian military. Ten soldiers (age 31 ± 6.2 years, weight 71.6 ± 10.4 kg, height 166.3 ± 5.9 cm) carrying military loads ranging from 0% body weight (BW) up to 40% BW participated in an experiment to gather GRF and kinematic data using a Vicon Motion Analysis System, Kistler force plates, and thirty-nine body markers. The analysis is conducted in the sagittal, medial-lateral, and anterior-posterior planes. The results show that a 10% BW load increment has an effect at heel strike and toe-off in all three planes analyzed, with P-values less than 0.001 at the 0.05 significance level. FDA proves to be one of the best statistical techniques for analyzing functional data, with the ability to handle filtering, smoothing, and curve alignment according to curve features and points of interest.

  18. SAMS Acceleration Measurements on Mir (NASA Increment 4)

    NASA Technical Reports Server (NTRS)

    DeLombard, Richard

    1998-01-01

    During NASA Increment 4 (January to May 1997), about 5 gigabytes of acceleration data were collected by the Space Acceleration Measurement System (SAMS) onboard the Russian Space Station, Mir. The data were recorded on 28 optical disks which were returned to Earth on STS-84. During this increment, SAMS data were collected in the Priroda module to support the Mir Structural Dynamics Experiment (MiSDE), the Binary Colloidal Alloy Tests (BCAT), Angular Liquid Bridge (ALB), Candle Flames in Microgravity (CFM), Diffusion Controlled Apparatus Module (DCAM), Enhanced Dynamic Load Sensors (EDLS), Forced Flow Flame Spreading Test (FFFT), Liquid Metal Diffusion (LMD), Protein Crystal Growth in Dewar (PCG/Dewar), Queen's University Experiments in Liquid Diffusion (QUELD), and Technical Evaluation of MIM (TEM). This report points out some of the salient features of the microgravity environment to which these experiments were exposed. Also documented are mission events of interest such as the docked phase of STS-84 operations, a Progress engine burn, Soyuz vehicle docking and undocking, and Progress vehicle docking. This report presents an overview of the SAMS acceleration measurements recorded by 10 Hz and 100 Hz sensor heads. The analyses included herein complement those presented in previous summary reports prepared by the Principal Investigator Microgravity Services (PIMS) group.

  19. Online feature selection with streaming features.

    PubMed

    Wu, Xindong; Yu, Kui; Ding, Wei; Wang, Hao; Zhu, Xingquan

    2013-05-01

    We propose a new online feature selection framework for applications with streaming features where the knowledge of the full feature space is unknown in advance. We define streaming features as features that flow in one by one over time whereas the number of training examples remains fixed. This is in contrast with traditional online learning methods that only deal with sequentially added observations, with little attention being paid to streaming features. The critical challenges for Online Streaming Feature Selection (OSFS) include 1) the continuous growth of feature volumes over time, 2) a large feature space, possibly of unknown or infinite size, and 3) the unavailability of the entire feature set before learning starts. In the paper, we present a novel Online Streaming Feature Selection method to select strongly relevant and nonredundant features on the fly. An efficient Fast-OSFS algorithm is proposed to improve feature selection performance. The proposed algorithms are evaluated extensively on high-dimensional datasets and also with a real-world case study on impact crater detection. Experimental results demonstrate that the algorithms achieve better compactness and higher prediction accuracy than existing streaming feature selection algorithms.
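
A minimal sketch of the streaming idea, assuming simple correlation thresholds in place of the paper's statistical relevance and redundancy tests: features arrive one at a time, and each is kept only if it is relevant to the target and non-redundant with the features already kept.

```python
def pearson(a, b):
    """Pearson correlation between two equal-length value lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def streaming_select(feature_stream, y, rel_t=0.3, red_t=0.9):
    """Process features as they 'flow in'; thresholds are illustrative.

    feature_stream yields (feature_id, values); y is the fixed target.
    """
    kept = []                                   # list of (feature_id, values)
    for fid, values in feature_stream:
        if abs(pearson(values, y)) < rel_t:
            continue                            # weakly relevant: discard
        if any(abs(pearson(values, v)) >= red_t for _, v in kept):
            continue                            # redundant with a kept feature
        kept.append((fid, values))
    return [fid for fid, _ in kept]
```

Note that the decision for each arriving feature uses only the features kept so far, never the full feature set, matching the constraint that the entire feature space is unavailable before learning starts.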

  20. [Evaluation on the eco-economic benefits of small watershed in Beijing mountainous area: a case of Yanqi River watershed].

    PubMed

    Xiao, Hui-Jie; Wei, Zi-Gang; Wang, Qing; Zhu, Xiao-Bo

    2012-12-01

    Based on the theory of harmonious development of ecological economy, a total of 13 evaluation indices were selected from the ecological, economic, and social sub-systems of the Yanqi River watershed in Huairou District of Beijing. The selected evaluation indices were normalized using trapezoid functions, and the weights of the evaluation indices were determined by the analytic hierarchy process. The eco-economic benefits of the watershed were then evaluated with the weighted composite index method. From 2004 to 2011, the ecological, economic, and social benefits of the Yanqi River watershed all increased to some extent; the ecological benefit increased most, from 0.210 in 2004 to 0.255 in 2011, an increment of 21.5%. The eco-economic benefits of the watershed increased from 0.734 in 2004 to 0.840 in 2011, an increment of 14.2%. At present, the watershed has reached the stage of an advanced ecosystem, being in beneficial circulation and harmonious development of ecology, economy, and society.
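
The normalization-and-weighting scheme described here (trapezoid membership functions plus AHP-derived weights feeding a weighted composite index) can be sketched as follows; the breakpoints and weights below are illustrative, not the study's values.

```python
def trapezoid_norm(x, lo, opt_lo, opt_hi, hi):
    """Trapezoid membership: 0 outside [lo, hi], 1 on [opt_lo, opt_hi],
    linear ramps in between. Assumes lo < opt_lo <= opt_hi < hi."""
    if x <= lo or x >= hi:
        return 0.0
    if x < opt_lo:
        return (x - lo) / (opt_lo - lo)     # rising ramp
    if x <= opt_hi:
        return 1.0                          # optimal plateau
    return (hi - x) / (hi - opt_hi)         # falling ramp

def composite_index(values, weights):
    """Weighted composite index: sum of normalized indicator values times
    AHP-derived weights (weights assumed to sum to 1)."""
    return sum(v * w for v, w in zip(values, weights))
```

Each of the 13 indicators would be normalized to [0, 1] by its own trapezoid, then combined per sub-system with its AHP weight; the yearly index values quoted above are outputs of exactly this kind of weighted sum.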

  1. Enhancing the Discrimination Ability of a Gas Sensor Array Based on a Novel Feature Selection and Fusion Framework.

    PubMed

    Deng, Changjian; Lv, Kun; Shi, Debo; Yang, Bo; Yu, Song; He, Zhiyi; Yan, Jia

    2018-06-12

    In this paper, a novel feature selection and fusion framework is proposed to enhance the discrimination ability of gas sensor arrays for odor identification. First, we put forward an efficient feature selection method based on separability and dissimilarity to determine the feature selection order for each type of feature when increasing the dimension of the selected feature subsets. Second, the K-nearest neighbor (KNN) classifier is applied to determine the dimensions of the optimal feature subsets for the different types of features. Finally, in establishing the feature fusion, we propose a classification dominance feature fusion strategy built around an effective basic feature. Experimental results on two datasets show recognition rates of 97.5% on Database I and 80.11% on Database II when k = 1 for the KNN classifier and the distance metric is correlation distance (COR), which demonstrates the superiority of the proposed feature selection and fusion framework in representing signal features. The novel feature selection method proposed in this paper can effectively select feature subsets that are conducive to classification, while the feature fusion framework can fuse various features that describe different characteristics of the sensor signals, enhancing the discrimination ability of gas sensors and, to a certain extent, suppressing the drift effect.
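
The classification step can be illustrated with a small sketch of a KNN classifier using the correlation (COR) distance reported as the best metric; this is an illustrative reimplementation, not the authors' code.

```python
def correlation_distance(a, b):
    """COR distance: 1 - Pearson correlation of two feature vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return 1.0 - (cov / (sa * sb) if sa and sb else 0.0)

def knn_predict(train, query, k=1):
    """Majority vote among the k training vectors nearest to the query.

    train: list of (feature_vector, label) pairs.
    """
    nearest = sorted(train,
                     key=lambda t: correlation_distance(t[0], query))[:k]
    labels = [lbl for _, lbl in nearest]
    return max(set(labels), key=labels.count)
```

Because COR distance is invariant to scale and offset of the feature vector, it compares the *shape* of a sensor response rather than its magnitude, which is one plausible reason it can be less sensitive to drift than Euclidean distance.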

  2. Method and system for laser-based formation of micro-shapes in surfaces of optical elements

    DOEpatents

    Bass, Isaac Louis; Guss, Gabriel Mark

    2013-03-05

    A method of forming a surface feature extending into a sample includes providing a laser operable to emit an output beam and modulating the output beam to form a pulse train having a plurality of pulses. The method also includes a) directing the pulse train along an optical path intersecting an exposed portion of the sample at a position i and b) focusing a first portion of the plurality of pulses to impinge on the sample at the position i. Each of the plurality of pulses is characterized by a spot size at the sample. The method further includes c) ablating at least a portion of the sample at the position i to form a portion of the surface feature and d) incrementing counter i. The method includes e) repeating steps a) through d) to form the surface feature. The sample is free of a rim surrounding the surface feature.

  3. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  4. Use of Attribute Driven Incremental Discretization and Logic Learning Machine to build a prognostic classifier for neuroblastoma patients.

    PubMed

    Cangelosi, Davide; Muselli, Marco; Parodi, Stefano; Blengio, Fabiola; Becherini, Pamela; Versteeg, Rogier; Conte, Massimo; Varesio, Luigi

    2014-01-01

    A cancer patient's outcome is written, in part, in the gene expression profile of the tumor. We previously identified a 62-probe-set signature (NB-hypo) to identify tissue hypoxia in neuroblastoma tumors and showed that NB-hypo stratified neuroblastoma patients into good and poor outcome groups [1]. It was important to develop a prognostic classifier to cluster patients into risk groups benefiting from defined therapeutic approaches. Novel classification and data discretization approaches can be instrumental for the generation of accurate predictors and robust tools for clinical decision support. We explored the application to gene expression data of Rulex, a novel software suite including the Attribute Driven Incremental Discretization technique for transforming continuous variables into simplified discrete ones and the Logic Learning Machine (LLM) model for intelligible rule generation. We applied Rulex components to the problem of predicting the outcome of neuroblastoma patients on the basis of the 62-probe-set NB-hypo gene expression signature. The resulting classifier consisted of 9 rules utilizing mainly two conditions on the relative expression of 11 probe sets. These rules were very effective predictors, as shown in an independent validation set, demonstrating the validity of the LLM algorithm applied to microarray data and patients' classification. The LLM performed as efficiently as Prediction Analysis of Microarrays and Support Vector Machine, and outperformed other learning algorithms such as C4.5. Rulex carried out a feature selection, selecting a new signature (NB-hypo-II) of 11 probe sets that turned out to be the most relevant in predicting outcome among the 62 of the NB-hypo signature. The rules are easily interpretable as they involve only a few conditions.
    Our findings provided evidence that the application of Rulex to the expression values of the NB-hypo signature created a set of accurate, high-quality, consistent, and interpretable rules for the prediction of neuroblastoma patients' outcome. We identified the Rulex weighted classification as a flexible tool that can support clinical decisions. For these reasons, we consider Rulex to be a useful tool for cancer classification from microarray gene expression data.

  5. Green Infrastructure Tool | EPA Center for Exposure ...

    EPA Pesticide Factsheets

    2016-03-07

    Units option added (SI or US); the default is US units. Additional options added to the FTABLE, such as clearing the FTABLE. Significant digits for FTABLE calculations changed to 5. Previously a default Cd value was used for calculations (under-drain and riser); now a user-defined value option is given. Conversion options added wherever necessary. Default values of suction head and hydraulic conductivity are changed based on the units selected in the infiltration panel. The default value of Cd for the riser orifice and under-drain textboxes is changed to 0.6. Previously a default increment value of 0.1 was used for all the channel panels; now the user can specify the increment.

  6. Selected advanced aerodynamics and active controls technology concepts development on a derivative B-747

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The feasibility of applying wing tip extensions, winglets, and active control wing load alleviation to the Boeing 747 is investigated. Winglet aerodynamic design methods and high speed wind tunnel test results of winglets and of symmetrically deflected ailerons are presented. Structural resizing analyses to determine weight and aeroelastic twist increments for all the concepts and flutter model test results for the wing with winglets are included. Control law development, system mechanization/reliability studies, and aileron balance tab trade studies for active wing load alleviation systems are discussed. Results are presented in the form of incremental effects on L/D, structural weight, block fuel savings, stability and control, airplane price, and airline operating economics.

  7. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
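
The root criterion a decision tree uses to rank features (information gain of the best single split) can be sketched as follows. This is a simplified stand-in for the paper's DT-based search over PCA-extracted features, not the authors' implementation.

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    counts = {}
    for lbl in labels:
        counts[lbl] = counts.get(lbl, 0) + 1
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_gain(values, labels, threshold):
    """Gain of splitting one feature at a threshold (<= goes left)."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    split = (len(left) / n * entropy(left) if left else 0.0) \
          + (len(right) / n * entropy(right) if right else 0.0)
    return entropy(labels) - split

def select_by_tree(X, y, k):
    """Rank features by their best single-split information gain (the
    criterion a decision tree applies at its root) and keep the top k."""
    gains = []
    for r in range(len(X[0])):
        vals = [x[r] for x in X]
        gains.append(max(info_gain(vals, y, t) for t in set(vals)))
    return sorted(range(len(gains)), key=lambda r: gains[r],
                  reverse=True)[:k]
```

A full decision tree recurses this split search; ranking features by their root-level gain is a common, cheap approximation when the goal is selection rather than classification.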

  8. Compensatory selection for roads over natural linear features by wolves in northern Ontario: Implications for caribou conservation

    PubMed Central

    Patterson, Brent R.; Anderson, Morgan L.; Rodgers, Arthur R.; Vander Vennen, Lucas M.; Fryxell, John M.

    2017-01-01

    Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. 
    This behavioral mechanism, a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors, has negative consequences for the viability of woodland caribou. PMID:29117234

  9. Compensatory selection for roads over natural linear features by wolves in northern Ontario: Implications for caribou conservation.

    PubMed

    Newton, Erica J; Patterson, Brent R; Anderson, Morgan L; Rodgers, Arthur R; Vander Vennen, Lucas M; Fryxell, John M

    2017-01-01

    Woodland caribou (Rangifer tarandus caribou) in Ontario are a threatened species that have experienced a substantial retraction of their historic range. Part of their decline has been attributed to increasing densities of anthropogenic linear features such as trails, roads, railways, and hydro lines. These features have been shown to increase the search efficiency and kill rate of wolves. However, it is unclear whether selection for anthropogenic linear features is additive or compensatory to selection for natural (water) linear features which may also be used for travel. We studied the selection of water and anthropogenic linear features by 52 resident wolves (Canis lupus x lycaon) over four years across three study areas in northern Ontario that varied in degrees of forestry activity and human disturbance. We used Euclidean distance-based resource selection functions (mixed-effects logistic regression) at the seasonal range scale with random coefficients for distance to water linear features, primary/secondary roads/railways, and hydro lines, and tertiary roads to estimate the strength of selection for each linear feature and for several habitat types, while accounting for availability of each feature. Next, we investigated the trade-off between selection for anthropogenic and water linear features. Wolves selected both anthropogenic and water linear features; selection for anthropogenic features was stronger than for water during the rendezvous season. Selection for anthropogenic linear features increased with increasing density of these features on the landscape, while selection for natural linear features declined, indicating compensatory selection of anthropogenic linear features. These results have implications for woodland caribou conservation. Prey encounter rates between wolves and caribou seem to be strongly influenced by increasing linear feature densities. 
    This behavioral mechanism, a compensatory functional response to anthropogenic linear feature density resulting in decreased use of natural travel corridors, has negative consequences for the viability of woodland caribou.

  10. McTwo: a two-step feature selection algorithm based on maximal information coefficient.

    PubMed

    Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng

    2016-03-23

    High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features associated with phenotypes, independently of each other, and achieving high classification performance of the nearest neighbor algorithm. Based on the comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for the high-dimensional biomedical datasets.
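
The two-step filter idea can be sketched with a crude stand-in for MIC: binned mutual information (computing true MIC requires a dedicated estimator such as the minepy library). The thresholds below are illustrative, not values from the paper.

```python
import math
from collections import Counter

def mutual_info(a, b, bins=3):
    """Binned mutual information in bits; a crude stand-in for MIC."""
    def discretize(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0
        return [min(int((x - lo) / w), bins - 1) for x in v]
    da, db = discretize(a), discretize(b)
    n = len(a)
    joint = Counter(zip(da, db))
    pa, pb = Counter(da), Counter(db)
    return sum(c / n * math.log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in joint.items())

def mctwo_like(features, phenotype, rel_t=0.5, red_t=0.5):
    """Two-step filter in the spirit of McTwo: keep features strongly
    associated with the phenotype, then drop any feature already
    explained by a kept one. Thresholds are illustrative."""
    order = sorted(range(len(features)),
                   key=lambda r: mutual_info(features[r], phenotype),
                   reverse=True)
    kept = []
    for r in order:
        if mutual_info(features[r], phenotype) < rel_t:
            break                               # remaining features weaker
        if all(mutual_info(features[r], features[s]) < red_t for s in kept):
            kept.append(r)                      # relevant and non-redundant
    return kept
```

The redundancy step is what keeps the selected subset small in the "large p, small n" regime: of two near-duplicate probe sets, only the more phenotype-associated one survives.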

  11. Attentional Selection Can Be Predicted by Reinforcement Learning of Task-relevant Stimulus Features Weighted by Value-independent Stickiness.

    PubMed

    Balcarras, Matthew; Ardid, Salva; Kaping, Daniel; Everling, Stefan; Womelsdorf, Thilo

    2016-02-01

    Attention includes processes that evaluate stimuli relevance, select the most relevant stimulus against less relevant stimuli, and bias choice behavior toward the selected information. It is not clear how these processes interact. Here, we captured these processes in a reinforcement learning framework applied to a feature-based attention task that required macaques to learn and update the value of stimulus features while ignoring nonrelevant sensory features, locations, and action plans. We found that value-based reinforcement learning mechanisms could account for feature-based attentional selection and choice behavior but required a value-independent stickiness selection process to explain selection errors while at asymptotic behavior. By comparing different reinforcement learning schemes, we found that trial-by-trial selections were best predicted by a model that only represents expected values for the task-relevant feature dimension, with nonrelevant stimulus features and action plans having only a marginal influence on covert selections. These findings show that attentional control subprocesses can be described by (1) the reinforcement learning of feature values within a restricted feature space that excludes irrelevant feature dimensions, (2) a stochastic selection process on feature-specific value representations, and (3) value-independent stickiness toward previous feature selections akin to perseveration in the motor domain. We speculate that these three mechanisms are implemented by distinct but interacting brain circuits and that the proposed formal account of feature-based stimulus selection will be important to understand how attentional subprocesses are implemented in primate brain networks.

  12. Exploring the process of capacity-building among community-based health promotion workers in Alberta, Canada.

    PubMed

    Montemurro, Genevieve R; Raine, Kim D; Nykiforuk, Candace I J; Mayan, Maria

    2014-09-01

    Community capacity-building is a central element of health promotion. While capacity-building features, domains and relationships to program sustainability have been well examined, information on the process of capacity-building as experienced by practitioners is needed. This study examined this process as experienced by coordinators working within a community-based chronic disease prevention project implemented in four communities in Alberta (Canada) from 2005 to 2010, using a case study approach with a mixed-method design. Data collection involved semi-structured interviews, a focus group and program documents tracking coordinator activity. Qualitative analysis followed the constant comparative method using open, axial and selective coding. Quantitative data were analyzed for frequency of major activity distribution. The capacity-building process involves distinct stages of networking, information exchange, partnering, prioritizing, planning/implementing and supporting/sustaining. Stages are incremental though not always linear. Contextual factors exert a great influence on the process. Implications for research, practice and policy are discussed. © The Author (2013). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. The effect of feature selection methods on computer-aided detection of masses in mammograms

    NASA Astrophysics Data System (ADS)

    Hupse, Rianne; Karssemeijer, Nico

    2010-05-01

    In computer-aided diagnosis (CAD) research, feature selection methods are often used to improve generalization performance of classifiers and shorten computation times. In an application that detects malignant masses in mammograms, we investigated the effect of using a selection criterion that is similar to the final performance measure we are optimizing, namely the mean sensitivity of the system in a predefined range of the free-response receiver operating characteristics (FROC). To obtain the generalization performance of the selected feature subsets, a cross validation procedure was performed on a dataset containing 351 abnormal and 7879 normal regions, each region providing a set of 71 mass features. The same number of noise features, not containing any information, were added to investigate the ability of the feature selection algorithms to distinguish between useful and non-useful features. It was found that significantly higher performances were obtained using feature sets selected by the general test statistic Wilks' lambda than using feature sets selected by the more specific FROC measure. Feature selection leads to better performance when compared to a system in which all features were used.
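    For readers unfamiliar with the Wilks' lambda criterion that outperformed the FROC-specific measure above: per feature, it is the within-class sum of squares divided by the total sum of squares, and smaller values indicate better class separation. A minimal univariate sketch (illustrative, not the paper's implementation):

```python
def wilks_lambda(values, labels):
    """Univariate Wilks' lambda: within-group sum of squares divided by
    total sum of squares; smaller values mean better class separation."""
    grand = sum(values) / len(values)
    ss_total = sum((v - grand) ** 2 for v in values)
    ss_within = 0.0
    for cls in set(labels):
        group = [v for v, l in zip(values, labels) if l == cls]
        mean = sum(group) / len(group)
        ss_within += sum((v - mean) ** 2 for v in group)
    return ss_within / ss_total if ss_total else 1.0

def select_by_wilks(features, labels, k):
    """Rank features by ascending lambda and keep the k best."""
    ranked = sorted(features, key=lambda name: wilks_lambda(features[name], labels))
    return ranked[:k]

# Toy demo: one separating feature, one noise feature.
labels = [0, 0, 0, 1, 1, 1]
features = {
    "mass_size": [1, 2, 1, 6, 7, 6],   # class means differ strongly
    "noise":     [5, 1, 3, 3, 1, 5],   # same mean in both classes
}
top = select_by_wilks(features, labels, k=1)
# top == ["mass_size"]; wilks_lambda of "noise" is 1.0 (no separation)
```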

  14. Incremental cost of nosocomial bacteremia according to the focus of infection and antibiotic sensitivity of the causative microorganism in a university hospital.

    PubMed

    Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc

    2017-04-01

    To estimate the incremental cost of nosocomial bacteremia according to the causative focus and classified by the antibiotic sensitivity of the microorganism. Patients admitted to Hospital del Mar in Barcelona from 2005 to 2012 were included. We analyzed the total hospital costs of patients with nosocomial bacteremia caused by microorganisms with a high prevalence and, often, with multidrug resistance. A control group was defined by selecting patients without bacteremia in the same diagnosis-related group. Our hospital has a cost accounting system (full-costing) that uses activity-based criteria to estimate per-patient costs. A logistic regression was fitted to estimate the probability of developing bacteremia (propensity score) and was used for propensity-score matching adjustment. This propensity score was included in an econometric model to adjust the incremental cost of patients with bacteremia with differentiation of the causative focus and antibiotic sensitivity. The mean incremental cost was estimated at €15,526. The lowest incremental cost corresponded to bacteremia caused by multidrug-sensitive urinary infection (€6786) and the highest to primary or unknown sources of bacteremia caused by multidrug-resistant microorganisms (€29,186). This is one of the first analyses to include all episodes of bacteremia produced during hospital stays in a single study. The study included accurate information about the focus and antibiotic sensitivity of the causative organism and actual hospital costs. It provides information that could be useful to improve, establish, and prioritize prevention strategies for nosocomial infections.

  15. Use of occlusal sealant in a community program and caries incidence in high- and low-risk children.

    PubMed

    Baldini, Vânia; Tagliaferro, Elaine Pereira da Silva; Ambrosano, Gláucia Maria Bovi; Meneghim, Marcelo de Castro; Pereira, Antonio Carlos

    2011-08-01

    The aims of this study were to investigate the effectiveness of sealant placement under the guidelines of the Oral Health Promotion Program for Children and Adolescents (Portugal), and to test the influence of clinical and socioeconomic variables on the DMFT increment in 277 children, born in 1997. A dental hygienist performed the initial examinations and sealant placement (Helioseal, Vivadent) on the permanent first molars in 2005. These activities were registered in dental records that were assessed in 2007. Children were classified according to caries risk at baseline [high (HR: DMFT+dmft>0); low (LR: DMFT+dmft=0) risk] and sealant placement as follows: HR-S and LR-S Groups (with sealant placement); HR-NS and LR-NS Groups (without sealant placement). A calibrated dentist performed the final examination in 2007 at school, based on the World Health Organization recommendations. The variables collected were: dental caries, visible dental plaque, malocclusions, and socioeconomic level (questionnaire sent to children's parents). For univariate (Chi-square or Fisher tests) and multivariate (multiple logistic regression) analyses, a DMFT increment >0 was selected as the dependent variable. Approximately 17.0% of the children showed a DMFT increment >0 (mean=0.25). High-risk children presented a significant increase in the number of decayed and/or filled teeth; these children were 7.94 times more likely to develop caries. Children who did not receive sealant were 1.8 times more likely to have a DMFT increment >0. It appears that sealant placement was effective in preventing dental caries development. Moreover, the variables "risk" and "sealant placement" were predictors of the DMFT increment in the studied children.

  16. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    There are many emotion features. If all of these features are used to recognize emotions, redundant features may exist, recognition results may be unsatisfactory, and the cost of feature extraction is high. In this paper, a method for selecting speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to evaluate the effectiveness of the selected features, and the time required for feature extraction is measured. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time required for feature extraction.
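    The record does not include code; one common way to score input features from a trained network's weights is a Garson-style contribution analysis, sketched below. The single-output architecture and the weight values are illustrative assumptions, not the paper's exact algorithm.

```python
def contribution_analysis(w_in, w_out):
    """Garson-style contribution analysis: the share of the output
    reachable through each input, computed from absolute input-hidden
    and hidden-output weights.
    w_in[h][i] : weight from input i to hidden unit h
    w_out[h]   : weight from hidden unit h to the single output
    """
    n_inputs = len(w_in[0])
    contrib = [0.0] * n_inputs
    for h, row in enumerate(w_in):
        row_abs = [abs(w) for w in row]
        row_sum = sum(row_abs) or 1.0
        for i, w in enumerate(row_abs):
            # input i's share of hidden unit h, weighted by h's output weight
            contrib[i] += (w / row_sum) * abs(w_out[h])
    total = sum(contrib) or 1.0
    return [c / total for c in contrib]

# Toy 2-input, 2-hidden-unit network: input 0 dominates hidden unit 0.
shares = contribution_analysis([[2.0, 0.0], [1.0, 1.0]], [1.0, 1.0])
# shares == [0.75, 0.25]: input 0 contributes three times as much
```

    In a feature selection setting, the inputs with the smallest shares would be the candidates for removal.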

  17. Feature Selection Method Based on Neighborhood Relationships: Applications in EEG Signal Identification and Chinese Character Recognition

    PubMed Central

    Zhao, Yu-Xiang; Chou, Chien-Hsing

    2016-01-01

    In this study, a new feature selection algorithm, the neighborhood-relationship feature selection (NRFS) algorithm, is proposed for identifying rat electroencephalogram signals and recognizing Chinese characters. In these two applications, dependent relationships exist among the feature vectors and their neighboring feature vectors; the proposed NRFS algorithm was designed to exploit this property. Under the NRFS algorithm, unselected feature vectors have a high priority of being added to the feature subset if their neighboring feature vectors have already been selected. In addition, selected feature vectors have a high priority of being eliminated if their neighboring feature vectors are not selected. In the experiments conducted in this study, the NRFS algorithm was compared with two other feature selection algorithms. The experimental results indicated that the NRFS algorithm can extract the crucial frequency bands for identifying rat vigilance states and the crucial character regions for recognizing Chinese characters. PMID:27314346
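    The NRFS priority rule described above (unselected features gain priority when their neighbors are already selected) can be sketched as a greedy loop. The bonus value and the definition of "neighbor" as adjacent indices are illustrative assumptions, not the paper's exact formulation:

```python
def nrfs_select(base_scores, k, neighbor_bonus=0.2):
    """Greedy neighborhood-relationship selection: a feature's priority is
    its base score plus a bonus for each already-selected neighbor
    (neighbors here are simply the adjacent indices)."""
    n = len(base_scores)
    selected = set()
    while len(selected) < k:
        best, best_p = None, float("-inf")
        for i in range(n):
            if i in selected:
                continue
            bonus = sum(neighbor_bonus for j in (i - 1, i + 1) if j in selected)
            p = base_scores[i] + bonus
            if p > best_p:
                best, best_p = i, p
        selected.add(best)
    return sorted(selected)

# Toy demo: index 1 alone scores below index 3, but its neighbor (index 0)
# is selected first, so the bonus pulls it into the subset.
chosen = nrfs_select([0.9, 0.5, 0.1, 0.6], k=2)
# chosen == [0, 1]; without the bonus the result would be [0, 3]
```

    This contiguity bias is what lets the method pick out whole frequency bands or character regions rather than isolated features.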

  18. Selection and implementation of a distributed phased archive for a multivendor incremental approach to PACS

    NASA Astrophysics Data System (ADS)

    Smith, Edward M.; Wandtke, John; Robinson, Arvin E.

    1999-07-01

    The selection criteria for the archive were based on the objectives of the Medical Information, Communication and Archive System (MICAS), a multi-vendor incremental approach to PACS. These objectives include interoperability between all components, seamless integration of the Radiology Information System (RIS) with MICAS and eventually other hospital databases, demonstrated DICOM compliance of all components prior to acceptance, and automated workflow that can be programmed to meet changes in the healthcare environment. The long-term multi-modality archive is being implemented in 3 or more phases, with the first phase designed to provide a 12- to 18-month storage solution. This decision was made because the cost per GB of storage is rapidly decreasing and the speed at which data can be retrieved is increasing with time. The open solution selected allows incorporation of leading-edge, 'best of breed' hardware and software and provides maximum flexibility of workflow both within and outside of radiology. The selected solution is media independent, supports multiple jukeboxes, provides expandable storage capacity and will provide redundancy and fault tolerance at minimal cost. Some of the required attributes of the archive include a scalable archive strategy, a virtual image database with global query, and an object-oriented database. The selection process took approximately 10 months, with Cemax-Icon being the vendor selected. Prior to signing a purchase order, Cemax-Icon performed a site survey, agreed upon the acceptance test protocol and provided a written guarantee of connectivity between their archive and the imaging modalities and other MICAS components.

  19. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  20. Data update in a land information network

    NASA Astrophysics Data System (ADS)

    Mullin, Robin C.

    1988-01-01

    The on-going update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and the data update process have been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating, i.e., bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks. When an update occurs, only the blocks which are affected need to be transferred. An incremental update approach records each feature which is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred. It can be placed in a file to warn a user that if he is active in an area containing markers, updated data is available. The four approaches have been tested using a cadastral data set.
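    Of the four update approaches tested in the record above, the incremental update transmits only the features that were added or deleted rather than the whole file or block. A minimal sketch (hypothetical API; note that a modified feature would be transmitted as a delete plus an add):

```python
class FeatureStore:
    """Holds one copy of a spatial data set and supports incremental
    updates: only added and deleted features cross the network."""

    def __init__(self, features):
        self.features = dict(features)  # feature id -> geometry/attributes

    def make_increment(self, other):
        """Compute the delta that turns this copy into `other`."""
        added = {fid: g for fid, g in other.features.items()
                 if fid not in self.features}
        deleted = [fid for fid in self.features if fid not in other.features]
        return {"added": added, "deleted": deleted}

    def apply_increment(self, delta):
        """Apply a received delta to this copy."""
        for fid in delta["deleted"]:
            self.features.pop(fid, None)
        self.features.update(delta["added"])

# Toy demo: the master data set gained parcel p3; the remote copy
# synchronizes by transferring only that one feature.
master = FeatureStore({"p1": "parcel-a", "p2": "parcel-b", "p3": "parcel-c"})
copy = FeatureStore({"p1": "parcel-a", "p2": "parcel-b"})
delta = copy.make_increment(master)
copy.apply_increment(delta)
```

    A bulk update would ship all of `master.features`; a block update would ship only the partitions containing changes; an alert would merely mark the changed area so the user knows newer data exists.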

  1. Information-processing under incremental levels of physical loads: comparing racquet to combat sports.

    PubMed

    Mouelhi Guizani, S; Tenenbaum, G; Bouzaouach, I; Ben Kheder, A; Feki, Y; Bouaziz, M

    2006-06-01

    Skillful performance in combat and racquet sports consists of proficient technique accompanied by efficient information-processing while engaged in moderate to high physical effort. This study examined information processing and decision-making using simple reaction time (SRT) and choice reaction time (CRT) paradigms in athletes from combat and racquet sports undergoing incrementally increasing physical effort ranging from low to high intensities. Forty experienced national-level athletes in the sports of tennis, table tennis, fencing, and boxing were selected for this study. Each subject performed both simple (SRT) and four-choice reaction time (4-CRT) tasks at rest, and while pedaling on a cycle ergometer at 20%, 40%, 60%, and 80% of their own maximal aerobic power (Pmax). RM MANCOVA revealed a significant sport-type by physical load interaction effect, mainly on CRT. Least significant difference (LSD) post hoc contrasts indicated that fencers and tennis players process information faster with incrementally increasing workload, while different patterns were obtained for boxers and table-tennis players. The error rate remained stable for each sport type over all conditions. Between-sport differences in SRT and CRT among the athletes were also noted. Findings provide evidence that the 4-CRT is a task that more closely corresponds to the tasks athletes are familiar with and utilize in their practices and competitions. However, additional tests that mimic the real-world experiences of each sport must be developed and used to capture the nature of information processing and response selection in specific sports.

  2. Natural image classification driven by human brain activity

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao

    2016-03-01

    Natural image classification has been a hot topic in computer vision and pattern recognition research. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals, collected using fMRI while human subjects viewed natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features with the guidance of fMRI signals from brain regions with active response to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select image features that best predict fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments classifying images from four categories for two subjects demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.

  3. EFS: an ensemble feature selection tool implemented as R-package and web-application.

    PubMed

    Neumann, Ursula; Genze, Nikita; Heider, Dominik

    2017-01-01

    Feature selection methods aim at identifying a subset of features that improve the prediction performance of subsequent classification models and thereby also simplify their interpretability. Preceding studies demonstrated that single feature selection methods can have specific biases, whereas an ensemble feature selection has the advantage to alleviate and compensate for these biases. The software EFS (Ensemble Feature Selection) makes use of multiple feature selection methods and combines their normalized outputs to a quantitative ensemble importance. Currently, eight different feature selection methods have been integrated in EFS, which can be used separately or combined in an ensemble. EFS identifies relevant features while compensating specific biases of single methods due to an ensemble approach. Thereby, EFS can improve the prediction accuracy and interpretability in subsequent binary classification models. EFS can be downloaded as an R-package from CRAN or used via a web application at http://EFS.heiderlab.de.
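    The EFS idea of combining the normalized outputs of several feature-selection methods into one quantitative ensemble importance can be sketched as follows. EFS itself is an R package; this Python sketch with min-max normalization and equal method weights is illustrative only:

```python
def ensemble_importance(rankings):
    """Combine the score dictionaries of several feature-selection
    methods: min-max normalize each method's scores to [0, 1], then
    average them into one ensemble importance per feature."""
    n_methods = len(rankings)
    combined = {}
    for scores in rankings:
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        for feat, s in scores.items():
            combined[feat] = combined.get(feat, 0.0) + (s - lo) / span
    return {feat: total / n_methods for feat, total in combined.items()}

# Toy demo: two methods on different scales agree that "a" matters most.
method_1 = {"a": 10.0, "b": 0.0, "c": 5.0}   # e.g. a filter score
method_2 = {"a": 1.0, "b": 0.2, "c": 0.0}    # e.g. a model-based score
imp = ensemble_importance([method_1, method_2])
# imp["a"] == 1.0, imp["c"] == 0.25: the biases of each single method
# are averaged out in the ensemble
```

    Averaging normalized scores is what lets the ensemble compensate for the specific bias of any single method, as the abstract describes.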

  4. Goal Selection for Embedded Systems with Oversubscribed Resources

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2010-01-01

    We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.

  5. Onboard Run-Time Goal Selection for Autonomous Operations

    NASA Technical Reports Server (NTRS)

    Rabideau, Gregg; Chien, Steve; McLaren, David

    2010-01-01

    We describe an efficient, online goal selection algorithm for use onboard spacecraft and its use for selecting goals at runtime. Our focus is on the re-planning that must be performed in a timely manner on the embedded system where computational resources are limited. In particular, our algorithm generates near optimal solutions to problems with fully specified goal requests that oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control.
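    The abstracts in these two records do not specify the algorithm itself; a fast, re-runnable greedy heuristic of the kind described, selecting goals by priority per unit of the oversubscribed resource, can be sketched as follows (goal names and the single-resource model are illustrative assumptions):

```python
def select_goals(goals, capacity):
    """Greedy near-optimal goal selection under an oversubscribed
    resource: take goals in order of priority density (priority per
    unit of resource) until the budget is exhausted. Because the pass
    is fast, it can be re-run "just-in-time" whenever requests are
    changed or added at the last minute."""
    order = sorted(goals, key=lambda g: g["priority"] / g["cost"], reverse=True)
    chosen, used = [], 0.0
    for g in order:
        if used + g["cost"] <= capacity:
            chosen.append(g["name"])
            used += g["cost"]
    return chosen, used

# Toy demo: three observation requests oversubscribe a budget of 5 units.
goals = [
    {"name": "img-A", "priority": 10, "cost": 4},
    {"name": "img-B", "priority": 6,  "cost": 1},
    {"name": "cal",   "priority": 4,  "cost": 4},
]
chosen, used = select_goals(goals, capacity=5)
# chosen == ["img-B", "img-A"], used == 5.0; "cal" no longer fits
```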

  6. The Selective Mutism Questionnaire: Measurement Structure and Validity

    PubMed Central

    Letamendi, Andrea M.; Chavira, Denise A.; Hitchcock, Carla A.; Roesch, Scott C.; Shipon-Blum, Elisa; Stein, Murray B.

    2010-01-01

    Objective To evaluate the factor structure, reliability, and validity of the 17-item Selective Mutism Questionnaire. Method Diagnostic interviews were administered via telephone to 102 parents of children identified with selective mutism (SM) and 43 parents of children without SM from varying U.S. geographic regions. Children were between the ages of 3 and 11 inclusive and comprised 58% girls and 42% boys. SM diagnoses were determined using the Anxiety Disorders Interview Schedule for Children - Parent Version (ADIS-C/P); SM severity was assessed using the 17-item Selective Mutism Questionnaire (SMQ); and behavioral and affective symptoms were assessed using the Child Behavior Checklist (CBCL). An exploratory factor analysis (EFA) was conducted to investigate the dimensionality of the SMQ and a modified parallel analysis procedure was used to confirm EFA results. Internal consistency, construct validity, and incremental validity were also examined. Results The EFA yielded a 13-item solution consisting of three factors: a) Social Situations Outside of School, b) School Situations, and c) Home and Family Situations. Internal consistency of SMQ factors and total scale ranged from moderate to high. Convergent and incremental validity were also well supported. Conclusions Measure structure findings are consistent with the 3-factor solution found in a previous psychometric evaluation of the SMQ. Results also suggest that the SMQ provides useful and unique information in the prediction of SM phenomenon beyond other child anxiety measures. PMID:18698268

  7. Warfighter Information Network-Tactical Increment 3 (WIN-T Inc 3)

    DTIC Science & Technology

    2015-12-01

    than 1 seconds. Force Protection Armor required to protect personnel operating WIN-T vehicles employed at BCT, Fires, AVN , BfSB, and select force...21, 2016 18:26:36 UNCLASSIFIED 12 Acronyms and Abbreviations AOR - Area of Responsibility ATH - At-the-Halt ATO - Approval to Operate AVN

  8. 77 FR 71865 - Over-the-Road Bus Accessibility Grant Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-04

    ... DEPARTMENT OF TRANSPORTATION Federal Transit Administration Over-the-Road Bus Accessibility Grant... selection of projects to be funded under Fiscal Year (FY) 2012 appropriations for the Over-the-Road Bus...-road buses to help finance the incremental capital and training costs of complying with DOT's over-the...

  9. 77 FR 5295 - Over-the-Road Bus Accessibility Program Announcement of Project Selections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-02

    ... DEPARTMENT OF TRANSPORTATION Federal Transit Administration Over-the-Road Bus Accessibility...-Road Bus (OTRB) Accessibility Program, authorized by Section 3038 of the Transportation Equity Act for... of over-the-road buses to help finance the incremental capital and training costs of complying with...

  10. The Evaluation and Selection of Adequate Causal Models: A Compensatory Education Example.

    ERIC Educational Resources Information Center

    Tanaka, Jeffrey S.

    1982-01-01

    Implications of model evaluation (using traditional chi square goodness of fit statistics, incremental fit indices for covariance structure models, and latent variable coefficients of determination) on substantive conclusions are illustrated with an example examining the effects of participation in a compensatory education program on posttreatment…

  11. Technological Discontinuities and Dominant Designs: A Cyclical Model of Technological Change.

    ERIC Educational Resources Information Center

    Anderson, Philip; Tushman, Michael L.

    1990-01-01

    Based on longitudinal studies of the cement, glass, and minicomputer industries, this article proposes a technological change model in which a technological breakthrough, or discontinuity, initiates an era of intense technical variation and selection, culminating in a single dominant design and followed by a period of incremental technical…

  12. Engagement beyond Interruption: A Performative Perspective on Listening and Ethics

    ERIC Educational Resources Information Center

    McRae, Chris; Nainby, Keith

    2015-01-01

    This article presents an understanding of listening as a performative and pedagogical act. Moving beyond existing theories of listening in communication and education studies that frame listening as a selective and incremental act, this article considers listening in terms of a performance studies and critical education studies perspective. An…

  13. Property Differencing for Incremental Checking

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Khurshid, Sarfraz; Person, Suzette; Rungta, Neha

    2014-01-01

    This paper introduces iProperty, a novel approach that facilitates incremental checking of programs based on a property differencing technique. Specifically, iProperty aims to reduce the cost of checking properties as they are initially developed and as they co-evolve with the program. The key novelty of iProperty is to compute the differences between the new and old versions of expected properties to reduce the number and size of the properties that need to be checked during the initial development of the properties. Furthermore, property differencing is used in synergy with program behavior differencing techniques to optimize common regression scenarios, such as detecting regression errors or checking feature additions for conformance to new expected properties. Experimental results in the context of symbolic execution of Java programs annotated with properties written as assertions show the effectiveness of iProperty in utilizing change information to enable more efficient checking.
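    The core of property differencing, computing which expected properties are new or changed between versions so that only those need rechecking, can be sketched as a comparison over properties keyed by location. This is a deliberate simplification of iProperty, which operates on assertions in Java programs; the property strings below are hypothetical:

```python
def properties_to_check(old_props, new_props):
    """Property differencing: only properties that are new, or whose
    text changed, need to be (re)checked; unchanged properties keep
    their previous verdict. Properties are keyed by location
    (e.g. the annotated method's name)."""
    to_check = {loc: prop for loc, prop in new_props.items()
                if old_props.get(loc) != prop}
    dropped = [loc for loc in old_props if loc not in new_props]
    return to_check, dropped

# Toy demo: one property weakened, one added, one unchanged.
old = {"withdraw": "balance >= 0", "deposit": "amount > 0"}
new = {"withdraw": "balance >= 0",
       "deposit": "amount >= 0",
       "transfer": "src != dst"}
to_check, dropped = properties_to_check(old, new)
# to_check covers only "deposit" and "transfer"; "withdraw" is skipped
```

    In the full approach, this difference would be combined with program behavior differencing so that an unchanged property over unchanged code is not re-executed at all.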

  14. Simulation of Ventricular, Cavo-Pulmonary, and Biventricular Ventricular Assist Devices in Failing Fontan.

    PubMed

    Di Molfetta, Arianna; Amodeo, Antonio; Fresiello, Libera; Trivella, Maria Giovanna; Iacobelli, Roberta; Pilati, Mara; Ferrari, Gianfranco

    2015-07-01

    Considering the lack of donors, ventricular assist devices (VADs) could be an alternative to heart transplantation for failing Fontan patients, in spite of the lack of experience and the complex anatomy and physiopathology of these patients. Considering the high number of variables that play an important role such as type of Fontan failure, type of VAD connection, and setting (right VAD [RVAD], left VAD [LVAD], or biventricular VAD [BIVAD]), a numerical model could be useful to support clinical decisions. The aim of this article is to develop and test a lumped parameter model of the cardiovascular system simulating and comparing the VAD effects on failing Fontan. Hemodynamic and echocardiographic data of 10 Fontan patients were used to simulate the baseline patients' condition using a dedicated lumped parameter model. Starting from the simulated baseline and for each patient, a systolic dysfunction, a diastolic dysfunction, and an increment of the pulmonary vascular resistance were simulated. Then, for each patient and for each pathology, the RVAD, LVAD, and BIVAD implantations were simulated. The model can reproduce patients' baseline well. In the case of systolic dysfunction, the LVAD unloads the single ventricle and increases the cardiac output (CO) (35%) and the arterial systemic pressure (Pas) (25%). With RVAD, a decrement of inferior vena cava pressure (Pvci) (39%) was observed with 34% increment of CO, but an increment of the single ventricle external work (SVEW). With the BIVAD, an increment of Pas (29%) and CO (37%) was observed. In the case of diastolic dysfunction, the LVAD increases CO (42%) and the RVAD decreases the Pvci, while both increase the SVEW. In the case of pulmonary vascular resistance increment, the highest CO (50%) and Pas (28%) increment is obtained with an RVAD with the highest decrement of Pvci (53%) and an increment of the SVEW but with the lowest VAD power consumption. 
The use of numerical models could be helpful in this innovative field to evaluate the effect of VAD implantation on Fontan patients to support patient and VAD type selection personalizing the assistance. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  15. Feature selection methods for big data bioinformatics: A survey from the search perspective.

    PubMed

    Wang, Lipo; Wang, Yaoli; Chang, Qing

    2016-12-01

    This paper surveys main principles of feature selection and their recent applications in big data bioinformatics. Instead of the commonly used categorization into filter, wrapper, and embedded approaches to feature selection, we formulate feature selection as a combinatorial optimization or search problem and categorize feature selection methods into exhaustive search, heuristic search, and hybrid methods, where heuristic search methods may further be categorized into those with or without data-distilled feature ranking measures. Copyright © 2016 Elsevier Inc. All rights reserved.
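    The survey's search-based categorization can be made concrete with a toy example: exhaustive search over all subsets is guaranteed optimal but exponential, while a greedy forward (heuristic) search is fast but only locally optimal. The scoring function below is an illustrative stand-in for, e.g., cross-validated accuracy:

```python
from itertools import combinations

def exhaustive_search(features, evaluate):
    """Try every non-empty subset; optimal, but O(2^n) evaluations."""
    best, best_score = None, float("-inf")
    for r in range(1, len(features) + 1):
        for subset in combinations(features, r):
            score = evaluate(set(subset))
            if score > best_score:
                best, best_score = set(subset), score
    return best, best_score

def greedy_forward(features, evaluate):
    """Heuristic search: repeatedly add the single feature that most
    improves the score; stop when no addition helps. O(n^2) evaluations."""
    current, current_score = set(), float("-inf")
    while True:
        gains = [(evaluate(current | {f}), f) for f in features if f not in current]
        if not gains:
            break
        score, f = max(gains)
        if score <= current_score:
            break
        current.add(f)
        current_score = score
    return current, current_score

def evaluate(subset):
    """Toy criterion: reward the informative pair {"a", "b"}, penalize
    everything else (stands in for a real accuracy estimate)."""
    informative = {"a", "b"}
    return len(subset & informative) - 0.5 * len(subset - informative)

best, best_score = exhaustive_search(["a", "b", "c"], evaluate)
approx, approx_score = greedy_forward(["a", "b", "c"], evaluate)
# here both find {"a", "b"}, but greedy search can miss the optimum
# when features are only informative jointly
```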

  16. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of a feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. The proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
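
    As a rough illustration of how a harmony-search strategy can drive subset selection, the sketch below evolves a memory of binary feature masks. The harmony memory considering rate (hmcr) and pitch-adjust rate (par) follow the standard harmony search scheme, not the paper's improved variant, and the toy fitness function is invented for the example.

```python
import random

def harmony_search_select(n_features, fitness, hm_size=10, hmcr=0.9,
                          par=0.3, iters=200, seed=0):
    """Binary harmony search: each harmony is a 0/1 mask over the features."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_features)]
              for _ in range(hm_size)]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if rng.random() < hmcr:            # consider the harmony memory
                bit = rng.choice(memory)[j]
                if rng.random() < par:         # pitch adjustment: flip the bit
                    bit = 1 - bit
            else:                              # otherwise improvise randomly
                bit = rng.randint(0, 1)
            new.append(bit)
        worst = min(range(hm_size), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new
    return max(memory, key=fitness)

# toy fitness: reward the first two features, penalize subset size
mask = harmony_search_select(6, lambda m: 2 * (m[0] + m[1]) - sum(m))
```

    In the paper's setting the fitness would be driven by the multi-fractal dimension of the candidate subset rather than a hand-written toy function.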

  17. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    NASA Astrophysics Data System (ADS)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model, so the selected features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly but affects quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
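
    One simple way to score features "in the kernel space" rather than the input space (not the authors' exact formulation) is kernel-target alignment: build a single-feature kernel for each candidate and measure how well it matches the ideal label kernel yy^T. The RBF kernel choice and the toy data below are assumptions for illustration only.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Alignment <K, yy^T>_F / (||K||_F ||yy^T||_F): how well K matches labels."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def score_features_in_kernel_space(X, y, gamma=1.0):
    """Score each feature by the alignment of its single-feature RBF kernel."""
    scores = []
    for j in range(X.shape[1]):
        d = X[:, [j]] - X[:, [j]].T          # pairwise differences, feature j
        K = np.exp(-gamma * d ** 2)
        scores.append(kernel_target_alignment(K, y))
    return np.array(scores)

# toy data: feature 0 separates the two classes, feature 1 is pure noise
rng = np.random.default_rng(0)
y = np.array([1] * 20 + [-1] * 20)
X = np.column_stack([y + 0.1 * rng.standard_normal(40),
                     rng.standard_normal(40)])
scores = score_features_in_kernel_space(X, y)
```

    A feature whose single-feature kernel aligns well with the labels is useful to a kernel classifier even when its linear correlation with the target is weak.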

  18. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    PubMed Central

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model, so the selected features might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson’s disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly but affects quality of life dramatically. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods. PMID:28120883

  19. Features of Scots pine radial growth in conditions of provenance trial

    NASA Astrophysics Data System (ADS)

    Kuzmin, S.

    2012-12-01

    The Scots pine provenance trial in the Boguchany forestry of Krasnoyarsk krai is conducted on two different soils: dark-grey loam forest soil and sod-podzol sandy soil. In the dry conditions of the sandy soil, a complex of factors unfavorable for plant growth and development arises, which can reduce resistance to disease. Sandy soils in different climatic zones share such traits as low absorbing capacity, poor mineral nutrition, low microbiological activity and moisture capacity, and very high water permeability. Nevertheless, Scots pine trees growing in such conditions may have certain advantages and prospects for use. In the context of climate change (global warming), the study of Scots pine growth on sandy soil becomes urgent because of the more frequent occurrence of dry seasons. The purpose of this work is to reveal the features of radial growth of Scots pine of different origins in the dry conditions of sandy soil and to assess the influence of external factors. The main feature of radial growth of the majority of the studied pine provenances on sandy soil is a significant variation of increment, with a distinct decline at 25 years of age and missing tree rings in a number of cases. The reason is a complex of factors: a deficit of June precipitation followed by an outbreak of fungal disease. Frost rings found in 1992 in all trees of the studied climatypes are the consequence of a temperature decline from May 21 to June 2, from 23°C down to 2°C. Promising climatypes with the largest radial increments and the least sensitivity to fungal disease were identified.

  20. Plasma microRNA profile as a predictor of early virological response to interferon treatment in chronic hepatitis B patients.

    PubMed

    Zhang, Xiaonan; Chen, Cuncun; Wu, Min; Chen, Liang; Zhang, Jiming; Zhang, Xinxin; Zhang, Zhanqin; Wu, Jingdi; Wang, Jiefei; Chen, Xiaorong; Huang, Tao; Chen, Lixiang; Yuan, Zhenghong

    2012-01-01

    Interferon (IFN) and pegylated interferon (PEG-IFN) treatment of chronic hepatitis B leads to a sustained virological response in a limited proportion of patients and has considerable side effects. To find novel markers associated with prognosis of IFN therapy, we investigated whether a pretreatment plasma microRNA profile could be used to predict early virological response to IFN. We performed microRNA microarray analysis of plasma samples from 94 patients with chronic hepatitis B who received IFN therapy. The microRNA profiles from 13 liver biopsy samples were also measured. OneR feature ranking and an incremental feature selection method were used to rank the features and optimize their number in the model. A support vector machine prediction engine and jack-knife cross-validation were used to generate and evaluate the prediction model. The optimized model consisting of 11 microRNAs yielded a 74.2% overall accuracy in the training group and was independently confirmed in the test group (71.4% accuracy). Univariate and multivariate logistic regression analyses confirmed its independent association with early virological response (OR=7.35; P=2.12×10^-5). Combining the microRNA profile with the alanine aminotransferase level improved the overall accuracy from 73.4% to 77.3%. Co-transfection of an HBV replicative construct with microRNA mimics revealed that let-7f, miR-939 and miR-638 were functionally associated with the HBV life cycle. The 11-microRNA signature in plasma, together with basic clinical variables, might provide an accurate method to assist in medication decisions and improve the overall sustained response to IFN treatment.
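
    The incremental feature selection step described above can be sketched generically: given a feature ranking, evaluate the top-1, top-2, ... prefixes with leave-one-out (jack-knife) cross-validation and keep the prefix with the best accuracy. The nearest-centroid classifier below is a lightweight stand-in for the paper's SVM engine, and the toy data are invented.

```python
import numpy as np

def jackknife_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = {c: X[mask][y[mask] == c].mean(axis=0)
                 for c in np.unique(y[mask])}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

def incremental_feature_selection(X, y, ranking):
    """Evaluate top-1, top-2, ... prefixes of the ranking; keep the best."""
    best_k, best_acc = 1, -1.0
    for k in range(1, len(ranking) + 1):
        acc = jackknife_accuracy(X[:, ranking[:k]], y)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return ranking[:best_k], best_acc

# toy data: features 0 and 1 are informative, 2 and 3 are noise
rng = np.random.default_rng(1)
y = np.array([0] * 15 + [1] * 15)
X = np.column_stack([y + 0.2 * rng.standard_normal(30),
                     -y + 0.2 * rng.standard_normal(30),
                     rng.standard_normal(30),
                     rng.standard_normal(30)])
selected, acc = incremental_feature_selection(X, y, [0, 1, 2, 3])
```

    Because each prefix extends the previous one, only d models are trained for d ranked features rather than 2^d subsets.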

  1. Assessing and Adapting LiDAR-Derived Pit-Free Canopy Height Model Algorithm for Sites with Varying Vegetation Structure

    NASA Astrophysics Data System (ADS)

    Scholl, V.; Hulslander, D.; Goulden, T.; Wasser, L. A.

    2015-12-01

    Spatial and temporal monitoring of vegetation structure is important to the ecological community. Airborne Light Detection and Ranging (LiDAR) systems are used to efficiently survey large forested areas. From LiDAR data, three-dimensional models of forests called canopy height models (CHMs) are generated and used to estimate tree height. A common problem associated with CHMs is data pits, where LiDAR pulses penetrate the top of the canopy, leading to an underestimation of vegetation height. The National Ecological Observatory Network (NEON) currently implements an algorithm to reduce data pit frequency, which requires two height threshold parameters, increment size and range ceiling. CHMs are produced at a series of height increments up to a height range ceiling and combined to produce a CHM with reduced pits (referred to as a "pit-free" CHM). The current implementation uses static values for the height increment and ceiling (5 and 15 meters, respectively). To facilitate the generation of accurate pit-free CHMs across diverse NEON sites with varying vegetation structure, the impacts of adjusting the height threshold parameters were investigated through development of an algorithm which dynamically selects the height increment and ceiling. A series of pit-free CHMs were generated using three height range ceilings and four height increment values for three ecologically different sites. Height threshold parameters were found to change CHM-derived tree heights up to 36% compared to original CHMs. The extent of the parameters' influence on modelled tree heights was greater than expected, which will be considered during future CHM data product development at NEON. [Figure: (A) aerial image of Harvard National Forest; (B) standard CHM containing pits, which appear as black speckles; (C) pit-free CHM created with the static algorithm implementation; (D) pit-free CHM created by varying the height threshold ceiling up to 82 m and the increment to 1 m.]
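
    The compositing idea (partial CHMs built at successive height thresholds and combined by per-pixel maximum) can be sketched on a plain raster as below. Real pit-free CHMs are rebuilt from the point cloud with TIN interpolation at each level; the 3x3 neighborhood fill here is only a crude raster stand-in for that step, and the toy canopy is invented.

```python
import numpy as np

def pit_free_chm(chm, increment=5.0, ceiling=15.0):
    """Composite 'partial' CHMs built at successive height thresholds by
    per-pixel maximum.  At each level, cells below the threshold are filled
    from the 3x3 neighborhood maximum -- a crude raster stand-in for the
    TIN interpolation applied to the point cloud in the real algorithm."""
    composite = chm.copy()
    t = increment
    while t <= ceiling:
        level = np.where(chm >= t, chm, 0.0)
        filled = level.copy()
        for i in range(chm.shape[0]):
            for j in range(chm.shape[1]):
                if level[i, j] == 0.0:       # a gap at this level: fill it
                    filled[i, j] = level[max(i - 1, 0):i + 2,
                                         max(j - 1, 0):j + 2].max()
        composite = np.maximum(composite, filled)
        t += increment
    return composite

# toy CHM: a uniform 20 m canopy with one 2 m data pit in the middle
chm = np.full((5, 5), 20.0)
chm[2, 2] = 2.0
fixed = pit_free_chm(chm)
```

    Raising the ceiling or shrinking the increment adds more levels to the composite, which is exactly the parameter sensitivity the abstract investigates.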

  2. Economic analysis: randomized placebo-controlled clinical trial of erlotinib in advanced non-small cell lung cancer.

    PubMed

    Bradbury, Penelope A; Tu, Dongsheng; Seymour, Lesley; Isogai, Pierre K; Zhu, Liting; Ng, Raymond; Mittmann, Nicole; Tsao, Ming-Sound; Evans, William K; Shepherd, Frances A; Leighl, Natasha B

    2010-03-03

    The NCIC Clinical Trials Group conducted the BR.21 trial, a randomized placebo-controlled trial of erlotinib (an epidermal growth factor receptor tyrosine kinase inhibitor) in patients with previously treated advanced non-small cell lung cancer. This trial accrued patients between August 14, 2001, and January 31, 2003, and found that overall survival and quality of life were improved in the erlotinib arm compared with the placebo arm. However, funding restrictions limit access to erlotinib in many countries. We undertook an economic analysis of erlotinib treatment in this trial and explored different molecular and clinical predictors of outcome to determine the cost-effectiveness of treating various populations with erlotinib. Resource utilization was determined from individual patient data in the BR.21 trial database. The trial recruited 731 patients (488 in the erlotinib arm and 243 in the placebo arm). Costs arising from erlotinib treatment, diagnostic tests, outpatient visits, acute hospitalization, adverse events, lung cancer-related concomitant medications, transfusions, and radiation therapy were captured. The incremental cost-effectiveness ratio was calculated as the ratio of incremental cost (in 2007 Canadian dollars) to incremental effectiveness (life-years gained). In exploratory analyses, we evaluated the benefits of treatment in selected subgroups to determine the impact on the incremental cost-effectiveness ratio. The incremental cost-effectiveness ratio for erlotinib treatment in the BR.21 trial population was $94,638 per life-year gained (95% confidence interval = $52,359 to $429,148). The major drivers of cost-effectiveness included the magnitude of survival benefit and erlotinib cost. Subgroup analyses revealed that erlotinib may be more cost-effective in never-smokers or patients with high EGFR gene copy number. 
With an incremental cost-effectiveness ratio of $94,638 per life-year gained, erlotinib treatment for patients with previously treated advanced non-small cell lung cancer is marginally cost-effective. The use of molecular predictors of benefit for targeted agents may help identify more or less cost-effective subgroups for treatment.
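
    The incremental cost-effectiveness ratio used above is simply the ratio of incremental cost to incremental effectiveness. A minimal helper, exercised with invented figures rather than trial data:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per unit of extra
    effectiveness (e.g. dollars per life-year gained)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# hypothetical figures, not BR.21 trial data
ratio = icer(cost_new=20000.0, cost_ref=5000.0,
             effect_new=2.0, effect_ref=1.5)
```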

  3. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.

  4. Compact cancer biomarkers discovery using a swarm intelligence feature selection algorithm.

    PubMed

    Martinez, Emmanuel; Alvarez, Mario Moises; Trevino, Victor

    2010-08-01

    Biomarker discovery is a typical application of functional genomics. Due to the large number of genes studied simultaneously in microarray data, feature selection is a key step. Swarm intelligence has emerged as a solution for the feature selection problem. However, swarm intelligence settings for feature selection fail to select small feature subsets. We have proposed a swarm intelligence feature selection algorithm based on the initialization and update of only a subset of particles in the swarm. In this study, we tested our algorithm on 11 microarray datasets for brain, leukemia, lung, prostate, and other cancers. We show that the proposed swarm intelligence algorithm successfully increases the classification accuracy and decreases the number of selected features compared to other swarm intelligence methods. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Tooth brushing frequency and risk of new carious lesions.

    PubMed

    Holmes, Richard D

    2016-12-01

    Data sources: Medline, Embase, CINAHL and the Cochrane databases. Study selection: Two reviewers selected studies; case-control, prospective cohort, retrospective cohort and experimental trials evaluating the effect of toothbrushing frequency on the incidence or increment of new carious lesions were considered. Data extraction and synthesis: Two reviewers undertook data abstraction independently using pre-piloted forms. Study quality was assessed using a quality assessment tool for quantitative studies developed by the Effective Public Health Practice Project (EPHPP). Meta-analysis of caries outcomes was carried out using RevMan, and meta-regressions were undertaken to assess the influence of sample size, follow-up period, caries diagnosis level and study methodological quality. Results: Thirty-three studies were included, of which 13 were considered methodologically strong, 14 moderate and six weak. Twenty-five studies contributed to the quantitative analysis. Compared with frequent brushers, self-reported infrequent brushers demonstrated a higher incidence of carious lesions, OR=1.50 (95% CI: 1.34-1.69). The odds of having carious lesions differed little when subgroup analysis compared the incidence between ≥2 times/day vs <2 times/day brushers, OR=1.45 (95% CI: 1.21-1.74), and ≥1 time/day vs <1 time/day brushers, OR=1.56 (95% CI: 1.37-1.78). Brushing <2 times/day was associated with a significantly greater increment of carious lesions than brushing ≥2 times/day, standardised mean difference [SMD]=0.34 (95% CI: 0.18-0.49). Overall, infrequent brushing was associated with an increment of carious lesions, SMD=0.28 (95% CI: 0.13-0.44). Meta-analysis with type of dentition as a subgroup found that the effect of infrequent brushing on the incidence and increment of carious lesions was higher in the deciduous dentition, OR=1.75 (95% CI: 1.49-2.06), than in the permanent dentition, OR=1.39 (95% CI: 1.29-1.49). 
Meta-regression indicated that none of the included variables influenced the effect estimate. Conclusions: Individuals who report brushing their teeth infrequently are at greater risk of the incidence or increment of new carious lesions than those who brush more frequently. The effect is more pronounced in the deciduous than in the permanent dentition. A few studies indicate that this effect is independent of the presence of fluoride in toothpaste.

  6. Different functional classes of genes are characterized by different compositional properties.

    PubMed

    D'Onofrio, Giuseppe; Ghosh, Tapash Chandra; Saccone, Salvatore

    2007-12-22

    A compositional analysis of a set of human genes classified in several functional classes was performed. We found that the GC3, i.e. the GC level at the third codon positions, of the genes involved in cellular metabolism was significantly higher than that of the genes involved in information storage and processing. Analyses of human/Xenopus orthologous genes showed that: (i) the GC3 increment of the genes involved in cellular metabolism was significantly higher than that of the genes involved in information storage and processing; and (ii) a strong correlation between GC3 and the corresponding GCi, i.e. the GC level of introns, was found in each functional class. The non-randomness of the GC increments favours the selective hypothesis of gene/genome evolution.
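
    GC3, the quantity compared across functional classes above, is simply the fraction of third codon positions occupied by G or C. A minimal sketch with invented toy sequences:

```python
def gc3(cds):
    """Fraction of G or C at third codon positions of a coding sequence."""
    third = cds[2::3].upper()          # every third base, starting at index 2
    return sum(base in "GC" for base in third) / len(third)

# toy sequences: both third positions GC-ending vs. one of two
high = gc3("ATGGCC")
mixed = gc3("ATGAAA")
```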

  7. Reducing voluntary, avoidable turnover through selection.

    PubMed

    Barrick, Murray R; Zimmerman, Ryan D

    2005-01-01

    The authors investigated the efficacy of several variables used to predict voluntary, organizationally avoidable turnover even before the employee is hired. Analyses conducted on applicant data collected in 2 separate organizations (N = 445) confirmed that biodata, clear-purpose attitudes and intentions, and disguised-purpose dispositional retention scales predicted voluntary, avoidable turnover (rs ranged from -.16 to -.22, R = .37, adjusted R = .33). Results also revealed that biodata scales and disguised-purpose retention scales added incremental validity, whereas clear-purpose retention scales did not explain significant incremental variance in turnover beyond what was explained by biodata and disguised-purpose scales. Furthermore, disparate impact (subgroup differences on race, sex, and age) was consistently small (average d = 0.12 when the majority group scored higher than the minority group).

  8. Incremental Support Vector Machine Framework for Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Awad, Mariette; Jiang, Xianhua; Motai, Yuichi

    2006-12-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multi-video data collected by homogeneous sites. The technique is based on an adaptation of the least squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by visual behavior data acquisition and an online learning phase, during which the cluster head performs an ensemble of model aggregations based on the sensor nodes' inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor network communication.
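
    The LS-SVM formulation underlying this framework replaces the standard SVM's inequality constraints with equalities, so training reduces to solving one linear system. Below is a batch (non-incremental) sketch with an RBF kernel and invented toy data; the paper's incremental and multiclass extensions are not reproduced here.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """RBF (Gaussian) kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """Solve the LS-SVM system  [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                 # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ alpha + b)

# toy data: two well-separated Gaussian blobs
rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((10, 2)) + 3,
               rng.standard_normal((10, 2)) - 3])
y = np.array([1.0] * 10 + [-1.0] * 10)
b, alpha = lssvm_train(X, y, gamma=10.0)
pred = lssvm_predict(X, b, alpha, X)
```

    Because training is a linear solve, incremental variants can update the solution as new samples arrive instead of refactoring the whole system, which is what makes the formulation attractive for online sensor-network learning.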

  9. Multi-task feature selection in microarray data by binary integer programming.

    PubMed

    Lan, Liang; Vucetic, Slobodan

    2013-12-20

    A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
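
    The relaxation described above can be sketched as follows: encode selection as a binary vector x, maximize relevance minus redundancy, f^T x - x^T Q x, relax x from {0,1}^d to [0,1]^d, optimize by projected ascent, and keep the k features with the largest relaxed weights. The correlation-based f and Q below are illustrative stand-ins for the paper's discriminative-power and redundancy terms, and the data are invented.

```python
import numpy as np

def relaxed_quadratic_filter(X, y, k, steps=500, lr=0.01):
    """Relax binary selection x in {0,1}^d to [0,1]^d, maximize
    f^T x - x^T Q x  (relevance minus redundancy) by projected ascent
    (constant factors folded into lr), then keep the top-k weights."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    f = np.abs(Xs.T @ ys) / len(y)             # relevance: |corr(feature, y)|
    Q = np.abs(np.corrcoef(Xs, rowvar=False))  # redundancy: |corr| matrix
    x = np.full(X.shape[1], 0.5)
    for _ in range(steps):
        x = np.clip(x + lr * (f - Q @ x), 0.0, 1.0)
    return np.argsort(x)[-k:][::-1]

# toy data: feature 0 tracks the target, features 1 and 2 are noise
rng = np.random.default_rng(3)
y = rng.standard_normal(100)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100),
                     rng.standard_normal(100)])
top = relaxed_quadratic_filter(X, y, k=1)
```

    Solving the exact binary program is NP-hard in general; the continuous relaxation (here with a low-cost iterative solver) is what makes the filter tractable at microarray dimensionality.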

  10. SAMS Acceleration Measurements on Mir From January to May 1997 (NASA Increment 4)

    NASA Technical Reports Server (NTRS)

    DeLombard, Richard

    1998-01-01

    During NASA Increment 4 (January to May 1997), about 5 gigabytes of acceleration data were collected by the Space Acceleration Measurements System (SAMS) onboard the Russian Space Station, Mir. The data were recorded on 28 optical disks which were returned to Earth on STS-84. During this increment, SAMS data were collected in the Priroda module to support the Mir Structural Dynamics Experiment (MiSDE), the Binary Colloidal Alloy Tests (BCAT), Angular Liquid Bridge (ALB), Candle Flames in Microgravity (CFM), Diffusion Controlled Apparatus Module (DCAM), Enhanced Dynamic Load Sensors (EDLS), Forced Flow Flame Spreading Test (FFFT), Liquid Metal Diffusion (LMD), Protein Crystal Growth in Dewar (PCG/Dewar), Queen's University Experiments in Liquid Diffusion (QUELD), and Technical Evaluation of MIM (TEM). This report points out some of the salient features of the microgravity environment to which these experiments were exposed. Also documented are mission events of interest such as the docked phase of STS-84 operations, a Progress engine burn, Soyuz vehicle docking and undocking, and Progress vehicle docking. This report presents an overview of the SAMS acceleration measurements recorded by 10 Hz and 100 Hz sensor heads. The analyses included herein complement those presented in previous summary reports prepared by the Principal Investigator Microgravity Services (PIMS) group.

  11. Enlargement and contracture of C2-ceramide channels.

    PubMed

    Siskind, Leah J; Davoody, Amirparviz; Lewin, Naomi; Marshall, Stephanie; Colombini, Marco

    2003-09-01

    Ceramides are known to play a major regulatory role in apoptosis by inducing cytochrome c release from mitochondria. We have previously reported that ceramide, but not dihydroceramide, forms large and stable channels in phospholipid membranes and outer membranes of isolated mitochondria. C(2)-ceramide channel formation is characterized by conductance increments ranging from <1 to >200 nS. These conductance increments often represent the enlargement and contracture of channels rather than the opening and closure of independent channels. Enlargement is supported by the observation that many small conductance increments can lead to a large decrement. Also the initial conductances favor cations, but this selectivity drops dramatically with increasing total conductance. La(+3) causes rapid ceramide channel disassembly in a manner indicative of large conducting structures. These channels have a propensity to contract by a defined size (often multiples of 4 nS) indicating the formation of cylindrical channels with preferred diameters rather than a continuum of sizes. The results are consistent with ceramides forming barrel-stave channels whose size can change by loss or insertion of multiple ceramide columns.

  12. Current content of selected pollutants in moss, humus, soil and bark and long-term radial growth of pine trees in the Mezaparks forest in Riga.

    PubMed

    Pīrāga, Dace; Tabors, Guntis; Nikodemus, Oļģerts; Žīgure, Zane; Brūmelis, Guntis

    2017-05-01

    The aim of this study was to evaluate the use of various indicators in the assessment of environmental pollution and to determine the response of pine to changes in pollution levels. Mezaparks is a part of Riga that has been subject to various long-term effects of atmospheric pollution, in particular, historically, from a large superphosphate factory. To determine the spatial distribution of pollution, moss, pine bark and the soil O and B horizons were used as sorbents in this study, along with the annual additional increment of pine trees. The current spatial distribution of pollution is best shown by heavy metal accumulation in mosses and the long-term accumulation of P2O5 pollution by the soil O horizon. The methodological problems of using these sorbents were explored in the study. Environmental pollution and its changes could be associated with the annual additional increment of tree growth rings in the Mezaparks pine forest stands. The additional increment increased after the closing of the Riga superphosphate factory.

  13. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance

    PubMed Central

    Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. In addition, our algorithms use DTW to measure the distance between pairs of time series and achieve higher clustering accuracy, because DTW determines an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and were better than all the competitors in terms of clustering accuracy. PMID:29795600

  14. Enlargement and Contracture of C2-Ceramide Channels

    PubMed Central

    Siskind, Leah J.; Davoody, Amirparviz; Lewin, Naomi; Marshall, Stephanie; Colombini, Marco

    2003-01-01

    Ceramides are known to play a major regulatory role in apoptosis by inducing cytochrome c release from mitochondria. We have previously reported that ceramide, but not dihydroceramide, forms large and stable channels in phospholipid membranes and outer membranes of isolated mitochondria. C2-ceramide channel formation is characterized by conductance increments ranging from <1 to >200 nS. These conductance increments often represent the enlargement and contracture of channels rather than the opening and closure of independent channels. Enlargement is supported by the observation that many small conductance increments can lead to a large decrement. Also the initial conductances favor cations, but this selectivity drops dramatically with increasing total conductance. La+3 causes rapid ceramide channel disassembly in a manner indicative of large conducting structures. These channels have a propensity to contract by a defined size (often multiples of 4 nS) indicating the formation of cylindrical channels with preferred diameters rather than a continuum of sizes. The results are consistent with ceramides forming barrel-stave channels whose size can change by loss or insertion of multiple ceramide columns. PMID:12944273

  15. Incremental fuzzy C medoids clustering of time series data using dynamic time warping distance.

    PubMed

    Liu, Yongli; Chen, Jingli; Wu, Shuai; Liu, Zhizhong; Chao, Hao

    2018-01-01

    Clustering time series data is of great significance since it can extract meaningful statistics and other characteristics. Especially in biomedical engineering, outstanding clustering algorithms for time series may help improve people's health. Considering the data scale and time shifts of time series, in this paper we introduce two incremental fuzzy clustering algorithms based on Dynamic Time Warping (DTW) distance. By adopting Single-Pass and Online processing patterns, our algorithms can handle large-scale time series data by splitting it into a set of chunks which are processed sequentially. In addition, our algorithms use DTW to measure the distance between pairs of time series and achieve higher clustering accuracy, because DTW determines an optimal match between any two time series by stretching or compressing segments of temporal data. Our new algorithms are compared to some existing prominent incremental fuzzy clustering algorithms on 12 benchmark time series datasets. The experimental results show that the proposed approaches yield high-quality clusters and were better than all the competitors in terms of clustering accuracy.
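
    The DTW distance at the heart of the two records above can be computed with the classic dynamic program; this textbook version (no window constraint) is a sketch, not the authors' implementation:

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between 1-D sequences,
    allowing local stretching/compression along the time axis."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]
```

    Unlike Euclidean distance, DTW treats a repeated sample as a zero-cost stretch, which is why it suits time-shifted series.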

  16. Attentional Selection of Feature Conjunctions Is Accomplished by Parallel and Independent Selection of Single Features.

    PubMed

    Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A

    2015-07-08

    Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes in conditions in which selection was based on a single feature (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.). 
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features. Copyright © 2015 the authors.

  17. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach, which selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, then chose our top-performing methods and selected the union of the resulting variables, based on a user-defined percentage of variants selected from each method, to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, even in low effect size datasets and across different genetic architectures. 
Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of DiscovEHR collaboration). In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
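The "union of the best performers" idea in this record is simple to state in code. A minimal sketch, where `rankings` (method name to best-first feature list) and `top_frac` are hypothetical names for illustration, not the authors' interface:

```python
def collective_select(rankings, top_frac=0.1):
    """Union of the top `top_frac` fraction of features from each method.

    rankings: dict mapping method name -> feature names ordered best-first.
    Returns the set of features selected by at least one method.
    """
    selected = set()
    for method, ranked in rankings.items():
        k = max(1, int(len(ranked) * top_frac))  # at least one feature per method
        selected.update(ranked[:k])
    return selected
```

Because it is a union rather than an intersection, a variant highly ranked by any single method (say, MDR but not gradient boosting) still reaches downstream analysis, which is the behavior the simulation study argues for.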

  18. AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity.

    PubMed

    Sun, Lei; Wang, Jun; Wei, Jinmao

    2017-03-14

    The Receiver Operator Characteristic (ROC) curve is well known for evaluating classification performance in the biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and identify disease-related genes (features). The existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find the real target feature subset due to their lack of effective means to reduce redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by measuring the distances between misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary to each other. Subsequently, we propose a novel filter feature selection approach on the basis of ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with the highest complementarities. The experimental results on a broad range of microarray data sets validate that the classifiers built on the feature subset selected by our approach can achieve the minimal balanced error rate with a small number of significant features. Compared with other ROC-based feature selection approaches, our new approach selects fewer features and effectively improves classification performance.
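Scoring an individual feature by its AUC, as the ROC-based selectors discussed here do, reduces to the Mann-Whitney rank statistic. A minimal sketch with average ranks for ties (the paper's complementarity measure itself is not reproduced here):

```python
def feature_auc(values, labels):
    """AUC of one feature used as a score for the positive class (label 1),
    via the Mann-Whitney rank-sum formulation with tie-averaged ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values starting at position i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Ranking features by this score and keeping the top ones is the "simple and effective" individual evaluation the abstract contrasts with redundancy-aware selection.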

  19. Integrated mission management operations

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Operations required to launch a modular space station and to provide sustaining ground operations in support of that orbiting station throughout its 10-year mission are studied. A baseline, incrementally manned program and attendant experiment program options are derived. In addition, features of the program that significantly affect initial development and early operating costs are identified, and their impact on the program is assessed. A preliminary design of the approved modular space station configuration is formulated.

  20. Registering Ground and Satellite Imagery for Visual Localization

    DTIC Science & Technology

    2012-08-01

    reckoning, inertial, stereo, light detection and ranging (LIDAR), cellular radio, and visual. As no sensor or algorithm provides perfect localization in...by metric localization approaches to confine the region of a map that needs to be searched. Simultaneous Localization and Mapping (SLAM) (5, 6), using...estimate the metric location of the camera. Se et al. (7) use SIFT features for both appearance-based global localization and incremental 3D SLAM. Johns and

  1. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 ± 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 ± 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.

  2. Selective Mutism Questionnaire: measurement structure and validity.

    PubMed

    Letamendi, Andrea M; Chavira, Denise A; Hitchcock, Carla A; Roesch, Scott C; Shipon-Blum, Elisa; Stein, Murray B

    2008-10-01

    To evaluate the factor structure, reliability, and validity of the 17-item Selective Mutism Questionnaire (SMQ). Diagnostic interviews were administered via telephone to 102 parents of children identified with selective mutism (SM) and 43 parents of children without SM from varying U.S. geographic regions. Children were between the ages of 3 and 11 inclusive and comprised 58% girls and 42% boys. SM diagnoses were determined using the Anxiety Disorders Interview Schedule for Children-Parent Version; SM severity was assessed using the 17-item SMQ; and behavioral and affective symptoms were assessed using the Child Behavior Checklist. An exploratory factor analysis was conducted to investigate the dimensionality of the SMQ and a modified parallel analysis procedure was used to confirm exploratory factor analysis results. Internal consistency, construct validity, and incremental validity were also examined. The exploratory factor analysis yielded a 13-item solution consisting of three factors: social situations outside of school, school situations, and home and family situations. Internal consistency of SMQ factors and total scale ranged from moderate to high. Convergent and incremental validity was also well supported. Measure structure findings are consistent with the three-factor solution found in a previous psychometric evaluation of the SMQ. Results also suggest that the SMQ provides useful and unique information in the prediction of SM phenomena beyond other child anxiety measures.

  3. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, it is still a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method was presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent features, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. From the algorithm list, algorithms were taken one by one to build an ensemble prediction model. Finally, we selected the ensemble prediction model with the best performance as the optimal ensemble prediction model. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. The feature selection and algorithm selection procedures are genuinely helpful for building an ensemble prediction model that yields a better performance. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
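The "take algorithms one by one from a ranked list and keep the best ensemble" step can be sketched as a greedy prefix search with majority voting. This is an illustrative simplification: representing models as plain callables and using accuracy as the score are assumptions, not the authors' exact setup.

```python
from collections import Counter

def majority_vote(predictions):
    """Most common label among member predictions (ties: smallest label)."""
    counts = Counter(predictions)
    top = max(counts.values())
    return min(lbl for lbl, c in counts.items() if c == top)

def build_ensemble(ranked_models, xs, ys):
    """Grow the ensemble one ranked model at a time; return the prefix
    whose majority vote scores highest on (xs, ys)."""
    best_prefix, best_acc = [], -1.0
    for k in range(1, len(ranked_models) + 1):
        prefix = ranked_models[:k]
        votes = [majority_vote([m(x) for m in prefix]) for x in xs]
        acc = sum(v == y for v, y in zip(votes, ys)) / len(ys)
        if acc > best_acc:
            best_prefix, best_acc = prefix, acc
    return best_prefix, best_acc
```

Because only prefixes of the mRMR-ordered algorithm list are evaluated, the search is linear in the number of candidate algorithms rather than exponential in all possible ensembles.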

  4. Non-negative matrix factorization in texture feature for classification of dementia with MRI data

    NASA Astrophysics Data System (ADS)

    Sarwinda, D.; Bustamam, A.; Ardaneswari, G.

    2017-07-01

    This paper investigates the application of non-negative matrix factorization as a feature selection method to select features from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is performed for feature extraction. In the feature extraction process of MRI data, we obtain seven features from the gray level co-occurrence matrix. Non-negative matrix factorization then selects the three most influential of the features produced by feature extraction. A Naïve Bayes classifier is adapted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's versus normal control. The proposed method is also compared with other feature selection methods, i.e. Principal Component Analysis (PCA).
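The factorization behind this record can be sketched with the classic Lee-Seung multiplicative updates, V (samples x features) ≈ W H. Turning the factorization into selection by ranking feature columns by their total weight in H is one plausible reading, labeled here as an assumption rather than the authors' exact procedure:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=50, seed=0, eps=1e-9):
    """Basic NMF via multiplicative updates: V (n x m) ~ W (n x k) @ H (k x m)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)] for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(n)]
    return W, H

def reconstruction_error(V, W, H):
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2 for i in range(len(V)) for j in range(len(V[0])))

def top_features(H, n_select):
    """Rank original features (columns of V) by total weight in H -- one
    plausible way to turn the factorization into feature selection."""
    weights = [sum(H[i][j] for i in range(len(H))) for j in range(len(H[0]))]
    return sorted(range(len(weights)), key=lambda j: -weights[j])[:n_select]
```

The multiplicative updates never decrease the Frobenius objective, so more iterations can only tighten the reconstruction.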

  5. Comparison of lifetime incremental cost:utility ratios of surgery relative to failed medical management for the treatment of hip, knee and spine osteoarthritis modelled using 2-year postsurgical values

    PubMed Central

    Tso, Peggy; Walker, Kevin; Mahomed, Nizar; Coyte, Peter C.; Rampersaud, Y. Raja

    2012-01-01

    Background Demand for surgery to treat osteoarthritis (OA) of the hip, knee and spine has risen dramatically. Whereas total hip (THA) and total knee arthroplasty (TKA) have been widely accepted as cost-effective, spine surgeries (decompression, decompression with fusion) to treat degenerative conditions remain underfunded compared with other surgeries. Methods An incremental cost–utility analysis comparing decompression and decompression with fusion to THA and TKA, from the perspective of the provincial health insurance system, was based on an observational matched-cohort study of prospectively collected outcomes and retrospectively collected costs. Patient outcomes were measured using short-form (SF)-36 surveys over a 2-year follow-up period. Utility was modelled over the lifetime, and quality-adjusted life years (QALYs) were determined. We calculated the incremental cost per QALY gained by estimating mean incremental lifetime costs and QALYs of surgery compared with medical management of each diagnosis group after discounting costs and QALYs at 3%. Sensitivity analyses were also conducted. Results The lifetime incremental cost:utility ratios (ICURs) discounted at 3% were $5321 per QALY for THA, $11 275 per QALY for TKA, $2307 per QALY for spinal decompression and $7153 per QALY for spinal decompression with fusion. The sensitivity analyses did not alter the ranking of the lifetime ICURs. Conclusion In appropriately selected patients with leg-dominant symptoms secondary to focal lumbar spinal stenosis who have failed medical management, the lifetime ICUR for surgical treatment of lumbar spinal stenosis is similar to those of THA and TKA for the treatment of OA. PMID:22630061
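The incremental cost:utility arithmetic used in this record is simple enough to state directly. A sketch; the names are illustrative, and treating year 0 as undiscounted in the present-value helper is an assumed convention:

```python
def icur(cost_new, qaly_new, cost_comparator, qaly_comparator):
    """Incremental cost:utility (cost-effectiveness) ratio: extra cost per
    extra quality-adjusted life year of surgery vs. medical management."""
    return (cost_new - cost_comparator) / (qaly_new - qaly_comparator)

def discounted(yearly_values, rate=0.03):
    """Present value of a yearly stream of costs or QALYs at a discount rate
    (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(yearly_values))
```

For example, an intervention costing $20,000 and yielding 5.0 QALYs against a $10,000 / 3.0-QALY comparator has an ICUR of $5,000 per QALY, which is how figures like the $5,321 per QALY for THA above are derived (after lifetime discounting of both streams at 3%).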

  6. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2017-04-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selection of an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
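A toy version of GA-based search over the 2^n feature masks might look like the sketch below. Truncation selection, one-point crossover, and a single point mutation are illustrative operator choices, and the matching-score fitness in the usage note stands in for the paper's classifier accuracy:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, generations=40, seed=0):
    """Tiny genetic algorithm over feature bitmasks (1 = feature selected).

    fitness(mask) returns a score to maximize, e.g. a classifier's
    correct-classification rate on the masked feature set.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the top half of each generation survives unchanged, the best mask found so far is never lost, so the search only improves as generations pass.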

  7. Feature Selection for Classification of Polar Regions Using a Fuzzy Expert System

    NASA Technical Reports Server (NTRS)

    Penaloza, Mauel A.; Welch, Ronald M.

    1996-01-01

    Labeling, feature selection, and the choice of classifier are critical elements for classification of scenes and for image understanding. This study examines several methods for feature selection in polar regions, including the use of a fuzzy logic-based expert system for further refinement of a set of selected features. Six Advanced Very High Resolution Radiometer (AVHRR) Local Area Coverage (LAC) arctic scenes are classified into nine classes: water, snow/ice, ice cloud, land, thin stratus, stratus over water, cumulus over water, textured snow over water, and snow-covered mountains. Sixty-seven spectral and textural features are computed and analyzed by the feature selection algorithms. The divergence, histogram analysis, and discriminant analysis approaches are intercompared for their effectiveness in feature selection. The fuzzy expert system method is used not only to determine the effectiveness of each approach in classifying polar scenes, but also to further reduce the features into a more optimal set. For each selection method, features are ranked from best to worst, and the best half of the features are selected; rules using these selected features are then defined. The results of running the fuzzy expert system with these rules show that the divergence method produces the best set of features: it yields the highest classification accuracy with the lowest computational requirements. A further reduction of the divergence-selected features by the fuzzy expert system results in an overall classification accuracy of over 95%, although this increase in accuracy comes at a high computational cost.

  8. Modeling Incremental Initial Active Duty Continuation Probabilities in the Selected Marine Corps Reserve

    DTIC Science & Technology

    2014-03-01

    Regression. 2nd ed. Hoboken, NJ: John Wiley & Sons, 2000. Lien, Dianna, Aline Quester, and Robert Shuford, Marine Corps Deployment Tempo and Retention...calhoun.nps.edu/public/bitstream/handle/10945/5778/11Mar_Lizarraga.pdf?sequence=1 Quester, Aline, Laura Kelley, Cathy Hiatt, and Robert Shoford. Marine

  9. Academic Specialisation and Returns to Education: Evidence from India

    ERIC Educational Resources Information Center

    Saha, Bibhas; Sensarma, Rudra

    2011-01-01

    We study returns to academic specialisation for Indian corporate sector workers by analysing cross-sectional data on male employees randomly selected from six large firms. Our analysis shows that going to college pays off, as it brings significant incremental returns over and above school education. However, the increase in returns is more…

  10. Optimal Diameter Growth Equations for Major Tree Species of the Midsouth

    Treesearch

    Don C. Bragg

    2003-01-01

    Optimal diameter growth equations for 60 major tree species were fit using the potential relative increment (PRI) methodology. Almost 175,000 individuals from the Midsouth (Arkansas, Louisiana, Missouri, Oklahoma, and Texas) were selected from the USDA Forest Service's Eastwide Forest Inventory Database (EFIDB). These records were then reduced to the individuals...

  11. Application of a Multidimensional Nested Logit Model to Multiple-Choice Test Items

    ERIC Educational Resources Information Center

    Bolt, Daniel M.; Wollack, James A.; Suh, Youngsuk

    2012-01-01

    Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in "Psychometrika" 75:454-473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential…

  12. Beyond Exposure: A Person-Oriented Approach to Adolescent Media Diets

    ERIC Educational Resources Information Center

    Schooler, Deborah; Sorsoli, C. Lynn; Kim, Janna L.; Tolman, Deborah L.

    2009-01-01

    Research on adolescents' use of sexual media has been dominated by a variable-oriented perspective, focusing on incremental effects of media exposure on sexual behavior. The present investigation examines the ways in which adolescents select and organize their television viewing. This study used cluster analysis to identify, validate, and describe…

  13. Designing lymphocyte functional structure for optimal signal detection: voilà, T cells.

    PubMed

    Noest, A J

    2000-11-21

    One basic task of immune systems is to detect signals from unknown "intruders" amidst a noisy background of harmless signals. To clarify the functional importance of many observed lymphocyte properties, I ask: What properties would a cell have if one designed it according to the theory of optimal detection, with minimal regard for biological constraints? Sparse and reasonable assumptions about the statistics of available signals prove sufficient for deriving many features of the optimal functional structure, in an incremental and modular design. The use of one common formalism guarantees that all parts of the design collaborate to solve the detection task. Detection performance is computed at several stages of the design. Comparison between design variants reveals e.g. the importance of controlling the signal integration time. This predicts that an appropriate control mechanism should exist. Comparing the design to reality, I find a striking similarity with many features of T cells. For example, the formalism dictates clonal specificity, serial receptor triggering, (grades of) anergy, negative and positive selection, co-stimulation, high-zone tolerance, and clonal production of cytokines. Serious mismatches should be found if T cells were hindered by mechanistic constraints or vestiges of their (co-)evolutionary history, but I have not found clear examples. By contrast, fundamental mismatches abound when comparing the design to immune systems of e.g. invertebrates. The wide-ranging differences seem to hinge on the (in)ability to generate a large diversity of receptors. Copyright 2000 Academic Press.

  14. Affordance Boundaries Are Defined by Dynamic Capabilities of Parkour Athletes in Dropping from Various Heights

    PubMed Central

    Croft, James L.; Bertram, John E. A.

    2017-01-01

    Available behaviors are determined by the fit between features of the individual and reciprocal features of the environment. Beyond some critical boundary certain behaviors become impossible causing sudden transitions from one movement pattern to another. Parkour athletes have developed multiple movement patterns to deal with their momentum during landing. We were interested in whether drop distance would cause a sudden transition between a two-footed (precision) landing and a load-distributing roll and whether the transition height could be predicted by dynamic and geometric characteristics of individual subjects. Kinematics and ground reaction forces were measured as Parkour athletes stepped off a box from heights that were incrementally increased or decreased from 0.6 to 2.3 m. Individuals were more likely to roll from higher drops; those with greater body mass and less explosive leg power, were more likely to transition to a roll landing at a lower height. At some height a two-footed landing is no longer feasible but for some athletes this height was well within the maximum drop height used in this study. During low drops the primary task constraint of managing momentum could be achieved with either a precision landing or a roll. This meant that participants were free to select their preferred landing strategy, which was only partially influenced by the physical demands of the task. However, athletes with greater leg power appeared capable of managing impulse absorption through a leg mediated strategy up to a greater drop height. PMID:28979219

  15. iPro54-PseKNC: a sequence-based predictor for identifying sigma-54 promoters in prokaryote with pseudo k-tuple nucleotide composition

    PubMed Central

    Lin, Hao; Deng, En-Ze; Ding, Hui; Chen, Wei; Chou, Kuo-Chen

    2014-01-01

    The σ54 promoters are unique in prokaryotic genome and responsible for transcripting carbon and nitrogen-related genes. With the avalanche of genome sequences generated in the postgenomic age, it is highly desired to develop automated methods for rapidly and effectively identifying the σ54 promoters. Here, a predictor called ‘iPro54-PseKNC’ was developed. In the predictor, the samples of DNA sequences were formulated by a novel feature vector called ‘pseudo k-tuple nucleotide composition’, which was further optimized by the incremental feature selection procedure. The performance of iPro54-PseKNC was examined by the rigorous jackknife cross-validation tests on a stringent benchmark data set. As a user-friendly web-server, iPro54-PseKNC is freely accessible at http://lin.uestc.edu.cn/server/iPro54-PseKNC. For the convenience of the vast majority of experimental scientists, a step-by-step protocol guide was provided on how to use the web-server to get the desired results without the need to follow the complicated mathematics that were presented in this paper just for its integrity. Meanwhile, we also discovered through an in-depth statistical analysis that the distribution of distances between the transcription start sites and the translation initiation sites were governed by the gamma distribution, which may provide a fundamental physical principle for studying the σ54 promoters. PMID:25361964
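The incremental feature selection (IFS) procedure named in this record — grow the ranked feature list one feature at a time and keep the best-scoring prefix — can be sketched as follows; `evaluate` stands in for the paper's jackknife cross-validation score and is a hypothetical callable:

```python
def incremental_feature_selection(ranked_features, evaluate):
    """IFS: evaluate each prefix of a ranked feature list and return the
    prefix (and score) that performs best."""
    best_set, best_score = [], float("-inf")
    for k in range(1, len(ranked_features) + 1):
        subset = ranked_features[:k]
        score = evaluate(subset)
        if score > best_score:
            best_set, best_score = list(subset), score
    return best_set, best_score
```

Only n prefixes of an n-feature ranking are evaluated instead of 2^n subsets, which is what makes IFS tractable on top of a relevance ranking such as the one used for the pseudo k-tuple nucleotide composition.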

  16. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which gives RFs poor accuracy when working with high-dimensional data. In addition, RFs have a bias in the feature selection process whereby multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
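Two of the steps described here — the p-value filter and sampling split candidates from a strong and a weak feature subset — can be sketched as below. This is a simplification: the real xRF draws candidates per tree node with learned weights, and the function and parameter names are illustrative.

```python
import random

def pvalue_filter(features, pvalues, alpha=0.05):
    """First xRF step in miniature: drop features whose association
    p-value exceeds alpha (i.e., remove uninformative features)."""
    return [f for f, p in zip(features, pvalues) if p <= alpha]

def weighted_feature_sample(strong, weak, n_strong, n_weak, seed=0):
    """Draw node-splitting candidates from two feature subsets, favouring
    the informative ('strong') subset by sampling more from it."""
    rng = random.Random(seed)
    picked = rng.sample(strong, min(n_strong, len(strong)))
    picked += rng.sample(weak, min(n_weak, len(weak)))
    return picked
```

Biasing the draw toward the strong subset is what counters the tendency of plain RFs to split on uninformative features when dimensionality is high.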

  17. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, a feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings in different conditions, and the features which reflect fault characteristics most effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on the diagnosis of artificially created damage to rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF; the feature selection based MMF method outperforms both in detecting train axle bearing faults.
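A greyscale opening averaged over several structuring-element scales is the building block that a multi-scale morphological filter tunes. A pure-Python sketch with flat structuring elements and boundary-truncated windows (an illustration, not the authors' filter):

```python
def erode(signal, size):
    """Greyscale erosion of a 1-D signal: sliding-window minimum with a
    flat structuring element of the given size (truncated at the edges)."""
    half, n = size // 2, len(signal)
    return [min(signal[max(0, i - half): min(n, i + half + 1)]) for i in range(n)]

def dilate(signal, size):
    """Greyscale dilation: sliding-window maximum, same windowing as erode."""
    half, n = size // 2, len(signal)
    return [max(signal[max(0, i - half): min(n, i + half + 1)]) for i in range(n)]

def multiscale_opening(signal, scales=(3, 5, 7)):
    """Average of morphological openings (erosion then dilation) over
    several structuring-element scales."""
    openings = [dilate(erode(signal, s), s) for s in scales]
    return [sum(vals) / len(scales) for vals in zip(*openings)]
```

An opening removes positive impulses narrower than the structuring element while leaving the baseline intact, which is why the choice of scales (the quantity the paper selects via feature selection and grey relational analysis) matters for isolating bearing-fault impacts.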

  18. Genetic Particle Swarm Optimization-Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection.

    PubMed

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-07-30

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior to the other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.

  19. Genetic Programming and Frequent Itemset Mining to Identify Feature Selection Patterns of iEEG and fMRI Epilepsy Data

    PubMed Central

    Smart, Otis; Burrell, Lauren

    2014-01-01

    Pattern classification for intracranial electroencephalogram (iEEG) and functional magnetic resonance imaging (fMRI) signals has furthered epilepsy research toward understanding the origin of epileptic seizures and localizing dysfunctional brain tissue for treatment. Prior research has demonstrated that implicitly selecting features with a genetic programming (GP) algorithm determined the proper features to discern biomarker and non-biomarker interictal iEEG and fMRI activity more effectively than conventional feature selection approaches. However, for each of the iEEG and fMRI modalities, it is still uncertain whether the stochastic properties of indirect feature selection with a GP yield (a) consistent results within a patient data set and (b) features that are specific or universal across multiple patient data sets. We examined the reproducibility of implicitly selecting features to classify interictal activity using a GP algorithm by performing several selection trials and subsequent frequent itemset mining (FIM) for separate iEEG and fMRI epilepsy patient data. We observed within-subject consistency and across-subject variability with some small similarity for selected features, indicating a clear need for patient-specific features and a possible need for patient-specific feature selection and/or classification. For the fMRI, using nearest-neighbor classification and 30 GP generations, we obtained over 60% median sensitivity and over 60% median selectivity. For the iEEG, using nearest-neighbor classification and 30 GP generations, we obtained over 65% median sensitivity and over 65% median selectivity, except for one patient. PMID:25580059
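The frequent itemset mining step over repeated GP selection trials amounts to counting how often feature combinations are co-selected. A minimal sketch for pairs (names and the fixed itemset size are illustrative, not the authors' FIM configuration):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(trials, min_support, size=2):
    """Count co-selected feature groups of a given size across selection
    trials; keep those reaching min_support occurrences."""
    counts = Counter()
    for selected in trials:
        for combo in combinations(sorted(selected), size):
            counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n >= min_support}
```

Feature groups that recur across many stochastic GP runs for the same patient are exactly the "within-subject consistency" signal the study looks for.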

  20. The effect of PD-L1 testing on the cost-effectiveness and economic impact of immune checkpoint inhibitors for the second-line treatment of NSCLC.

    PubMed

    Aguiar, P N; Perry, L A; Penny-Dimri, J; Babiker, H; Tadokoro, H; de Mello, R A; Lopes, G L

    2017-09-01

    Immune checkpoint inhibitors improve outcomes compared with chemotherapy in lung cancer. Tumor PD-L1 receptor expression is being studied as a predictive biomarker. The objective of this study was to assess the cost-effectiveness and economic impact of second-line treatment with nivolumab, pembrolizumab, and atezolizumab with and without the use of PD-L1 testing for patient selection. We developed a decision-analytic model to determine the cost-effectiveness of PD-L1 assessment and second-line immunotherapy versus docetaxel. The model used outcomes data from randomized clinical trials (RCTs) and drug acquisition costs from the United States. Thereafter, we used epidemiologic data to estimate the economic impact of the treatment. We included four RCTs (2 with nivolumab, 1 with pembrolizumab, and 1 with atezolizumab). The incremental quality-adjusted life year (QALY) gain for nivolumab was 0.417 among squamous tumors and 0.287 among non-squamous tumors, and the incremental cost-effectiveness ratios (ICERs) were $155,605 and $187,685, respectively. The QALY gain in the base case for atezolizumab was 0.354 and the ICER was $215,802. Compared with treating all patients, the selection of patients by PD-L1 expression improved incremental QALY by up to 183% and decreased the ICER by up to 65%. Pembrolizumab was studied only in patients whose tumors expressed PD-L1. The QALY gain was 0.346 and the ICER was $98,421. Patient selection also reduced the budget impact of immunotherapy. The use of PD-L1 expression as a biomarker increases the cost-effectiveness of immunotherapy but also diminishes the number of potential life-years saved. © The Author 2017. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
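
    The arithmetic behind these figures is the incremental cost-effectiveness ratio: the extra cost of the new treatment over the comparator divided by the extra QALYs gained. A minimal sketch with hypothetical cost and QALY inputs (the abstract does not report the underlying costs):

```python
def icer(cost_new, cost_std, qaly_new, qaly_std):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    d_qaly = qaly_new - qaly_std
    if d_qaly == 0:
        raise ValueError("no incremental effectiveness")
    return (cost_new - cost_std) / d_qaly

# Hypothetical inputs: $62,000 extra cost for 0.4 extra QALYs.
ratio = icer(cost_new=90_000, cost_std=28_000, qaly_new=1.2, qaly_std=0.8)
```

    Selecting patients by a biomarker changes both the numerator and the denominator, which is how PD-L1 testing can lower the ICER while shrinking the treated population.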

  1. Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.

    NASA Astrophysics Data System (ADS)

    Sasaki, Hironori

    This dissertation describes the analysis of the photorefractive crystal dynamics and its application for opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when the memory is accessed. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics, demonstrating robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. Module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out.
The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of the feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of the Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.

  2. Pliocene Seasonality along the US Atlantic Coastal Plain Inferred from Growth Increment Analysis of Mercenaria carolinensis

    NASA Astrophysics Data System (ADS)

    Winkelstern, I. Z.; Surge, D. M.

    2010-12-01

    Pliocene sea surface temperature (SST) data from the US Atlantic coastal plain is currently insufficient for a detailed understanding of the climatic shifts that occurred during the period. Previous studies, based on oxygen isotope proxy data from marine shells and bryozoan zooid size analysis, have provided constraints on possible annual-scale SST ranges for the region. However, more data are required to fully understand the forcing mechanisms affecting regional Pliocene climate and evaluate modeled temperature projections. Bivalve sclerochronology (growth increment analysis) is an alternative proxy for SST that can provide annually resolved multi-year time series. The method has been validated in previous studies using modern Arctica, Chione, and Mercenaria. We analyzed Pliocene Mercenaria carolinensis shells using sclerochronologic methods and tested the hypothesis that higher SST ranges are reflected in shells selected from the warmest climate interval (3.5-3.3 Ma, upper Yorktown Formation, Virginia) and lower SST ranges are observable in shells selected from the subsequent cooling interval (2.4-1.8 Ma, Chowan River Formation, North Carolina). These results further establish the validity of growth increment analysis using fossil shells and provide the first large dataset (from the region) of reconstructed annual SST from floating time series during these intervals. These data will enhance our knowledge about a warm climate state that has been identified in the 2007 IPCC report as an analogue for expected global warming. Future work will expand this study to include sampling in Florida to gain detailed information about Pliocene SST along a latitudinal gradient.

  3. Method of generating features optimal to a dataset and classifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  4. Temporal Correlation Mechanisms and Their Role in Feature Selection: A Single-Unit Study in Primate Somatosensory Cortex

    PubMed Central

    Gomez-Ramirez, Manuel; Trzcinski, Natalie K.; Mihalas, Stefan; Niebur, Ernst

    2014-01-01

    Studies in vision show that attention enhances the firing rates of cells when it is directed towards their preferred stimulus feature. However, it is unknown whether other sensory systems employ this mechanism to mediate feature selection within their modalities. Moreover, whether feature-based attention modulates the correlated activity of a population is unclear. Indeed, temporal correlation codes such as spike-synchrony and spike-count correlations (rsc) are believed to play a role in stimulus selection by increasing the signal and reducing the noise in a population, respectively. Here, we investigate (1) whether feature-based attention biases the correlated activity between neurons when attention is directed towards their common preferred feature, (2) the interplay between spike-synchrony and rsc during feature selection, and (3) whether feature attention effects are common across the visual and tactile systems. Single-unit recordings were made in secondary somatosensory cortex of three non-human primates while animals engaged in tactile feature (orientation and frequency) and visual discrimination tasks. We found that both firing rate and spike-synchrony between neurons with similar feature selectivity were enhanced when attention was directed towards their preferred feature. However, attention effects on spike-synchrony were twice as large as those on firing rate, and had a tighter relationship with behavioral performance. Further, we observed increased rsc when attention was directed towards the visual modality (i.e., away from touch). These data suggest that similar feature selection mechanisms are employed in vision and touch, and that temporal correlation codes such as spike-synchrony play a role in mediating feature selection. 
We posit that feature-based selection operates by implementing multiple mechanisms that reduce the overall noise levels in the neural population and synchronize activity across subpopulations that encode the relevant features of sensory stimuli. PMID:25423284

  5. Feature Selection Methods for Zero-Shot Learning of Neural Activity.

    PubMed

    Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
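
    The correlation-based stability criterion mentioned above can be sketched as follows: features whose responses correlate across two repetitions of the same stimuli are kept. The data layout (rows = stimuli, columns = features) and the toy values are assumptions:

```python
import math

def stability_scores(rep1, rep2):
    """Per-feature Pearson correlation between two repetitions of the
    same stimuli (rows = stimuli, columns = features)."""
    scores = []
    for j in range(len(rep1[0])):
        x = [row[j] for row in rep1]
        y = [row[j] for row in rep2]
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        scores.append(cov / (sx * sy) if sx and sy else 0.0)
    return scores

def top_k(scores, k):
    """Indices of the k most stable features."""
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Feature 0 replicates across repetitions; feature 1 behaves like noise.
rep1 = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.8], [4.0, 0.2]]
rep2 = [[1.1, -0.5], [2.1, 0.9], [2.9, -0.2], [4.2, 0.1]]
scores = stability_scores(rep1, rep2)
```

    The most stable features are then passed on to the zero-shot prediction model.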

  6. Improving the Methods for the Economic Evaluation of Medical Devices.

    PubMed

    Tarricone, Rosanna; Callea, Giuditta; Ogorevc, Marko; Prevolnik Rupel, Valentina

    2017-02-01

    Medical devices (MDs) have distinctive features, such as incremental innovation, dynamic pricing, the learning curve and organisational impact, that need to be considered when they are evaluated. This paper investigates how MDs have been assessed in practice, in order to identify methodological gaps that need to be addressed to improve the decision-making process for their adoption. We used the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist supplemented by some additional categories to assess the quality of reporting and consideration of the distinctive features of MDs. Two case studies were considered: transcatheter aortic valve implantation (TAVI) representing an emerging technology and implantable cardioverter defibrillators (ICDs) representing a mature technology. Economic evaluation studies published as journal articles or within Health Technology Assessment reports were identified through a systematic literature review. A total of 19 studies on TAVI and 41 studies on ICDs were analysed. Learning curve was considered in only 16% of studies on TAVI. Incremental innovation was more frequently mentioned in the studies of ICDs, but its impact was considered in only 34% of the cases. Dynamic pricing was the most recognised feature but was empirically tested in less than half of studies of TAVI and only 32% of studies on ICDs. Finally, organisational impact was considered in only one study of ICDs and in almost all studies on TAVI, but none of them estimated its impact. By their very nature, most of the distinctive features of MDs cannot be fully assessed at market entry. However, their potential impact could be modelled, based on the experience with previous MDs, in order to make a preliminary recommendation. Then, well-designed post-market studies could help in reducing uncertainties and make policymakers more confident to achieve conclusive recommendations. © 2017 The Authors. 
Health Economics published by John Wiley & Sons, Ltd.

  7. Multi-Reanalysis Comparison of Variability in Analysis Increment of Column-Integrated Water Vapor Associated with Madden-Julian Oscillation

    NASA Astrophysics Data System (ADS)

    Yokoi, S.

    2014-12-01

    This study conducts a comparison of three reanalysis products (JRA-55, JRA-25, and ERA-Interim) in representation of Madden-Julian Oscillation (MJO), focusing on column-integrated water vapor (CWV) that is considered as an essential variable for discussing MJO dynamics. Besides the analysis fields of CWV, which exhibit spatio-temporal distributions that are quite similar to satellite observations, CWV tendency simulated by forecast models and analysis increment calculated by data assimilation are examined. For JRA-55, it is revealed that, while its forecast model is able to simulate eastward propagation of the CWV anomaly, it tends to weaken the amplitude, and data assimilation process sustains the amplitude. The multi-reanalysis comparison of the analysis increment further reveals that this weakening bias is probably caused by excessively weak cloud-radiative feedback represented by the model. This bias in the feedback strength makes anomalous moisture supply by the vertical advection term in the CWV budget equation too insensitive to precipitation anomaly, resulting in reduction of the amplitude of CWV anomaly. ERA-Interim has a nearly opposite feature; the forecast model represents excessively strong feedback and unrealistically strengthens the amplitude, while the data assimilation weakens it. These results imply the necessity of accurate representation of the cloud-radiative feedback strength for a short-term MJO forecast, and may be evidence to support the argument that this feedback is essential for the existence of MJO. Furthermore, this study demonstrates that the multi-reanalysis comparison of the analysis increment will provide useful information for identifying model biases and, potentially, for estimating parameters that are difficult to estimate solely from observation data, such as gross moist stability.

  8. Comparative wood anatomy of root and stem of Citharexylum myrianthum (Verbenaceae)

    Treesearch

    Carmen Regina Marcati; Leandro Roberto Longo; Alex Wiedenhoeft; Claudia Franca Barros

    2014-01-01

    Root and stem wood anatomy of C. myrianthum (Verbenaceae) from a semideciduous seasonal forest in Botucatu municipality (22°52′20″S and 48°26′37″W), São Paulo state, Brazil, was studied. Growth increments demarcated by semi-ring porosity and marginal bands of axial parenchyma were observed in the wood of both root and stem. Many qualitative features...

  9. Max-AUC Feature Selection in Computer-Aided Detection of Polyps in CT Colonography

    PubMed Central

    Xu, Jian-Wu; Suzuki, Kenji

    2014-01-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level. PMID:24608058

  10. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    PubMed

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
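
    The AUC-maximizing forward search can be sketched as below. This is a simplification: full SFFS also attempts conditional backward (floating) removals after each addition, and the scoring rule here (sum of the selected feature values) stands in for the SVM classifier used in the paper:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for s, lab in zip(scores, labels) if lab == 1]
    neg = [s for s, lab in zip(scores, labels) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def forward_select(X, y, n_keep):
    """Greedy forward pass: repeatedly add the feature whose inclusion
    maximizes AUC of a naive score (sum of the selected feature values)."""
    selected, remaining = [], list(range(len(X[0])))
    while remaining and len(selected) < n_keep:
        best = max(remaining,
                   key=lambda j: auc([sum(row[f] for f in selected + [j])
                                      for row in X], y))
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 0 separates the classes; feature 1 is uninformative.
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [0.8, 2.0]]
y = [0, 0, 1, 1]
chosen = forward_select(X, y, n_keep=1)
```

    The paper's second variant would stop this loop as soon as the AUC gain of the best candidate is no longer statistically significant.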

  11. Feature engineering for drug name recognition in biomedical texts: feature conjunction and feature selection.

    PubMed

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong; Fan, Xiaoming

    2015-01-01

    Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features such as part-of-speech, word shape, and dictionary features. Current machine learning-based methods usually use singleton features, perhaps because combining singleton features into conjunction features causes an explosion in the number of features and introduces many noisy ones. However, singleton features, which capture only one linguistic characteristic of a word, are not sufficient for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have not previously been reported. We select 8 types of singleton features and combine them into conjunction features in two ways. Then, chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection can improve the performance of the DNR system with a moderate number of features, and our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge.
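
    A minimal sketch of the two ideas, conjunction features and chi-square filtering, is below; the feature names and toy data are hypothetical:

```python
def conjoin(token_features):
    """Combine a token's singleton features into pairwise conjunction
    features (hypothetical feature names)."""
    singles = sorted(token_features.items())
    return {f"{k1}={v1}|{k2}={v2}"
            for i, (k1, v1) in enumerate(singles)
            for (k2, v2) in singles[i + 1:]}

def chi_square(feature_present, labels):
    """2x2 chi-square association score of a binary feature with
    binary labels."""
    n = len(labels)
    a = sum(1 for f, lab in zip(feature_present, labels) if f and lab)
    b = sum(1 for f, lab in zip(feature_present, labels) if f and not lab)
    c = sum(1 for f, lab in zip(feature_present, labels) if not f and lab)
    d = n - a - b - c
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

feats = conjoin({"shape": "Xxxx", "pos": "NN"})
```

    Conjunction inflates the feature space, so a filter such as chi-square keeps only the conjoined features that associate strongly with the drug-name labels.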

  12. Effect of feature-selective attention on neuronal responses in macaque area MT

    PubMed Central

    Chen, X.; Hoffmann, K.-P.; Albright, T. D.

    2012-01-01

    Attention influences visual processing in striate and extrastriate cortex, which has been extensively studied for spatial-, object-, and feature-based attention. Most studies exploring neural signatures of feature-based attention have trained animals to attend to an object identified by a certain feature and ignore objects/displays identified by a different feature. Little is known about the effects of feature-selective attention, where subjects attend to one stimulus feature domain (e.g., color) of an object while features from different domains (e.g., direction of motion) of the same object are ignored. To study this type of feature-selective attention in area MT in the middle temporal sulcus, we trained macaque monkeys to either attend to and report the direction of motion of a moving sine wave grating (a feature for which MT neurons display strong selectivity) or attend to and report its color (a feature for which MT neurons have very limited selectivity). We hypothesized that neurons would upregulate their firing rate during attend-direction conditions compared with attend-color conditions. We found that feature-selective attention significantly affected 22% of MT neurons. Contrary to our hypothesis, these neurons did not necessarily increase firing rate when animals attended to direction of motion but fell into one of two classes. In one class, attention to color increased the gain of stimulus-induced responses compared with attend-direction conditions. The other class displayed the opposite effects. Feature-selective activity modulations occurred earlier in neurons modulated by attention to color compared with neurons modulated by attention to motion direction. Thus feature-selective attention influences neuronal processing in macaque area MT but often exhibited a mismatch between the preferred stimulus dimension (direction of motion) and the preferred attention dimension (attention to color). PMID:22170961

  13. Effect of feature-selective attention on neuronal responses in macaque area MT.

    PubMed

    Chen, X; Hoffmann, K-P; Albright, T D; Thiele, A

    2012-03-01

    Attention influences visual processing in striate and extrastriate cortex, which has been extensively studied for spatial-, object-, and feature-based attention. Most studies exploring neural signatures of feature-based attention have trained animals to attend to an object identified by a certain feature and ignore objects/displays identified by a different feature. Little is known about the effects of feature-selective attention, where subjects attend to one stimulus feature domain (e.g., color) of an object while features from different domains (e.g., direction of motion) of the same object are ignored. To study this type of feature-selective attention in area MT in the middle temporal sulcus, we trained macaque monkeys to either attend to and report the direction of motion of a moving sine wave grating (a feature for which MT neurons display strong selectivity) or attend to and report its color (a feature for which MT neurons have very limited selectivity). We hypothesized that neurons would upregulate their firing rate during attend-direction conditions compared with attend-color conditions. We found that feature-selective attention significantly affected 22% of MT neurons. Contrary to our hypothesis, these neurons did not necessarily increase firing rate when animals attended to direction of motion but fell into one of two classes. In one class, attention to color increased the gain of stimulus-induced responses compared with attend-direction conditions. The other class displayed the opposite effects. Feature-selective activity modulations occurred earlier in neurons modulated by attention to color compared with neurons modulated by attention to motion direction. Thus feature-selective attention influences neuronal processing in macaque area MT but often exhibited a mismatch between the preferred stimulus dimension (direction of motion) and the preferred attention dimension (attention to color).

  14. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.
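
    The split-half resampling protocol can be sketched as follows: select features independently on each half of a split and measure the overlap of the resulting sets. The toy selector (ranking by class-mean difference), the data, and the explicit splits are assumptions made to keep the example deterministic:

```python
def select_top_k(X, y, k):
    """Toy filter selector: rank features by absolute difference of
    class means."""
    def score(j):
        c0 = [row[j] for row, lab in zip(X, y) if lab == 0]
        c1 = [row[j] for row, lab in zip(X, y) if lab == 1]
        return abs(sum(c1) / len(c1) - sum(c0) / len(c0))
    return set(sorted(range(len(X[0])), key=lambda j: -score(j))[:k])

def split_half_stability(X, y, k, splits):
    """Mean Jaccard overlap of the feature sets selected on the two
    halves of each split."""
    overlaps = []
    for half_a in splits:
        half_b = [i for i in range(len(X)) if i not in half_a]
        sa = select_top_k([X[i] for i in half_a], [y[i] for i in half_a], k)
        sb = select_top_k([X[i] for i in half_b], [y[i] for i in half_b], k)
        overlaps.append(len(sa & sb) / len(sa | sb))
    return sum(overlaps) / len(overlaps)

# Feature 0 is strongly discriminative in every half; features 1-2 are noise.
X = [[0, 0.2, 0.1], [0, 0.1, 0.4], [0, 0.3, 0.2], [0, 0.2, 0.3],
     [10, 0.1, 0.2], [10, 0.4, 0.1], [10, 0.2, 0.3], [10, 0.3, 0.2]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
splits = [[0, 1, 4, 5], [0, 2, 4, 6], [1, 3, 5, 7]]
stability = split_half_stability(X, y, 1, splits)
```

    Low overlap across splits is exactly the feature-set instability the study quantifies alongside out-of-sample accuracy.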

  15. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency. It requires elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking with respect to the global best features was used to recognize the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used to estimate the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE). Creative Commons Attribution License

  16. Genetic Particle Swarm Optimization–Based Feature Selection for Very-High-Resolution Remotely Sensed Imagery Object Change Detection

    PubMed Central

    Chen, Qiang; Chen, Yunhao; Jiang, Weiguo

    2016-01-01

    In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO outperforms other algorithms in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314). Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm. PMID:27483285
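
    One plausible reading of the RMV fitness, evaluated on a toy object-level difference table, is sketched below. The data layout and the exhaustive search are assumptions made for illustration: GPSO explores the subset space stochastically with particle-swarm updates plus genetic operators rather than enumerating it.

```python
from itertools import combinations

def rmv_fitness(diff, subset):
    """Ratio of mean to variance of per-object change magnitudes over
    the chosen features."""
    mags = [sum(abs(row[j]) for j in subset) for row in diff]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return mean / var if var else float("inf")

def best_subset(diff, size):
    """Exhaustive search over fixed-size feature subsets; GPSO replaces
    this enumeration with a genetic-operator-enhanced particle swarm."""
    n_features = len(diff[0])
    return max(combinations(range(n_features), size),
               key=lambda s: rmv_fitness(diff, s))

# Toy per-object feature differences: feature 0 changes consistently
# (high mean, low variance); feature 1 fluctuates.
diff = [[2.0, 0.1], [2.1, 3.0], [1.9, 0.2], [2.0, 2.5]]
```

    A subset with a large, consistent change signal scores highly, which is the behavior a change-detection fitness function should reward.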

  17. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in the field of EEG signals based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages involved are: lowering the computational burden so as to speed up the learning procedure and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: a broad frequency band filtering and common spatial pattern enhancement as preprocessing, feature extraction by autoregressive model and log-variance, the Kullback-Leibler divergence based optimal feature and time segment selection, and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signals classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segment, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
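
    A divergence-based feature score of the kind described can be sketched by fitting a Gaussian per class to each feature and ranking features by symmetric Kullback-Leibler divergence between the two class-conditional distributions. The data and its layout below are hypothetical:

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence KL(p||q) between two univariate Gaussians."""
    return (math.log(math.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q) ** 2) / (2 * var_q) - 0.5)

def kl_feature_scores(X, y):
    """Score each feature by the symmetric KL divergence between the
    Gaussians fitted to its two class-conditional distributions."""
    scores = []
    for j in range(len(X[0])):
        c0 = [row[j] for row, lab in zip(X, y) if lab == 0]
        c1 = [row[j] for row, lab in zip(X, y) if lab == 1]
        m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
        v0 = sum((v - m0) ** 2 for v in c0) / len(c0)
        v1 = sum((v - m1) ** 2 for v in c1) / len(c1)
        scores.append(gaussian_kl(m0, v0, m1, v1)
                      + gaussian_kl(m1, v1, m0, v0))
    return scores

# Feature 0 separates the two classes far better than feature 1.
X = [[0.0, 1.0], [0.2, 3.0], [5.0, 2.0], [5.2, 0.0]]
y = [0, 0, 1, 1]
scores = kl_feature_scores(X, y)
```

    Features (and, analogously, candidate time segments) with the largest divergence between classes are the most discriminative and are retained.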

  18. Sentiment analysis of feature ranking methods for classification accuracy

    NASA Astrophysics Data System (ADS)

    Joseph, Shashank; Mugauri, Calvin; Sumathy, S.

    2017-11-01

    Text pre-processing and feature selection are important and critical steps in text mining. Pre-processing large volumes of datasets is a difficult task, as unstructured raw data must be converted into a structured format. Traditional methods of processing and weighting took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input to feature selection, which helps improve text classification accuracy. Of the three feature selection categories available, the filter category is the focus here. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analyzed.

  19. Successful Principalship in Norway: Sustainable Ethos and Incremental Changes?

    ERIC Educational Resources Information Center

    Moller, Jorunn; Vedoy, Gunn; Presthus, Anne Marie; Skedsmo, Guri

    2009-01-01

    Purpose: The purpose of this paper is to explore whether and how success has been sustained over time in schools which were identified as being successful five years ago. Design/methodology/approach: Three schools were selected for a revisit, and the sample included two combined schools (grade 1-10) and one upper secondary school (grade 11-13). In…

  20. Multibeam collimator uses prism stack

    NASA Technical Reports Server (NTRS)

    Minott, P. O.

    1981-01-01

    Optical instrument creates many divergent light beams for surveying and machine element alignment applications. Angles and refractive indices of stack of prisms are selected to divert incoming laser beam by small increments, different for each prism. Angles of emerging beams thus differ by small, precisely controlled amounts. Instrument is nearly immune to vibration, changes in gravitational force, temperature variations, and mechanical distortion.

  1. Non-metallic coating thickness prediction using artificial neural network and support vector machine with time resolved thermography

    NASA Astrophysics Data System (ADS)

    Wang, Hongjin; Hsieh, Sheng-Jen; Peng, Bo; Zhou, Xunfei

    2016-07-01

    A method without requirements on knowledge of the thermal properties of coatings or substrates would be of interest in industrial applications. Supervised machine learning regression may provide a possible solution to this problem. This paper compares the performance of two regression models (artificial neural networks (ANN) and support vector machines for regression (SVM)) with respect to coating thickness estimates made from surface temperature increments collected via time-resolved thermography. We describe the role of SVM in coating thickness prediction. Non-dimensional analyses are conducted to illustrate the effects of coating thickness and various factors on surface temperature increments; it is theoretically possible to correlate coating thickness with surface temperature increment. Based on the analyses, the laser power is selected such that, during heating, the temperature increment is high enough to resolve the coating thickness variance but low enough to avoid surface melting. Sixty-one paint-coated samples with coating thicknesses varying from 63.5 μm to 571 μm are used to train the models. Hyper-parameters of the models are optimized by 10-fold cross-validation. Another 28 sets of data are then collected to test the performance of the models. The study shows that SVM can provide reliable predictions of unknown data, due to its deterministic characteristics, and it works well when used for a small input data group. The SVM model generates more accurate coating thickness estimates than the ANN model.

  2. Mutual information criterion for feature selection with application to classification of breast microcalcifications

    NASA Astrophysics Data System (ADS)

    Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit

    2016-03-01

    Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection based on mutual information (MI) criterion for automatic classification of microcalcifications. We explored the MI based feature selection for various texture features. The proposed method was evaluated on a standardized digital database for screening mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using the MI-based feature selection to obtain the most relevant features for the task and thus to provide for improved performance as compared to using all features.
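    For discrete (or discretized) texture features, an MI criterion reduces to the classical plug-in estimate. The sketch below ranks candidate features by their MI with the class label; the top-k wrapper and all names are illustrative, not the authors' code:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in nats for two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def select_top_k(features, labels, k):
    # Rank each candidate feature (a column of discretized values)
    # by its MI with the labels and keep the k best
    order = sorted(range(len(features)),
                   key=lambda i: mutual_information(features[i], labels),
                   reverse=True)
    return order[:k]
```

    A feature that carries no information about the label scores zero, while a feature that determines a balanced binary label scores log 2, the label entropy.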

  3. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    PubMed Central

    Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513

  4. A genetic algorithm for optimization of neural network capable of learning to search for food in a maze

    NASA Astrophysics Data System (ADS)

    Budilova, E. V.; Terekhin, A. T.; Chepurnov, S. A.

    1994-09-01

    A hypothetical neural scheme is proposed that ensures efficient decision making by an animal searching for food in a maze. Only the general structure of the network is fixed; its quantitative characteristics are found by numerical optimization that simulates the process of natural selection. Selection is aimed at maximization of the expected number of descendants, which is directly related to the energy stored during the reproductive cycle. The main parameters to be optimized are the increments of the interneuronal links and the working-memory constants.

  5. Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection

    NASA Astrophysics Data System (ADS)

    Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu

    Medical data mining is the search for relationships and patterns within medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant, and noisy features in the process model results in poor predictive accuracy. Much research in data mining has gone into improving the predictive accuracy of classifiers by applying feature selection techniques. Feature selection is especially valuable in medical data mining because diagnosis can then be made in this patient-care activity with a minimum number of significant features. The objective of this work is to show that selecting the more significant features improves the performance of the classifier. We empirically evaluate the classification effectiveness of the LibSVM classifier on the reduced feature subset of a diabetes dataset. The evaluations suggest that the selected feature subset improves the predictive accuracy of the classifier and reduces false negatives and false positives.
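    The plain (non-kernel) two-class F-score that underlies this kind of ranking can be sketched as follows; this is a simplification of the kernel variant the paper proposes, and the names and toy values are ours:

```python
def f_score(pos, neg):
    """Two-class F-score of one feature: between-class spread over
    within-class spread (larger = more discriminative)."""
    allv = pos + neg
    m = sum(allv) / len(allv)
    mp = sum(pos) / len(pos)
    mn = sum(neg) / len(neg)
    between = (mp - m) ** 2 + (mn - m) ** 2
    within = (sum((x - mp) ** 2 for x in pos) / (len(pos) - 1)
              + sum((x - mn) ** 2 for x in neg) / (len(neg) - 1))
    return between / within if within > 0 else float('inf')
```

    In a pipeline like the one described, one would compute this score per feature column and keep only features above a chosen threshold before training the LibSVM classifier.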

  6. The fate of task-irrelevant visual motion: perceptual load versus feature-based attention.

    PubMed

    Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli

    2009-11-18

    We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.

  7. Classification Influence of Features on Given Emotions and Its Application in Feature Selection

    NASA Astrophysics Data System (ADS)

    Xing, Yin; Chen, Chuang; Liu, Li-Long

    2018-04-01

    In order to solve the problem that there is a large amount of redundant data in high-dimensional speech emotion features, we analyze the extracted speech emotion features in depth and select the better ones. Firstly, a given emotion is classified by each feature individually. Secondly, the recognition rates are ranked in descending order. Then, the optimal threshold of features is determined by a rate criterion. Finally, the better features are obtained. When applied to the Berlin and Chinese emotional datasets, the experimental results show that the feature selection method outperforms the other traditional methods.
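    The selection loop described (per-feature recognition rate, descending sort, threshold cut) can be sketched as follows; the rate values and the threshold rule are illustrative:

```python
def select_by_rate(rates, threshold):
    """rates[j]: recognition rate obtained when the given emotion is
    classified using feature j alone. Features are ranked in descending
    order of rate, and those clearing the threshold are kept."""
    ranked = sorted(range(len(rates)), key=lambda j: rates[j], reverse=True)
    return [j for j in ranked if rates[j] >= threshold]
```

    For example, `select_by_rate([0.42, 0.71, 0.55, 0.30], 0.5)` returns `[1, 2]`: the two features whose single-feature recognition rate clears 0.5, best first.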

  8. Feature Selection in Classification of Eye Movements Using Electrooculography for Activity Recognition

    PubMed Central

    Mala, S.; Latha, K.

    2014-01-01

    Activity recognition is needed in a variety of applications, for example, reconnaissance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of informative features from eye movements recorded using electrooculography (EOG), Differential Evolution (DE), an efficient evolutionary optimizer, is used. Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness based features, minimum redundancy maximum relevance features, and Differential Evolution based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for faultless activity recognition. PMID:25574185
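    A DE-based wrapper of the kind described can be sketched with the classic DE/rand/1/bin update over continuous vectors that are thresholded into feature masks. Here `score` stands for whatever wrapper objective is used (e.g. cross-validated classification accuracy); all names and parameters are illustrative:

```python
import random

random.seed(1)

def de_select(score, n_feat, pop_size=12, iters=40, F=0.8, CR=0.9):
    """DE/rand/1/bin; a feature is 'selected' when its component
    exceeds 0.5 (a common binarisation for feature selection)."""
    def mask(v):
        return [x > 0.5 for x in v]
    pop = [[random.random() for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(n_feat)   # forces one mutated gene
            trial = [min(1.0, max(0.0, a[k] + F * (b[k] - c[k])))
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(n_feat)]
            if score(mask(trial)) >= score(mask(pop[i])):
                pop[i] = trial   # greedy one-to-one selection
    return mask(max(pop, key=lambda v: score(mask(v))))
```

    The greedy one-to-one replacement means each individual's score never decreases, which is what drives the population toward informative feature subsets.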

  10. Evaluation of auxiliary power subsystems for gas engine heat pumps, phase 2

    NASA Astrophysics Data System (ADS)

    Rasmussen, R. W.; Wahlstedt, D. A.; Planer, N.; Fink, J.; Persson, E.

    1988-12-01

    The need to determine the practical, technical and economic viability for a stand-alone Gas Engine Heat Pump (GEHP) system capable of generating its own needed electricity is addressed. Thirty-eight reasonable design configurations were conceived based upon small-sized power conversion equipment that is either commercially available or close to emerging on the market. Nine of these configurations were analyzed due to their potential for low first cost, high conversion efficiency, availability or simplicity. It was found that electric consumption can be reduced by over 60 percent through the implementation of high efficiency, brushless, permanent magnet motors as fan and pump drivers. Of the nine selected configurations employing variable-speed fans, two were found to have simple incremental payback periods of 4.2 to 16 years, depending on the U.S. city chosen for analysis. Although the auxiliary power subsystem option is only marginally attractive from an economic standpoint, the increased gas load provided to the local gas utility may be sufficient to encourage further development. The ability of the system to operate completely disconnected from the electric power source may be a feature of high merit.

  11. Effects of protocol step length on biomechanical measures in swimming.

    PubMed

    Barbosa, Tiago M; de Jesus, Kelly; Abraldes, J Arturo; Ribeiro, João; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo J

    2015-03-01

    The assessment of energetic and mechanical parameters in swimming often requires the use of an intermittent incremental protocol, whose step lengths are cornerstones of the efficiency of the evaluation procedures. The aim was to analyze changes in swimming kinematics and interlimb coordination in 3 variants, with different step lengths, of an intermittent incremental protocol. Twenty-two male swimmers performed n × di variants of an intermittent and incremental protocol (n ≤ 7; d1 = 200 m, d2 = 300 m, and d3 = 400 m). Swimmers were videotaped in the sagittal plane for 2-dimensional kinematical analysis using a dual-media setup. Video images were digitized with a motion-capture system. Parameters assessed included stroke kinematics, segmental and anatomical landmark kinematics, and interlimb coordination. Movement efficiency was also estimated. There were no significant variations in any of the selected variables according to step length. A high to very high relationship was observed between step lengths. The bias was much reduced and the 95% CI fairly tight. Since there were no meaningful differences between the 3 protocol variants, the one with the shortest step length (i.e., 200 m) should be adopted for logistical reasons.

  12. Micromagnetic simulation study of magnetization reversal in torus-shaped permalloy nanorings

    NASA Astrophysics Data System (ADS)

    Mishra, Amaresh Chandra; Giri, R.

    2017-09-01

    Using micromagnetic simulation, the magnetization reversal of soft permalloy rings of torus shape with major radius R varying within 20-100 nm has been investigated. The minor radius r of the torus rings was increased from 5 nm up to a maximum value r_max such that R − r_max = 10 nm. Micromagnetic simulation of the in-plane hysteresis curves of these nanorings revealed that for very thin rings (r ≤ 10 nm) the remanent state is an onion state, whereas for all other rings the remanent state is a vortex state. The area of the hysteresis loop was found to decrease gradually with increasing r. The normalized area under the hysteresis loops (A_N) increases initially with increasing r, attains a maximum at a certain value r = r_0, and decreases thereafter. This value r_0 increases as R decreases; as a result, the peak feature is hardly visible for smaller rings (rings having small R).

  13. The Incremental Multiresolution Matrix Factorization Algorithm

    PubMed Central

    Ithapu, Vamsi K.; Kondor, Risi; Johnson, Sterling C.; Singh, Vikas

    2017-01-01

    Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices – an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct “global” factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision. PMID:29416293

  14. The role of prominence in Spanish sentence comprehension: An ERP study.

    PubMed

    Gattei, Carolina A; Tabullo, Ángel; París, Luis; Wainselboim, Alejandro J

    2015-11-01

    Prominence is the hierarchical relation among arguments that allows us to understand 'who did what to whom' in a sentence. The present study aimed to provide evidence about the role of prominence information in the incremental interpretation of arguments in Spanish. We investigated the time course of neural correlates associated with the comprehension of sentences that require a reversal of the argument prominence hierarchy. We also studied how the amount of available prominence information may affect the incremental build-up of verbal expectations. The ERP data revealed that at the disambiguating verb region, object-initial sentences (only one argument available) elicited a centro-parietal negativity peaking at 400 ms post-onset. Subject-initial sentences (two arguments available) yielded a broadly distributed positivity at around 650 ms. This dissociation suggests that argument interpretation may depend on the arguments' morphosyntactic features, and also on the amount of prominence information available before the verb is encountered. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Coexistence of positive and negative refractive index sensitivity in the liquid-core photonic crystal fiber based plasmonic sensor.

    PubMed

    Shuai, Binbin; Xia, Li; Liu, Deming

    2012-11-05

    We present and numerically characterize a liquid-core photonic crystal fiber based plasmonic sensor. The coupling properties and sensing performance are investigated by the finite element method. It is found that not only the plasmonic mode dispersion relation but also the fundamental mode dispersion relation is rather sensitive to the analyte refractive index (RI). Positive and negative RI sensitivity coexist in the proposed design: the sensor features a positive RI sensitivity when the increment of the SPP mode effective index is larger than that of the fundamental mode, but a negative RI sensitivity once the increment of the fundamental mode becomes larger. A maximum negative RI sensitivity of -5500 nm/RIU (Refractive Index Unit) is achieved in the sensing range of 1.50-1.53. The effects of the structural parameters on the plasmonic excitations are also studied, with a view to tuning and optimizing the resonant spectrum.

  16. International Space Station Increment-2 Quick Look Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric

    2001-01-01

    The objective of this quick look report is to disseminate the International Space Station (ISS) Increment-2 reduced gravity environment preliminary analysis in a timely manner to the microgravity scientific community. This report is a quick look at the processed acceleration data collected by the Microgravity Acceleration Measurement System (MAMS) during the period of May 3 to June 8, 2001. The report is by no means an exhaustive examination of all the relevant activities, which occurred during the time span mentioned above for two reasons. First, the time span being considered in this report is rather short since the MAMS was not active throughout the time span being considered to allow a detailed characterization. Second, as the name of the report implied, it is a quick look at the acceleration data. Consequently, a more comprehensive report, the ISS Increment-2 report, will be published following the conclusion of the Increment-2 tour of duty. NASA sponsors the MAMS and the Space Acceleration Microgravity System (SAMS) to support microgravity science experiments, which require microgravity acceleration measurements. On April 19, 2001, both the MAMS and the SAMS units were launched on STS-100 from the Kennedy Space Center for installation on the ISS. The MAMS unit was flown to the station in support of science experiments requiring quasisteady acceleration data measurements, while the SAMS unit was flown to support experiments requiring vibratory acceleration data measurement. Both acceleration systems are also used in support of the vehicle microgravity requirements verification. The ISS reduced gravity environment analysis presented in this report uses mostly the MAMS acceleration data measurements (the Increment-2 report will cover both systems). The MAMS has two sensors. 
The MAMS Orbital Acceleration Research Experiment Sensor Subsystem, which is a low frequency range sensor (up to 1 Hz), is used to characterize the quasi-steady environment for payloads and vehicle. The MAMS High Resolution Acceleration Package is used to characterize the ISS vibratory environment up to 100 Hz. This quick look report presents some selected quasi-steady and vibratory activities recorded by the MAMS during the ongoing ISS Increment-2 tour of duty.

  17. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    NASA Astrophysics Data System (ADS)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
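    LASSO's role as a feature selector (coefficients driven exactly to zero drop out of the model) can be sketched with a plain coordinate-descent solver. This is a generic textbook LASSO, not the authors' implementation, and the toy orthogonal design in the usage example is ours:

```python
def soft_threshold(z, g):
    # S(z, g) = sign(z) * max(|z| - g, 0)
    return max(z - g, 0.0) - max(-z - g, 0.0)

def lasso_cd(X, y, lam, iters=100):
    """Coordinate descent for (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # residual with feature j's contribution removed
            r = [y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / zj if zj > 0 else 0.0
    return w
```

    The selected features are those with nonzero coefficients: a strongly predictive feature keeps a (shrunken) weight, while a weak one is thresholded exactly to zero, which is the behavior the study exploits to keep only two features per model.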

  18. Feature Grouping and Selection Over an Undirected Graph.

    PubMed

    Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping

    2012-01-01

    High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features, has been considered to be promising in promoting regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation, when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise ℓ∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves the first one by utilizing a non-convex function to reduce the estimation bias. The third one is the extension of the second method using a truncated ℓ1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.

  19. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
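    The query-feedback idea of refining selectivity estimates by regressing on observed values can be illustrated with an ordinary least-squares line fit; the (predicate value, observed selectivity) pairs and all names here are illustrative:

```python
def fit_line(points):
    """Ordinary least-squares fit y = a + b*x to feedback observations
    given as (predicate value, observed selectivity) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b
```

    Each executed query contributes one observed point, so the fitted curve tracks the actual value distribution instead of a static off-line estimate; splines generalize the same idea to piecewise fits.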

  20. An adaptive incremental approach to constructing ensemble classifiers: Application in an information-theoretic computer-aided decision system for detection of masses in mammograms

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2009-01-01

    Ensemble classifiers have been shown efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: Random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196
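    The "random selection" construction (each subclassifier trained on a random subset of the development data, predictions combined by vote) can be sketched as follows; the threshold learner in the usage example and all names are illustrative, not the IT-CAD model:

```python
import random

random.seed(2)

def build_ensemble(train, make_clf, size, subset_frac=0.6):
    """Random selection: each subclassifier is fit on a random subset
    of the development set, drawn without replacement."""
    k = max(1, int(len(train) * subset_frac))
    return [make_clf(random.sample(train, k)) for _ in range(size)]

def majority_vote(ensemble, x):
    # binary vote over the subclassifiers; 1 wins on ties
    votes = sum(clf(x) for clf in ensemble)
    return 1 if 2 * votes >= len(ensemble) else 0
```

    Because each subclassifier stores only a fraction of the cases, the case base shrinks accordingly, which is the storage/response-time benefit the article reports.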

  1. Characterization of pancreatic islets in two selectively bred mouse lines with different susceptibilities to high-fat diet-induced glucose intolerance.

    PubMed

    Nagao, Mototsugu; Asai, Akira; Inaba, Wataru; Kawahara, Momoyo; Shuto, Yuki; Kobayashi, Shunsuke; Sanoyama, Daisuke; Sugihara, Hitoshi; Yagihashi, Soroku; Oikawa, Shinichi

    2014-01-01

    Hereditary predisposition to diet-induced type 2 diabetes has not yet been fully elucidated. We recently established 2 mouse lines with different susceptibilities (resistant and prone) to high-fat diet (HFD)-induced glucose intolerance by selective breeding (designated selectively bred diet-induced glucose intolerance-resistant [SDG-R] and -prone [SDG-P], respectively). To investigate the predisposition to HFD-induced glucose intolerance in pancreatic islets, we examined the islet morphological features and functions in these novel mouse lines. Male SDG-P and SDG-R mice were fed a HFD for 5 weeks. Before and after HFD feeding, glucose tolerance was evaluated by oral glucose tolerance test (OGTT). Morphometry and functional analyses of the pancreatic islets were also performed before and after the feeding period. Before HFD feeding, SDG-P mice showed modestly higher postchallenge blood glucose levels and lower insulin increments in OGTT than SDG-R mice. Although SDG-P mice showed greater β cell proliferation than SDG-R mice under HFD feeding, SDG-P mice developed overt glucose intolerance, whereas SDG-R mice maintained normal glucose tolerance. Regardless of whether it was before or after HFD feeding, the isolated islets from SDG-P mice showed impaired glucose- and KCl-stimulated insulin secretion relative to those from SDG-R mice; accordingly, the expression levels of the insulin secretion-related genes in SDG-P islets were significantly lower than those in SDG-R islets. These findings suggest that the innate predispositions in pancreatic islets may determine the susceptibility to diet-induced diabetes. SDG-R and SDG-P mice may therefore be useful polygenic animal models to study the gene-environment interactions in the development of type 2 diabetes.

  2. Characterization of Pancreatic Islets in Two Selectively Bred Mouse Lines with Different Susceptibilities to High-Fat Diet-Induced Glucose Intolerance

    PubMed Central

    Nagao, Mototsugu; Asai, Akira; Inaba, Wataru; Kawahara, Momoyo; Shuto, Yuki; Kobayashi, Shunsuke; Sanoyama, Daisuke; Sugihara, Hitoshi; Yagihashi, Soroku; Oikawa, Shinichi

    2014-01-01

    Hereditary predisposition to diet-induced type 2 diabetes has not yet been fully elucidated. We recently established 2 mouse lines with different susceptibilities (resistant and prone) to high-fat diet (HFD)-induced glucose intolerance by selective breeding (designated selectively bred diet-induced glucose intolerance-resistant [SDG-R] and -prone [SDG-P], respectively). To investigate the predisposition to HFD-induced glucose intolerance in pancreatic islets, we examined the islet morphological features and functions in these novel mouse lines. Male SDG-P and SDG-R mice were fed a HFD for 5 weeks. Before and after HFD feeding, glucose tolerance was evaluated by oral glucose tolerance test (OGTT). Morphometry and functional analyses of the pancreatic islets were also performed before and after the feeding period. Before HFD feeding, SDG-P mice showed modestly higher postchallenge blood glucose levels and lower insulin increments in OGTT than SDG-R mice. Although SDG-P mice showed greater β cell proliferation than SDG-R mice under HFD feeding, SDG-P mice developed overt glucose intolerance, whereas SDG-R mice maintained normal glucose tolerance. Regardless of whether it was before or after HFD feeding, the isolated islets from SDG-P mice showed impaired glucose- and KCl-stimulated insulin secretion relative to those from SDG-R mice; accordingly, the expression levels of the insulin secretion-related genes in SDG-P islets were significantly lower than those in SDG-R islets. These findings suggest that the innate predispositions in pancreatic islets may determine the susceptibility to diet-induced diabetes. SDG-R and SDG-P mice may therefore be useful polygenic animal models to study the gene–environment interactions in the development of type 2 diabetes. PMID:24454742

  3. Progress in the planar CPn SOFC system design verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elangovan, S.; Hartvigsen, J.; Khandkar, A.

    1996-04-01

    SOFCo is developing a high efficiency, modular and scalable planar SOFC module termed the CPn design. This design has been verified in a 1.4 kW module test operated directly on pipeline natural gas. The design features multistage oxidation of fuel, wherein the fuel is consumed incrementally over several stages. High efficiency is achieved by uniform current density distribution per stage, which lowers the stack resistance. Additional benefits include thermal regulation and compactness. Test results from stack modules operating on pipeline natural gas are presented.

  4. Natural image statistics and low-complexity feature selection.

    PubMed

    Vasconcelos, Manuela; Vasconcelos, Nuno

    2009-02-01

    Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.
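
    The paper's order-1 regime, in which inter-feature dependencies are ignored and each feature is scored marginally against the class, can be illustrated with a small sketch. This is our own construction on synthetic data, not the paper's algorithm; mutual information is used as one common marginal score.

```python
# Order-1 (marginal) feature scoring: score each feature by its individual
# dependence on the class label, ignoring feature-feature dependencies.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 5))
X[:, 0] += 2.0 * y           # feature 0 carries strong class information
X[:, 1] += 1.0 * y           # feature 1 carries weaker class information

scores = mutual_info_classif(X, y, random_state=0)
ranked = np.argsort(scores)[::-1]
print(ranked[:2])            # the two informative features rank first
```

    Under the decomposability-order-1 assumption, this per-feature ranking is all the selection machinery required; higher orders would also score feature pairs, triples, and so on.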

  5. SIRTF Science Operations System Design

    NASA Technical Reports Server (NTRS)

    Green, William

    1999-01-01

    SIRTF Science Operations System Design William B. Green Manager, SIRTF Science Center California Institute of Technology M/S 310-6 1200 E. California Blvd., Pasadena CA 91125 (626) 395 8572 Fax (626) 568 0673 bgreen@ipac.caltech.edu. The Space Infrared Telescope Facility (SIRTF) will be launched in December 2001, and perform an extended series of science observations at wavelengths ranging from 20 to 160 microns for five years or more. The California Institute of Technology has been selected as the home for the SIRTF Science Center (SSC). The SSC will be responsible for evaluating and selecting observation proposals, providing technical support to the science community, performing mission planning and science observation scheduling activities, instrument calibration during operations and instrument health monitoring, production of archival quality data products, and management of science research grants. The science payload consists of three instruments delivered by instrument Principal Investigators located at University of Arizona, Cornell, and Harvard Smithsonian Astrophysical Observatory. The SSC is responsible for design, development, and operation of the Science Operations System (SOS) which will support the functions assigned to the SSC by NASA. The SIRTF spacecraft, mission profile, and science instrument design have undergone almost ten years of refinement. SIRTF development and operations activities are highly cost constrained. The cost constraints have impacted the design of the SOS in several ways. The Science Operations System has been designed to incorporate a set of efficient, easy to use tools which will make it possible for scientists to propose observation sequences in a rapid and automated manner. The use of highly automated tools for requesting observations will simplify the long range observatory scheduling process, and the short term scheduling of science observations. 
Pipeline data processing will be highly automated and data-driven, utilizing a variety of tools developed at JPL, the instrument development teams, and Space Telescope Science Institute to automate processing. An incremental ground data system development approach has been adopted, featuring periodic deliveries that are validated with the flight hardware throughout the various phases of system level development and testing. This approach minimizes development time and decreases operations risk. This paper will describe the top level architecture of the SOS and the basic design concepts. A summary of the incremental development approach will be presented. Examples of the unique science user tools now under final development prior to the first proposal call scheduled for mid-2000 will be shown.

  6. Recurring Slope Lineae (RSL) Observations Suggest Widespread Occurrence and Complex Behavior

    NASA Astrophysics Data System (ADS)

    Stillman, D. E.; Grimm, R. E.; Wagstaff, K.; Bue, B. D.; Michaels, T. I.

    2017-12-01

    RSL are described as narrow dark features that incrementally lengthen down steep slopes during warm seasons, fade in cold seasons, and recur annually. HiRISE observations from 5+ Mars years have allowed us to confirm 100 RSL sites and identify more than 600 candidate RSL sites. Detailed analysis of a few RSL sites has been performed using computer-assisted analysis. RSL occur in low-albedo (dust-poor) regions with a latitude range of 42.2°N to 53.1°S. They are densely clustered throughout Valles Marineris (VM), in the light-toned layered deposits of Margaritifer and SW Arabia Terrae, Cerberus Fossae, and well-preserved impact craters in Chryse and Acidalia Planitiae (CAP). RSL sites are also found at lower densities throughout the low-albedo highland terrains. RSL incrementally lengthen when their slopes are warm; thus the season in which RSL lengthen depends on latitude and slope orientation. While RSL occur on all slope orientations, there is a large bias toward W-facing and equator-facing slopes. During the RSL activity season, RSL lengthening does not appear to be constant: (1) CAP RSL initially lengthen quickly and slow their lengthening rate by about an order of magnitude as temperatures increase, (2) many VM RSL sites possess RSL that fade at the same time that neighboring RSL on the same slope incrementally lengthen, and (3) some RSL sites in the southern mid-latitudes show at least two pulses of RSL activity: during the southern fall and summer, RSL incrementally lengthen, fade, and then start incrementally lengthening again, followed by fading as temperatures cool. The correlation of RSL activity with surface temperature, the spectral detection of hydrated salts, and the quick fading all point to a wet formation mechanism. However, water sources remain problematic, as water budgets suggest a much greater amount of water than could be trapped from the atmosphere. Additionally, some RSL occur in locations where subsurface discharge via an aquifer would be challenging. 
Thus, our presentation will exhibit the complex behaviors of RSL and compare these behaviors to wet, dry, and hybrid formation mechanisms. Overall, a formation mechanism that is consistent with all the observations remains elusive.

  7. Open-label randomized clinical trial of atropine bolus injection versus incremental boluses plus infusion for organophosphate poisoning in Bangladesh.

    PubMed

    Abedin, Mohammed Joynal; Sayeed, Abdullah Abu; Basher, Ariful; Maude, Richard J; Hoque, Gofranul; Faiz, M A

    2012-06-01

    Severe organophosphate compound (OPC) poisoning is an important clinical problem in many countries of the world. Unfortunately, little clinical research has been performed and little evidence exists with which to determine the best therapy. A study was therefore undertaken to determine the optimal dosing regimen for atropine in the treatment of OPC poisoning. An open-label randomized clinical trial was conducted in Chittagong Medical College Hospital, Chittagong, Bangladesh, on 156 hospitalized individuals with OPC poisoning from June to September 2006. The aim was to compare the efficacy and safety of conventional bolus doses with individualized incremental doses of atropine for atropinization followed by continuous atropine infusion for management of OPC poisoning. Inclusion criteria were patients with a clear history of OPC poisoning with clear clinical signs of toxicity, i.e. features of cholinergic crisis. The patients were observed for at least 96 h. Immediate outcome and complications were recorded. Out of 156 patients, 81 patients received conventional bolus dose atropine (group A) and 75 patients received rapidly incremental doses of atropine followed by infusion (group B). The mortality in group 'A' was 22.5% (18/80) and in group 'B' 8% (6/75) (p < 0.05). The mean duration of atropinization in group 'A' was 151.74 min compared to 23.90 min for group 'B' (p < 0.001). More patients in group A experienced atropine toxicity than in group 'B' (28.4% versus 12.0%, p < 0.05); intermediate syndrome was more common in group 'A' than in group 'B' (13.6% versus 4%, p < 0.05), and respiratory support was required more often for patients in group 'A' than in group 'B' (24.7% versus 8%, p < 0.05). Rapid incremental dose atropinization followed by atropine infusion reduces mortality and morbidity from OPC poisoning and shortens the length of hospital stay and recovery. Incremental atropine and infusion should become the treatment of choice for OPC poisoning. 
Given the paucity of existing evidence, further clinical studies should be performed to determine the optimal dosing regimen of atropine that most rapidly and safely achieves atropinization in these patients.

  8. Effective traffic features selection algorithm for cyber-attacks samples

    NASA Astrophysics Data System (ADS)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to address the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, the variation in clustering performance after removing a given feature is calculated. Finally, the degree of distinctiveness of each feature is evaluated from this result; an effective feature is one whose degree of distinctiveness exceeds a set threshold. The purpose is to select the effective features from the extracted original feature set, reducing the dimensionality of the features and hence the space and time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has advantages over other selection algorithms.
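
    The leave-one-feature-out idea above might be sketched as follows. The clustering-quality metric (silhouette), the threshold, and the synthetic two-cluster data are our assumptions; the paper's exact distinctiveness measure is not reproduced here.

```python
# Distinctiveness of a feature = drop in clustering quality when it is removed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Two synthetic "traffic" clusters separated along feature 0; feature 1 is noise.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal((6, 0), 1, (100, 2))])

def cluster_quality(data):
    labels = KMeans(n_clusters=2, init="k-means++", n_init=10,
                    random_state=0).fit_predict(data)
    return silhouette_score(data, labels)

base = cluster_quality(X)
distinctiveness = [base - cluster_quality(np.delete(X, j, axis=1))
                   for j in range(X.shape[1])]
# Features whose removal degrades clustering the most are kept.
effective = [j for j, d in enumerate(distinctiveness) if d > 0.05]
print(effective)
```

    Removing the cluster-separating feature degrades the silhouette score, so only it survives the threshold; removing the noise feature leaves quality unchanged or improved.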

  9. Relevance popularity: A term event model based feature selection scheme for text classification.

    PubMed

    Feng, Guozhong; An, Baiguo; Yang, Fengqin; Wang, Han; Zhang, Libiao

    2017-01-01

    Feature selection is a practical approach for improving the performance of text classification methods by optimizing the feature subsets input to classifiers. In traditional feature selection methods such as information gain and chi-square, the number of documents that contain a particular term (i.e. the document frequency) is often used. However, the frequency of a given term appearing in each document has not been fully investigated, even though it is a promising feature to produce accurate classifications. In this paper, we propose a new feature selection scheme based on a term-event multinomial naive Bayes probabilistic model. According to the model assumptions, the matching score function, which is based on the prediction probability ratio, can be factorized. Finally, we derive a feature selection measurement for each term after replacing inner parameters by their estimators. On a benchmark English text dataset (20 Newsgroups) and a Chinese text dataset (MPH-20), numerical experiments with two widely used text classifiers (naive Bayes and support vector machine) demonstrate that our method outperforms representative feature selection methods.
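
    In a similar spirit (though not the paper's exact measurement), terms can be ranked by the class-conditional log-probability ratio from a term-event multinomial naive Bayes model, so that terms whose in-document frequencies differ most across classes score highest. The toy count matrix below is illustrative.

```python
# Rank terms by |log P(term|class 1) - log P(term|class 0)| under a
# multinomial naive Bayes model fitted on term counts.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Tiny docs x terms count matrix: term 0 is frequent only in class 1,
# term 2 only in class 0, term 1 appears evenly in both.
X = np.array([[5, 1, 1], [4, 1, 2], [0, 1, 5], [1, 1, 4]])
y = np.array([1, 1, 0, 0])

nb = MultinomialNB().fit(X, y)
# feature_log_prob_ holds log P(term | class) with Laplace smoothing.
score = np.abs(nb.feature_log_prob_[1] - nb.feature_log_prob_[0])
ranking = np.argsort(score)[::-1]
print(ranking)               # class-skewed terms outrank the neutral one
```

    Because the multinomial model uses within-document term frequencies, not just document frequencies, evenly distributed terms fall to the bottom of the ranking.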

  10. Hybrid feature selection for supporting lightweight intrusion detection systems

    NASA Astrophysics Data System (ADS)

    Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin

    2017-08-01

    Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The selected set of features from the previous phase is further refined in the second phase in a wrapper manner, in which a Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on NSL-KDD datasets show that our approach results in higher detection accuracy as well as faster training and testing processes.
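
    The two-phase filter-then-wrapper scheme can be sketched roughly as below; the subset sizes and the use of RF importances as the wrapper criterion are our simplifications of the paper's procedure, and the data is synthetic rather than NSL-KDD.

```python
# Phase 1: chi-square filter; Phase 2: Random-Forest-guided refinement.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2

X, y = make_classification(n_samples=400, n_features=20, n_informative=4,
                           n_redundant=0, shuffle=False, random_state=0)
X = X - X.min()                      # chi2 requires non-negative features

# Phase 1: filter down to a preliminary candidate subset.
filt = SelectKBest(chi2, k=10).fit(X, y)
idx = np.flatnonzero(filt.get_support())

# Phase 2: wrapper-style refinement guided by Random Forest importances.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, idx], y)
keep = idx[np.argsort(rf.feature_importances_)[::-1][:4]]
print(sorted(keep.tolist()))
```

    The cheap filter shrinks the search space before the expensive model-based step, which is the source of the "lightweight" training and testing the abstract reports.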

  11. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, and sleep disorders. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least square support vector machine (LS_SVM) classifier to classify the EEG signals, so that the classifier operates on the features extracted by the SRS and selected by the SFS. The experimental results show that the method achieves 99.90, 99.80 and 100 % for classification accuracy, sensitivity and specificity, respectively.
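
    The three-stage pipeline might be sketched as follows on synthetic signals. We substitute an ordinary SVM for the paper's LS_SVM, and the particular summary statistics computed from the random samples are our assumptions.

```python
# SRS (random time-domain sampling) -> SFS (forward selection) -> SVM.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def srs_features(signal, n_samples=50):
    """Simple random sampling: summarize randomly drawn time points."""
    pick = rng.choice(signal.size, size=n_samples, replace=False)
    s = signal[pick]
    return [s.mean(), s.std(), s.min(), s.max(), np.median(s)]

# Synthetic two-class "EEG" epochs: class-1 epochs have higher variance.
signals = [rng.normal(0, 1 + cls, 256) for cls in (0, 1) for _ in range(40)]
y = np.array([0] * 40 + [1] * 40)
X = np.array([srs_features(s) for s in signals])

# Sequential forward selection keeps the two most useful summary features.
sfs = SequentialFeatureSelector(SVC(), n_features_to_select=2).fit(X, y)
acc = cross_val_score(SVC(), X[:, sfs.get_support()], y, cv=5).mean()
print(round(acc, 2))
```

    Sampling the time domain keeps feature extraction cheap, and SFS then trims the summary statistics to the few that actually discriminate the classes.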

  12. Joint L2,1 Norm and Fisher Discrimination Constrained Feature Selection for Rational Synthesis of Microporous Aluminophosphates.

    PubMed

    Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong

    2017-04-01

    Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method by joint l2,1-norm and Fisher discrimination constraints (JNFDC). In order to obtain more effective feature subset, the proposed method can be achieved in two steps. The first step is to rank the features according to sparse and discriminative constraints. The second step is to establish predictive model with the ranked features, and select the most significant features in the light of the contribution of improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work which employs the sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that our proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC can obtain better predictive performances than some other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
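
    The second JNFDC step, walking down a precomputed feature ranking and keeping a feature only when it improves predictive accuracy, can be sketched on its own. The ranking used here (absolute correlation with the label) is a stand-in for the paper's l2,1/Fisher-constrained scores, and the data is synthetic.

```python
# Greedy forward pass over a feature ranking, keeping accuracy-improving features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=300) > 0).astype(int)

# Stand-in step 1: rank features by absolute correlation with the label.
rank = np.argsort([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(6)])[::-1]

# Step 2: keep a ranked feature only if it raises cross-validated accuracy.
selected, best = [], 0.0
for j in rank:
    trial = selected + [j]
    acc = cross_val_score(LogisticRegression(), X[:, trial], y, cv=5).mean()
    if acc > best:
        selected, best = trial, acc
print(selected, round(best, 2))
```

    The result is a small subset chosen for its contribution to predictive accuracy, not merely for its individual score.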

  13. The Capillary Flow Experiments Aboard the International Space Station: Increments 9-15

    NASA Technical Reports Server (NTRS)

    Jenson, Ryan M.; Weislogel, Mark M.; Tavan, Noel T.; Chen, Yongkang; Semerjian, Ben; Bunnell, Charles T.; Collicott, Steven H.; Klatte, Jorg; Dreyer, Michael E.

    2009-01-01

    This report provides a summary of the experimental, analytical, and numerical results of the Capillary Flow Experiment (CFE) performed aboard the International Space Station (ISS). The experiments were conducted in space from Increment 9 through Increment 16, beginning August 2004 and ending December 2007. Both primary and extra science experiments were conducted during 19 operations performed by 7 astronauts: M. Fincke, W. McArthur, J. Williams, S. Williams, M. Lopez-Alegria, C. Anderson, and P. Whitson. CFE consists of six handheld experiment units of approximately 1 to 2 kg each, designed to investigate a selection of capillary phenomena of fundamental and applied importance, such as large length scale contact line dynamics (CFE-Contact Line), critical wetting in discontinuous structures (CFE-Vane Gap), and capillary flows and passive phase separations in complex containers (CFE-Interior Corner Flow). Highly quantitative video from the simply performed flight experiments provides data helpful in benchmarking numerical methods, confirming theoretical models, and guiding new model development. In an extensive executive summary, a brief history of the experiment is reviewed before introducing the science investigated. A selection of experimental results and comparisons with both analytic and numerical predictions is given. The subsequent chapters provide additional details of the experimental and analytical methods developed and employed. These include current presentations of the state of the data reduction, which we anticipate will continue throughout the year and culminate in several more publications. An extensive appendix provides support material such as an experiment history, dissemination items to date (CFE publications, etc.), detailed design drawings, and crew procedures. 
Despite the simple nature of the experiments and procedures, many of the experimental results may be practically employed to enhance the design of spacecraft engineering systems involving capillary interface dynamics.

  14. Cost-effectiveness of adding granulocyte colony-stimulating factor to primary prophylaxis with antibiotics in small-cell lung cancer.

    PubMed

    Timmer-Bonte, Johanna N H; Adang, Eddy M M; Smit, Hans J M; Biesma, Bonne; Wilschut, Frank A; Bootsma, Gerben P; de Boo, Theo M; Tjan-Heijnen, Vivianne C G

    2006-07-01

    Recently, a Dutch, randomized, phase III trial demonstrated that, in small-cell lung cancer patients at risk of chemotherapy-induced febrile neutropenia (FN), the addition of granulocyte colony-stimulating factor (GCSF) to prophylactic antibiotics significantly reduced the incidence of FN in cycle 1 (24% v 10%; P = .01). We hypothesized that selecting patients at risk of FN might increase the cost-effectiveness of GCSF prophylaxis. Economic analysis was conducted alongside the clinical trial and was focused on the health care perspective. Primary outcome was the difference in mean total costs per patient in cycle 1 between both prophylactic strategies. Cost-effectiveness was expressed as costs per percent-FN-prevented. For the first cycle, the mean incremental costs of adding GCSF amounted to 681 euro (95% CI, -36 to 1,397 euro) per patient. For the entire treatment period, the mean incremental costs were substantial (5,123 euro; 95% CI, 3,908 to 6,337 euro), despite a significant reduction in the incidence of FN and related savings in medical care consumption. The incremental cost-effectiveness ratio was 50 euro per percent decrease of the probability of FN (95% CI, -2 to 433 euro) in cycle 1, and the acceptability for this willingness to pay was approximately 50%. Despite the selection of patients at risk of FN, the addition of GCSF to primary antibiotic prophylaxis did not result in cost savings. If policy makers are willing to pay 240 euro for each percent gain in effect (ie, 3,360 euro for a 14% reduction in FN), the addition of GCSF can be considered cost effective.
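
    The reported incremental cost-effectiveness ratio can be reproduced from the cycle-1 numbers in the abstract: 681 euro of extra cost per patient for a 14-percentage-point reduction in the probability of febrile neutropenia (24% to 10%).

```python
# ICER = incremental cost / incremental effect, both taken from the abstract.
incremental_cost = 681.0        # euro per patient, cycle 1
risk_reduction = 24 - 10        # percentage points of FN prevented
icer = incremental_cost / risk_reduction
print(round(icer))              # ~49 euro per percent, matching the reported ~50
```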

  15. Investigating a memory-based account of negative priming: support for selection-feature mismatch.

    PubMed

    MacDonald, P A; Joordens, S

    2000-08-01

    Using typical and modified negative priming tasks, the selection-feature mismatch account of negative priming was tested. In the modified task, participants performed selections on the basis of a semantic feature (e.g., referent size). This procedure has been shown to enhance negative priming (P. A. MacDonald, S. Joordens, & K. N. Seergobin, 1999). Across 3 experiments, negative priming occurred only when the repeated item mismatched in terms of the feature used as the basis for selections. When the repeated item was congruent on the selection feature across the prime and probe displays, positive priming arose. This pattern of results appeared in both the ignored- and the attended-repetition conditions. Negative priming does not result from previously ignoring an item. These findings strongly support the selection-feature mismatch account of negative priming and refute both the distractor inhibition and the episodic-retrieval explanations.

  16. Incremental Lexical Learning in Speech Production: A Computational Model and Empirical Evaluation

    ERIC Educational Resources Information Center

    Oppenheim, Gary Michael

    2011-01-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…

  17. The cosmic infrared background at 1.25 and 2.2 microns using DIRBE and 2MASS: a contribution not due to galaxies?

    NASA Technical Reports Server (NTRS)

    Cambresy, L.; Reach, W.; Beichman, C.; Jarrett, T.

    2001-01-01

    Using the 2MASS second incremental data release and the zodiacal-subtracted mission average maps of COBE/DIRBE, the authors estimate the cosmic background in the J (1.25 μm) and K (2.2 μm) bands using selected areas representing 550 deg² of sky.

  18. Assembly of greek marble inscriptions by isotopic methods.

    PubMed

    Herz, N; Wenner, D B

    1978-03-10

    Classical Greek inscriptions cut in marble, whose association as original stelai by archeological methods was debatable, were selected for study. Using traditional geological techniques and determinations of the per mil increments in carbon-13 and oxygen-18, it was determined that fragments could be positively assigned to three stelai, but that fragments from three other stelai had been incorrectly associated.

  19. Growth classification systems for red fir and white fir in northern California

    Treesearch

    George T. Ferrell

    1983-01-01

    Selected crown and bole characteristics were predictor variables in growth classification equations developed for California red fir, Shasta red fir, and white fir in northern California. Individual firs were classified on the basis of percent basal area increment (PCTBAI) as Class 1 (≤ 1 pct), Class 2 (> 1 pct and ≤ 3 pct), or Class 3 (> 3...

  20. Adaptation of FIBER for Forest Inventory and Analysis growth projections in the State of Maine

    Treesearch

    Dale S. Solomon; Thomas B. Brann; Lawrence E. Caldwell

    2000-01-01

    The model FIBER (Forest Increment Based on Ecological Rationale) was selected by the Technical Review Team on the Sustainability of Timber Supplies in the State of Maine to construct yield information using USDA Forest Service Forest Inventory and Analysis (FIA) field data. FIBER habitat criteria based on soil characteristics, species composition of overstory, and...

  1. Prediction of practical performance in preclinical laboratory courses - the return of wire bending for admission of dental students in Hamburg.

    PubMed

    Kothe, Christian; Hissbach, Johanna; Hampe, Wolfgang

    2014-01-01

    Although some recent studies concluded that dexterity is not a reliable predictor of performance in preclinical laboratory courses in dentistry, they could not disprove earlier findings which confirmed the worth of manual dexterity tests in dental admission. We developed a wire bending test (HAM-Man) which was administered during dental freshmen's first week in 2008, 2009, and 2010. The purpose of our study was to evaluate if the HAM-Man is a useful selection criterion additional to the high school grade point average (GPA) in dental admission. Regression analysis revealed that GPA only accounted for a maximum of 9% of students' performance in preclinical laboratory courses, in six out of eight models the explained variance was below 2%. The HAM-Man incrementally explained up to 20.5% of preclinical practical performance over GPA. In line with findings from earlier studies the HAM-Man test of manual dexterity showed satisfactory incremental validity. While GPA has a focus on cognitive abilities, the HAM-Man reflects learning of unfamiliar psychomotor skills, spatial relationships, and dental techniques needed in preclinical laboratory courses. The wire bending test HAM-Man is a valuable additional selection instrument for applicants of dental schools.

  2. A SOA broker solution for standard discovery and access services: the GI-cat framework

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico

    2010-05-01

    GI-cat's ideal users are data providers or service providers within the geoscience community. The former have their data already available through an access service (e.g. an OGC Web Service) and would have it published through a standard catalog service, in a seamless way. The latter would develop a catalog broker and let users query and access different geospatial resources through one or more standard interfaces and Application Profiles (AP) (e.g. OGC CSW ISO AP, CSW ebRIM/EO AP, etc.). GI-cat implements a broker component (i.e. a middleware service) that carries out distribution and mediation functionalities among well-adopted catalog interfaces and data access protocols. GI-cat also publishes different discovery interfaces: the OGC CSW ISO and ebRIM Application Profiles (the latter coming with support for the EO and CIM extension packages) and two different OpenSearch interfaces developed in order to explore Web 2.0 possibilities. An extended interface is also available to exploit all available GI-cat features, such as interruptible incremental queries and query feedback. Interoperability tests performed in the context of different projects have also pointed out the importance of ensuring compatibility with existing and widespread tools of the open source community (e.g. GeoNetwork and Deegree catalogs), which was then achieved. Based on a service-oriented framework of modular components, GI-cat can effectively be customized and tailored to support different deployment scenarios. In addition to the distribution functionality, a harvesting approach has recently been tested, allowing the user to switch between distributed and local search and thus supporting a wider range of deployment scenarios. A configurator tool enables effective high-level configuration of the broker service. A dedicated geobrowser was also developed to demonstrate the advanced GI-cat functionalities. 
    This client, called GI-go, is an example of the possible applications that may be built on top of the GI-cat broker component. GI-go allows discovering and browsing of the available datasets, retrieving and evaluating their descriptions and performing distributed queries according to any combination of the following criteria: geographic area, temporal interval, topic of interest (free-text and/or keyword selection are allowed) and data source (i.e. where, when, what, who). The result set of a query (e.g. dataset metadata) is then displayed in an incremental way, leveraging the asynchronous interaction approach implemented by GI-cat. This feature allows the user to access the intermediate query results. Query interruption and feedback features are also provided to the user. Alternatively, the user may perform a browsing task by selecting a catalog resource from the current configuration and navigating through its aggregated and/or leaf datasets. In both cases dataset metadata, expressed according to ISO 19139 (and also Dublin Core and ebRIM if available), are displayed for download, along with a resource portrayal and actual data access (when this is meaningful and possible). The GI-cat distributed catalog service has been successfully deployed and tested in the framework of different projects and initiatives, including the SeaDataNet FP6 project, the GEOSS IP3 (Interoperability Process Pilot Project), GEOSS AIP-2 (Architectural Implementation Project - Phase 2), FP7 GENESI-DR, CNR GIIDA, FP7 EUROGEOSS and the ESA HMA project.

  3. Ordinal feature selection for iris and palmprint recognition.

    PubMed

    Sun, Zhenan; Wang, Libin; Tan, Tieniu

    2014-09-01

    Ordinal measures have been demonstrated to be an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, and orientation, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., the misclassification error of intra- and interclass matching samples and the weighted sparsity of ordinal feature descriptors. The feature selection therefore aims to achieve an accurate and sparse representation of ordinal measures. Moreover, the optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs be well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be obtained efficiently even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso for biometric recognition, reporting state-of-the-art accuracy on the CASIA and PolyU databases.
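    The LP structure described in this abstract can be illustrated with a toy sketch (hypothetical distances, margin, and weights chosen for illustration; not the authors' actual CASIA/PolyU setup): nonnegative feature weights and a slack variable are found by minimizing a weighted-sparsity term plus a slack penalty, subject to margin constraints that require interclass matching distances to exceed intraclass ones.

```python
import numpy as np
from scipy.optimize import linprog

# Toy margin constraints: each row of D holds, per candidate ordinal feature,
# (interclass distance - intraclass distance) for one pair of matching samples.
# Feature 0 separates the classes; feature 1 carries almost no signal.
D = np.array([[2.0, 0.1],
              [1.5, 0.0]])
margin = 1.0
C = 10.0  # penalty on the slack variable

# Variables: [w_0, w_1, xi]; minimize sparsity (sum of w) + C * xi
c = np.array([1.0, 1.0, C])
# Constraints: D @ w + xi >= margin  ->  -D @ w - xi <= -margin
A_ub = np.hstack([-D, -np.ones((D.shape[0], 1))])
b_ub = -margin * np.ones(D.shape[0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds keep all variables >= 0
w = res.x[:2]
print(w)  # the informative feature receives all the weight
```

In the real formulation one constraint is generated per matching pair, and the sparsity weights differ per descriptor; the solver mechanics are the same.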

  4. Prospects for Breakthrough Propulsion From Physics

    NASA Technical Reports Server (NTRS)

    Millis, Marc G.

    2004-01-01

    "Space drives", "warp drives", and "wormholes": these concepts may sound like science fiction, but they are being written about in reputable journals. To assess the implications of these emerging prospects for future spaceflight, NASA supported the Breakthrough Propulsion Physics Project from 1996 through 2002. The Project addressed three grand challenges: (1) discover propulsion that eliminates the need for propellant; (2) discover methods to achieve hyper-fast travel; and (3) discover breakthrough methods to power spacecraft. Because these challenges are presumably far from fruition, and perhaps even impossible, special emphasis was placed on selecting incremental and affordable research that addresses the critical issues behind them. Of the 16 incremental research tasks completed by the Project and other sponsors, about a third were found not to be viable, a quarter have clear opportunities for sequels, and the rest remain unresolved.

  5. Stratified cost-utility analysis of C-Leg versus mechanical knees: Findings from an Italian sample of transfemoral amputees.

    PubMed

    Cutti, Andrea Giovanni; Lettieri, Emanuele; Del Maestro, Martina; Radaelli, Giovanni; Luchetti, Martina; Verni, Gennero; Masella, Cristina

    2017-06-01

    The fitting rate of the C-Leg electronic knee (Otto-Bock, D) has increased steadily over the last 15 years. Current cost-utility studies, however, have not considered patients' characteristics. The aim was to complete a cost-utility analysis involving C-Leg and mechanical knee users, with "age at the time of enrollment," "age at the time of first prosthesis," and "experience with the current type of prosthesis" assumed as non-nested stratification parameters. Retrospective cohort study. In all, 70 C-Leg and 57 mechanical knee users were selected. For each stratification criterion, we evaluated the cost-utility of C-Leg versus mechanical knees by computing the incremental cost-utility ratio, that is, the ratio of the "difference in cost" to the "difference in utility" of the two technologies. Cost consisted of acquisition, maintenance, transportation, and lodging expenses. Utility was measured in terms of quality-adjusted life years, computed on the basis of participants' answers to the EQ-5D questionnaire. Patients over 40 years at the time of first prosthesis were the only group featuring an incremental cost-utility ratio (88,779 €/quality-adjusted life year) above the National Institute for Health and Care Excellence practical cost-utility threshold (54,120 €/quality-adjusted life year): C-Leg users experience a significant improvement in "mobility," but limited outcomes on "usual activities," "self-care," "depression/anxiety," and reduction of "pain/discomfort." The stratified cost-utility results have relevant clinical implications and provide useful information for practitioners in tailoring interventions. Clinical relevance A cost-utility analysis that considered patients' characteristics provided insights on the "affordability" of C-Leg compared to mechanical knees. 
In particular, results suggest that C-Leg has a significant impact on "mobility" for first-time prosthetic users over 40 years, but implementation of specific low-cost physical/psychosocial interventions is required to return within cost-utility thresholds.
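    The incremental cost-utility ratio used throughout this study is a simple quotient of cost and utility differences. A minimal sketch with hypothetical figures (not the study's data):

```python
def icur(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-utility ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: the newer device costs 26,000 EUR more in total
# and yields 0.3 extra quality-adjusted life years.
ratio = icur(40_000, 14_000, 1.8, 1.5)
print(round(ratio))  # EUR per QALY gained, compared against a threshold
```

A technology is then judged against a threshold such as the NICE figure cited above: if the ratio exceeds the threshold, the extra utility is considered too expensive.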

  6. Alcohol and the risk for latent autoimmune diabetes in adults: results based on Swedish ESTRID study.

    PubMed

    Rasouli, Bahareh; Andersson, Tomas; Carlsson, Per-Ola; Dorkhan, Mozhgan; Grill, Valdemar; Groop, Leif; Martinell, Mats; Tuomi, Tiinamaja; Carlsson, Sofia

    2014-11-01

    Moderate alcohol consumption is associated with a reduced risk of type 2 diabetes. Our aim was to investigate whether alcohol consumption is associated with the risk of latent autoimmune diabetes in adults (LADA), an autoimmune form of diabetes with features of type 2 diabetes. A population-based case-control study was carried out to investigate the association between alcohol consumption and the risk of LADA. We used data from the ESTRID case-control study carried out between 2010 and 2013, including 250 incident cases of LADA (glutamic acid decarboxylase antibody (GADA) positive) and 764 cases of type 2 diabetes (GADA negative), and 1012 randomly selected controls aged ≥35 years. Logistic regression was used to estimate the odds ratios (ORs) of diabetes in relation to alcohol intake, adjusted for age, sex, BMI, family history of diabetes, smoking, and education. Alcohol consumption was inversely associated with the risk of type 2 diabetes (OR 0.95, 95% CI 0.92-0.99 for every 5-g increment in daily intake). Similar results were observed for LADA, but stratification by median GADA levels revealed that the results pertained only to LADA with low GADA levels (OR 0.85, 95% CI 0.76-0.94 per 5 g of alcohol per day), whereas no association was observed for LADA with high GADA levels (OR 1.00, 95% CI 0.94-1.06 per 5 g per day). Every 5-g increment of daily alcohol intake was associated with a 10% increase in GADA levels (P=0.0312) and a 10% reduction in the homeostasis model assessment of insulin resistance (P=0.0418). Our findings indicate that alcohol intake may reduce the risk of type 2 diabetes and of type 2-like LADA, but has no beneficial effect on diabetes-related autoimmunity. © 2014 The authors.
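    The per-increment odds ratios reported above come from rescaling a logistic regression coefficient to a 5-g step. A sketch of that conversion (the per-gram coefficient here is hypothetical, chosen only to reproduce an OR of about 0.95):

```python
import math

def odds_ratio_per_increment(beta_per_unit, increment):
    # In logistic regression, the OR for an `increment`-unit change in a
    # covariate equals exp(beta * increment).
    return math.exp(beta_per_unit * increment)

# Hypothetical fitted coefficient: -0.0103 per gram of daily alcohol intake
or_5g = odds_ratio_per_increment(-0.0103, 5)
print(round(or_5g, 2))
```

The same rescaling applies to the confidence-interval endpoints, which is why the intervals above are also quoted "per 5 g per day".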

  7. Economic indicators selection for crime rates forecasting using cooperative feature selection

    NASA Astrophysics Data System (ADS)

    Alwee, Razana; Shamsuddin, Siti Mariyam Hj; Salleh Sallehuddin, Roselina

    2013-04-01

    Feature selection in multivariate forecasting models is very important to ensure model accuracy. The purpose of this study is to apply the Cooperative Feature Selection method to select features: economic indicators to be used in a crime rate forecasting model. Cooperative Feature Selection combines grey relational analysis and an artificial neural network to establish a cooperative model that can rank and select the significant economic indicators. Grey relational analysis is used to select the best data series to represent each economic indicator and to rank the economic indicators according to their importance to the crime rate. The artificial neural network is then used to select the significant economic indicators for forecasting the crime rates. In this study, we used the unemployment rate, consumer price index, gross domestic product and consumer sentiment index as economic indicators, together with property crime and violent crime rates for the United States. A Levenberg-Marquardt neural network is used. From our experiments, we found that the consumer price index is an important economic indicator with a significant influence on the violent crime rate, while for the property crime rate the gross domestic product, unemployment rate and consumer price index are the influential indicators. Cooperative Feature Selection is also found to produce smaller errors than Multiple Linear Regression in forecasting property and violent crime rates.
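    Grey relational analysis, the ranking stage described above, scores how closely each candidate series tracks a reference series. A pure-Python sketch with toy series and the conventional distinguishing coefficient ρ = 0.5 (not the study's actual indicator data):

```python
def grey_relational_grades(ref, series, rho=0.5):
    # absolute deltas of every comparison series against the reference series
    deltas = [[abs(r - c) for r, c in zip(ref, s)] for s in series]
    gmin = min(min(d) for d in deltas)   # global minimum delta
    gmax = max(max(d) for d in deltas)   # global maximum delta
    grades = []
    for d in deltas:
        # grey relational coefficient at each point, then the grade = mean
        coeffs = [(gmin + rho * gmax) / (di + rho * gmax) for di in d]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Toy example: series 0 tracks the reference closely, series 1 does not,
# so series 0 should receive the higher grade (closer to 1).
grades = grey_relational_grades([1, 2, 3, 4],
                                [[1.1, 2.1, 2.9, 4.0], [3, 1, 4, 2]])
print(grades)
```

Indicators would then be ranked by grade before the neural-network stage refines the selection.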

  8. Feature Selection for Ridge Regression with Provable Guarantees.

    PubMed

    Paul, Saurabh; Drineas, Petros

    2016-04-01

    We introduce single-set spectral sparsification as a deterministic sampling-based feature selection technique for regularized least-squares classification, which is the classification analog to ridge regression. The method is unsupervised and gives worst-case guarantees on the generalization power of the classification function after feature selection with respect to the classification function obtained using all features. We also introduce leverage-score sampling as an unsupervised randomized feature selection method for ridge regression. We provide risk bounds for both single-set spectral sparsification and leverage-score sampling on ridge regression in the fixed design setting and show that the risk in the sampled space is comparable to the risk in the full-feature space. We perform experiments on synthetic data sets and on real-world data sets (a subset of the TechTC-300 collection) to support our theory. Experimental results indicate that the proposed methods perform better than existing feature selection methods.
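    Leverage-score sampling, the randomized method mentioned above, scores each feature (column) by its squared row norm in the top-k right singular subspace of the data matrix. A minimal NumPy sketch on a random toy matrix (not the paper's TechTC-300 setup):

```python
import numpy as np

def column_leverage_scores(X, k):
    # thin SVD; the rows of Vt are the right singular vectors of X
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V_k = Vt[:k].T                 # d x k: coordinates in the top-k subspace
    return (V_k ** 2).sum(axis=1)  # leverage score of each of the d columns

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
scores = column_leverage_scores(X, k=3)
# Features are then sampled with probability proportional to scores / k.
print(scores)
```

By orthonormality of the singular vectors the scores sum to k and each lies in [0, 1], which is what makes them usable as sampling probabilities.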

  9. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency, and one potential drawback of these methods is that they treat features separately. We therefore first design a similarity measure between context information that takes word co-occurrences and phrase chunks around the features into account. We then introduce this context similarity into the importance measure of the features, in place of document and term frequency, and thus propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
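    A common way to compare context information of two features is the cosine similarity of their co-occurrence vectors; the sketch below uses hypothetical count vectors (the paper's exact similarity measure may differ in how contexts are built):

```python
from math import sqrt

def cosine_similarity(u, v):
    # cosine of the angle between two context (co-occurrence) count vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical context-count vectors for two candidate features; each
# dimension could be a co-occurring word or phrase chunk.
sim = cosine_similarity([3, 0, 1, 2], [2, 0, 1, 3])
print(round(sim, 3))
```

Features whose contexts are similar to those of known discriminative features can then be weighted up, instead of being scored purely by raw frequency.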

  10. Feature Selection Using Information Gain for Improved Structural-Based Alert Correlation

    PubMed Central

    Siraj, Maheyzah Md; Zainal, Anazida; Elshoush, Huwaida Tagelsir; Elhaj, Fatin

    2016-01-01

    Grouping and clustering alerts for intrusion detection based on the similarity of features is referred to as structural-based alert correlation and can discover a list of attack steps. Previous researchers selected different features and data sources manually based on their knowledge and experience, which leads to less accurate identification of attack steps and inconsistent clustering accuracy. Furthermore, existing alert correlation systems deal with a huge amount of data containing null values, incomplete information, and irrelevant features, making the analysis of the alerts tedious, time-consuming and error-prone. This paper therefore focuses on selecting accurate and significant alert features that are appropriate to represent the attack steps, thus enhancing the structural-based alert correlation model. A two-tier feature selection method is proposed to obtain the significant features. The first tier ranks the subset of features by information gain entropy in decreasing order. The second tier extends this subset with additional features that have a better discriminative ability than the initially ranked features. Performance analysis shows the significance of the selected features in terms of clustering accuracy on the 2000 DARPA intrusion detection scenario-specific dataset. PMID:27893821
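    The first tier's ranking criterion, information gain, can be sketched in pure Python for categorical alert features (toy feature values and labels, not the DARPA data):

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of a label sequence
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    # IG(feature) = H(labels) - H(labels | feature)
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# A feature that perfectly predicts the class gets the maximum gain;
# a feature independent of the class gets zero.
print(info_gain(['a', 'a', 'b', 'b'], [0, 0, 1, 1]))
print(info_gain(['a', 'b', 'a', 'b'], [0, 0, 1, 1]))
```

Ranking all candidate alert features by this score in decreasing order reproduces the first tier; the second tier then adds features on discriminative ability.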

  11. A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.

    PubMed

    Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho

    2014-10-01

    Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is its high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods have had one shortcoming thus far: they only consider the case where the feature-to-class relationship is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs tend to be ranked low by traditional feature selection methods and are removed most of the time. Given the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct cancer classifiers. Then, we used the Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs for cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy than using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate that the m:n feature subset reveals the positive effect of low-ranking miRNAs in cancer classification.

  12. Artificial bee colony algorithm for single-trial electroencephalogram analysis.

    PubMed

    Hsu, Wei-Yen; Hu, Ya-Ping

    2015-04-01

    In this study, we propose an analysis system combined with feature selection to further improve the classification accuracy of single-trial electroencephalogram (EEG) data. Acquiring event-related brain potential data from the sensorimotor cortices, the system comprises artifact and background noise removal, feature extraction, feature selection, and feature classification. First, the artifacts and background noise are removed automatically by means of independent component analysis and a surface Laplacian filter, respectively. Several potential features, such as band power, autoregressive model coefficients, and coherence and phase-locking values, are then extracted for subsequent classification. Next, the artificial bee colony (ABC) algorithm is used to select features from the aforementioned feature combination. Finally, the selected subfeatures are classified by a support vector machine. Compared with processing without artifact removal and with feature selection by a genetic algorithm, on single-trial EEG data from six subjects, the results indicate that the proposed system is promising and suitable for brain-computer interface applications. © EEG and Clinical Neuroscience Society (ECNS) 2014.

  13. Application of quantum-behaved particle swarm optimization to motor imagery EEG classification.

    PubMed

    Hsu, Wei-Yen

    2013-12-01

    In this study, we propose a recognition system for single-trial analysis of motor imagery (MI) electroencephalogram (EEG) data. Applying event-related brain potential (ERP) data acquired from the sensorimotor cortices, the system chiefly consists of automatic artifact elimination, feature extraction, feature selection and classification. In addition to the use of independent component analysis, a similarity measure is proposed to further remove electrooculographic (EOG) artifacts automatically. Several potential features, such as wavelet-fractal features, are then extracted for subsequent classification. Next, quantum-behaved particle swarm optimization (QPSO) is used to select features from the feature combination. Finally, the selected sub-features are classified by a support vector machine (SVM). Compared with processing without artifact elimination, with feature selection by a genetic algorithm (GA), and with feature classification by Fisher's linear discriminant (FLD), on MI data from two data sets for eight subjects, the results indicate that the proposed method is promising for brain-computer interface (BCI) applications.

  14. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    PubMed Central

    Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the location of the external markers; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum location of external markers, in combination with the "Genetic" and "Ranker" searching procedures. The performance of these algorithms was evaluated using the four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) used as the prediction model was taken as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface was divided into nine smaller segments and the predefined tumor motions were predicted by ANFIS using the external motion data of the markers in each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately: each tumor motion was predicted using the motion data of the external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results showed that all proposed feature selection algorithms have the same performance accuracy for lung tumors, but for liver tumors a correlation-based feature selection algorithm, in combination with a genetic search algorithm, yielded the best accuracy for selecting optimum markers. PACS numbers: 87.55.km, 87.56.Fc PMID:26894358

  15. Optimum location of external markers using feature selection algorithms for real-time tumor tracking in external-beam radiotherapy: a virtual phantom study.

    PubMed

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-08

    In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the location of the external markers; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum location of external markers, in combination with the "Genetic" and "Ranker" searching procedures. The performance of these algorithms was evaluated using the four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) used as the prediction model was taken as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface was divided into nine smaller segments and the predefined tumor motions were predicted by ANFIS using the external motion data of the markers in each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately: each tumor motion was predicted using the motion data of the external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results showed that all proposed feature selection algorithms have the same performance accuracy for lung tumors, but for liver tumors a correlation-based feature selection algorithm, in combination with a genetic search algorithm, yielded the best accuracy for selecting optimum markers.
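    The metric used above to score candidate marker segments is the root mean square error between predicted and observed tumor positions; a sketch with hypothetical 1-D positions (not the phantom data):

```python
from math import sqrt

def rmse(predicted, actual):
    # root mean square error between predicted and observed positions
    n = len(actual)
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Hypothetical tumor positions (mm) from an ANFIS-style predictor
err = rmse([10.1, 12.3, 9.8, 11.0], [10.0, 12.0, 10.0, 11.4])
print(round(err, 3))
```

Each surface segment gets such a score, and the segments with minimum RMSE are the candidate marker locations the feature selection algorithms should recover.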

  16. Relating DSM-5 section III personality traits to section II personality disorder diagnoses.

    PubMed

    Morey, L C; Benson, K T; Skodol, A E

    2016-02-01

    The DSM-5 Personality and Personality Disorders Work Group formulated a hybrid dimensional/categorical model that represented personality disorders as combinations of core impairments in personality functioning with specific configurations of problematic personality traits. Specific clusters of traits were selected to serve as indicators for six DSM categorical diagnoses to be retained in this system - antisocial, avoidant, borderline, narcissistic, obsessive-compulsive and schizotypal personality disorders. The goal of the current study was to describe the empirical relationships between the DSM-5 section III pathological traits and DSM-IV/DSM-5 section II personality disorder diagnoses. Data were obtained from a sample of 337 clinicians, each of whom rated one of his or her patients on all aspects of the DSM-IV and DSM-5 proposed alternative model. Regression models were constructed to examine trait-disorder relationships, and the incremental validity of core personality dysfunctions (i.e. criterion A features for each disorder) was examined in combination with the specified trait clusters. Findings suggested that the trait assignments specified by the Work Group tended to be substantially associated with corresponding DSM-IV concepts, and the criterion A features provided additional diagnostic information in all but one instance. Although the DSM-5 section III alternative model provided a substantially different taxonomic structure for personality disorders, the associations between this new approach and the traditional personality disorder concepts in DSM-5 section II make it possible to render traditional personality disorder concepts using alternative model traits in combination with core impairments in personality functioning.

  17. Automatic migraine classification via feature selection committee and machine learning techniques over imaging and questionnaire data.

    PubMed

    Garcia-Chimeno, Yolanda; Garcia-Zapirain, Begonya; Gomez-Beldarrain, Marian; Fernandez-Ruanova, Begonya; Garcia-Monco, Juan Carlos

    2017-04-13

    Feature selection methods are commonly used to identify subsets of relevant features to facilitate the construction of models for classification, yet little is known about how feature selection methods perform on diffusion tensor images (DTIs). In this study, feature selection and machine learning classification methods were tested for the purpose of automating the diagnosis of migraine using both DTIs and questionnaire answers related to emotion and cognition - factors that influence pain perception. We selected 52 adult subjects for the study, divided into three groups: a control group (15), subjects with sporadic migraine (19) and subjects with chronic migraine and medication overuse (18). These subjects underwent magnetic resonance imaging with diffusion tensor imaging to assess the white matter pathway integrity of the regions of interest involved in pain and emotion. The questionnaires also gathered data about pathology. The DTI images and test results were then fed into feature selection algorithms (Gradient Tree Boosting, L1-based, Random Forest and Univariate) to reduce the features of the first dataset, and into classification algorithms (SVM (Support Vector Machine), Boosting (Adaboost) and Naive Bayes) to classify the migraine group. Moreover, we implemented a committee method to improve classification accuracy based on the feature selection algorithms. When classifying the migraine group, the greatest improvements in accuracy were obtained using the proposed committee-based feature selection method. Using this approach, the accuracy of classification into three types improved from 67 to 93% with the Naive Bayes classifier, from 90 to 95% with the support vector machine classifier, and from 93 to 94% with boosting. The features determined to be most useful for classification were related to pain, analgesics and the left uncinate region of the brain (connected with pain and emotion). 
The proposed feature selection committee method improved the performance of migraine diagnosis classifiers compared to individual feature selection methods, producing a robust system that achieved over 90% accuracy with all classifiers. The results suggest that the proposed methods can be used to support specialists in the classification of migraine in patients undergoing magnetic resonance imaging.
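    The committee idea, keeping a feature only if enough individual selectors vote for it, can be sketched as follows (the selector outputs and feature names here are hypothetical, not the study's actual Gradient Tree Boosting / L1 / Random Forest / Univariate results):

```python
def committee_select(selections, min_votes):
    # selections: list of feature subsets chosen by the individual selectors;
    # keep every feature chosen by at least `min_votes` of them
    votes = {}
    for chosen in selections:
        for f in chosen:
            votes[f] = votes.get(f, 0) + 1
    return {f for f, v in votes.items() if v >= min_votes}

# Hypothetical outputs of four feature-selection algorithms
selectors = [{'fa', 'pain'}, {'fa', 'analgesics'},
             {'fa', 'pain', 'md'}, {'pain', 'analgesics'}]
kept = committee_select(selectors, min_votes=2)
print(sorted(kept))
```

The vote threshold trades off robustness against feature-set size: a higher threshold keeps only features that many selectors agree on.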

  18. Joint Feature Selection and Classification for Multilabel Learning.

    PubMed

    Huang, Jun; Li, Guorong; Huang, Qingming; Wu, Xindong

    2018-03-01

    Multilabel learning deals with examples having multiple class labels simultaneously. It has been applied to a variety of applications, such as text categorization and image annotation. A large number of algorithms have been proposed for multilabel learning, most of which concentrate on multilabel classification problems and only a few of which are feature selection algorithms. Current multilabel classification models are mainly built on a single data representation composed of all the features, shared by all the class labels. Since each class label might be decided by some specific features of its own, and the problems of classification and feature selection are often addressed independently, in this paper we propose a novel method, named JFSC, which performs joint feature selection and classification for multilabel learning. Different from many existing methods, JFSC learns both shared features and label-specific features by considering pairwise label correlations, and simultaneously builds the multilabel classifier on the learned low-dimensional data representations. A comparative study with state-of-the-art approaches demonstrates the competitive performance of the proposed method in both classification and feature selection for multilabel learning.

  19. A hybrid feature selection method using multiclass SVM for diagnosis of erythemato-squamous disease

    NASA Astrophysics Data System (ADS)

    Maryam, Setiawan, Noor Akhmad; Wahyunggoro, Oyas

    2017-08-01

    The diagnosis of erythemato-squamous disease is a complex problem and difficult in dermatology; it is also a major cause of skin cancer. Data mining implementation in the medical field helps experts diagnose precisely, accurately, and inexpensively. In this research, we use data mining techniques to develop a diagnosis model for erythemato-squamous disease based on a multiclass SVM with a novel hybrid feature selection method. Our hybrid feature selection method, named ChiGA (Chi-Square and Genetic Algorithm), uses the advantages of both filter and wrapper methods to select the optimal feature subset from the original features. Chi-square is used as the filter method to remove redundant features, and a GA as the wrapper method to select the ideal feature subset, with an SVM used as the classifier. Experiments were performed with 10-fold cross validation on the erythemato-squamous diseases dataset taken from the University of California Irvine (UCI) machine learning database. The experimental results show that the proposed multiclass SVM model with Chi-square and GA can give an optimum feature subset: 18 optimum features with 99.18% accuracy.
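    The filter stage of a ChiGA-style pipeline ranks each categorical feature by the chi-square statistic of its contingency table against the class. A pure-Python sketch on toy data (not the UCI erythemato-squamous dataset):

```python
def chi_square(feature, labels):
    # chi-square statistic of the feature-vs-class contingency table
    rows, cols = sorted(set(feature)), sorted(set(labels))
    n = len(labels)
    obs = {(f, l): 0 for f in rows for l in cols}
    for f, l in zip(feature, labels):
        obs[(f, l)] += 1
    row_tot = {f: sum(obs[(f, l)] for l in cols) for f in rows}
    col_tot = {l: sum(obs[(f, l)] for f in rows) for l in cols}
    # sum of (observed - expected)^2 / expected over all cells
    return sum((obs[(f, l)] - row_tot[f] * col_tot[l] / n) ** 2
               / (row_tot[f] * col_tot[l] / n)
               for f in rows for l in cols)

# Perfectly associated feature scores high; independent feature scores zero.
print(chi_square(['a', 'a', 'b', 'b'], [0, 0, 1, 1]))
print(chi_square(['a', 'b', 'a', 'b'], [0, 0, 1, 1]))
```

Low-scoring (near-independent) features are dropped before the GA wrapper searches the remaining subset with the SVM as the fitness evaluator.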

  20. Selective attention to temporal features on nested time scales.

    PubMed

    Henry, Molly J; Herrmann, Björn; Obleser, Jonas

    2015-02-01

    Meaningful auditory stimuli such as speech and music often vary simultaneously along multiple time scales. Thus, listeners must selectively attend to, and selectively ignore, separate but intertwined temporal features. The current study aimed to identify and characterize the neural network specifically involved in this feature-selective attention to time. We used a novel paradigm where listeners judged either the duration or modulation rate of auditory stimuli, and in which the stimulation, working memory demands, response requirements, and task difficulty were held constant. A first analysis identified all brain regions where individual brain activation patterns were correlated with individual behavioral performance patterns, which thus supported temporal judgments generically. A second analysis then isolated those brain regions that specifically regulated selective attention to temporal features: Neural responses in a bilateral fronto-parietal network including insular cortex and basal ganglia decreased with degree of change of the attended temporal feature. Critically, response patterns in these regions were inverted when the task required selectively ignoring this feature. The results demonstrate how the neural analysis of complex acoustic stimuli with multiple temporal features depends on a fronto-parietal network that simultaneously regulates the selective gain for attended and ignored temporal features. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Cost Attributable to Nosocomial Bacteremia. Analysis According to Microorganism and Antimicrobial Sensitivity in a University Hospital in Barcelona.

    PubMed

    Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc

    2016-01-01

    To calculate the incremental cost of nosocomial bacteremia caused by the most common organisms, classified by their antimicrobial susceptibility. We selected patients who developed nosocomial bacteremia caused by Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae, or Pseudomonas aeruginosa. These microorganisms were analyzed because of their high prevalence and frequent multidrug resistance. A control group consisted of patients classified within the same all-patient refined-diagnosis related group without bacteremia. Our hospital has an established cost accounting system (full-costing) that uses activity-based criteria to analyze cost distribution. A logistic regression model was fitted to estimate the probability of developing bacteremia for each admission (propensity score) and was used for propensity score matching adjustment. Subsequently, the propensity score was included in an econometric model to adjust the incremental cost of patients who developed bacteremia, as well as differences in this cost, depending on whether the microorganism was multidrug-resistant or multidrug-sensitive. A total of 571 admissions with bacteremia matched the inclusion criteria and 82,022 were included in the control group. The mean cost was € 25,891 for admissions with bacteremia and € 6,750 for those without bacteremia. The mean incremental cost was estimated at € 15,151 (CI, € 11,570 to € 18,733). Multidrug-resistant P. aeruginosa bacteremia had the highest mean incremental cost, € 44,709 (CI, € 34,559 to € 54,859). Antimicrobial-susceptible E. coli nosocomial bacteremia had the lowest mean incremental cost, € 10,481 (CI, € 8,752 to € 12,210). Despite their lower cost, episodes of antimicrobial-susceptible E. coli nosocomial bacteremia had a major impact due to their high frequency. 
Adjustment of hospital cost according to the organism causing bacteremia and antibiotic sensitivity could improve prevention strategies and allow their prioritization according to their overall impact and costs. Infection reduction is a strategy to reduce resistance.
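
    The propensity-score workflow described in the abstract (a logistic model for the probability of bacteremia, then matching admissions on that score) can be sketched as follows. The single-covariate data set, coefficients, and learning rate are invented for illustration and are not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic admissions: one covariate (e.g. length of stay), binary exposure.
n = 500
x = rng.normal(0.0, 1.0, size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.8 * x - 2.0)))     # exposure more likely at high x
exposed = rng.random(n) < p_true

# Fit a logistic regression by gradient ascent to get the propensity score.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w += 0.1 * np.mean((exposed - p) * x)
    b += 0.1 * np.mean(exposed - p)
score = 1.0 / (1.0 + np.exp(-(w * x + b)))

# 1:1 nearest-neighbour matching of each exposed admission to a control
# with the closest propensity score.
controls = np.flatnonzero(~exposed)
matches = {}
for i in np.flatnonzero(exposed):
    j = controls[np.argmin(np.abs(score[controls] - score[i]))]
    matches[i] = j

print(len(matches), "exposed admissions matched to controls")
```

    In the study itself the matched pairs would then feed an econometric model of incremental cost; here the sketch stops at the matching step.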

  2. Cost Attributable to Nosocomial Bacteremia. Analysis According to Microorganism and Antimicrobial Sensitivity in a University Hospital in Barcelona

    PubMed Central

    Riu, Marta; Chiarello, Pietro; Terradas, Roser; Sala, Maria; Garcia-Alzorriz, Enric; Castells, Xavier; Grau, Santiago; Cots, Francesc

    2016-01-01

    Aim To calculate the incremental cost of nosocomial bacteremia caused by the most common organisms, classified by their antimicrobial susceptibility. Methods We selected patients who developed nosocomial bacteremia caused by Staphylococcus aureus, Escherichia coli, Klebsiella pneumoniae, or Pseudomonas aeruginosa. These microorganisms were analyzed because of their high prevalence and their frequent multidrug resistance. A control group consisted of patients classified within the same all-patient refined-diagnosis related group without bacteremia. Our hospital has an established cost accounting system (full-costing) that uses activity-based criteria to analyze cost distribution. A logistic regression model was fitted to estimate the probability of developing bacteremia for each admission (propensity score) and was used for propensity score matching adjustment. Subsequently, the propensity score was included in an econometric model to adjust the incremental cost of patients who developed bacteremia, as well as differences in this cost depending on whether the microorganism was multidrug-resistant or multidrug-sensitive. Results A total of 571 admissions with bacteremia met the inclusion criteria and 82,022 were included in the control group. The mean cost was € 25,891 for admissions with bacteremia and € 6,750 for those without bacteremia. The mean incremental cost was estimated at € 15,151 (CI, € 11,570 to € 18,733). Multidrug-resistant P. aeruginosa bacteremia had the highest mean incremental cost, € 44,709 (CI, € 34,559 to € 54,859). Antimicrobial-susceptible E. coli nosocomial bacteremia had the lowest mean incremental cost, € 10,481 (CI, € 8,752 to € 12,210). Despite their lower cost, episodes of antimicrobial-susceptible E. coli nosocomial bacteremia had a major impact due to their high frequency. Conclusions Adjusting hospital costs according to the organism causing bacteremia and its antibiotic sensitivity could improve prevention strategies and allow their prioritization according to their overall impact and costs. Infection reduction is also a strategy to reduce resistance. PMID:27055117

  3. International Space Station Increment-2 Microgravity Environment Summary Report

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Hrovat, Kenneth; Kelly, Eric; McPherson, Kevin; Reckart, Timothy

    2002-01-01

    This summary report presents the results of some of the processed acceleration data collected aboard the International Space Station during the period of May to August 2001, the Increment-2 phase of the station. Two accelerometer systems were used to measure the acceleration levels during activities that took place during the Increment-2 segment. However, not all of the activities were analyzed for this report, due to time constraints and a lack of precise information regarding some payload operations and other station activities. The National Aeronautics and Space Administration sponsors the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System to support microgravity science experiments, which require microgravity acceleration measurements. On April 19, 2001, both the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System units were launched on STS-100 from the Kennedy Space Center for installation on the International Space Station. The Microgravity Acceleration Measurement System unit was flown to the station in support of science experiments requiring quasi-steady acceleration measurements, while the Space Acceleration Measurement System unit was flown to support experiments requiring vibratory acceleration measurement. Both acceleration systems are also used in support of vehicle microgravity requirements verification. The International Space Station Increment-2 reduced gravity environment analysis presented in this report uses acceleration data collected by both accelerometer systems: 1) the Microgravity Acceleration Measurement System, which consists of two sensors: the Orbital Acceleration Research Experiment Sensor Subsystem, a low frequency range sensor (up to 1 Hz) used to characterize the quasi-steady environment for payloads and the vehicle, and the High Resolution Accelerometer Package, which is used to characterize the vibratory environment up to 100 Hz; and 2) the Space Acceleration Measurement System, a high frequency sensor, which measures vibratory acceleration data in the range of 0.01 to 300 Hz. This summary report presents analysis of some selected quasi-steady and vibratory activities measured by these accelerometers during Increment-2 from May to August 20, 2001.

  4. A combinatorial feature selection approach to describe the QSAR of dual site inhibitors of acetylcholinesterase.

    PubMed

    Asadabadi, Ebrahim Barzegari; Abdolmaleki, Parviz; Barkooie, Seyyed Mohsen Hosseini; Jahandideh, Samad; Rezaei, Mohammad Ali

    2009-12-01

    Given the great potential of dual binding site inhibitors of acetylcholinesterase as future potent drugs for Alzheimer's disease, this study was devoted to extracting the most effective structural features of these inhibitors from among a large number of quantitative descriptors. To do this, we adopted a unique approach to quantitative structure-activity relationships. An efficient feature selection method was emphasized in this approach, using the confirmatory results of different routine and novel feature selection methods. The proposed methods generated quite consistent results, ensuring the effectiveness of the selected structural features.

  5. Cost-effectiveness of drug-eluting coronary stents in Quebec, Canada.

    PubMed

    Brophy, James M; Erickson, Lonny J

    2005-01-01

    The aim of this investigation was to assess the incremental cost-effectiveness of replacing bare metal coronary stents (BMS) with drug-eluting stents (DES) in the Province of Quebec, Canada. The strategy used was a cost-effectiveness analysis from the perspective of the health-care provider in the province of Quebec, Canada (population 7.5 million). The main outcome measure was the cost per avoided revascularization intervention. Based on the annual Quebec rate of 14,000 angioplasties with an average of 1.7 stents per procedure and a purchase cost of $2,600 Canadian dollars (CDN) per DES, 100 percent substitution of BMS with DES would require an additional $45.1 million CDN of funding. After the benefits of reduced repeat revascularization interventions are included, the incremental cost would be $35.2 million CDN. The cost per avoided revascularization intervention (18 percent coronary artery bypass graft, 82 percent percutaneous coronary intervention [PCI]) would be $23,067 CDN. If DES were offered selectively to higher risk populations, for example, a 20 percent subgroup with a relative restenosis risk of 2.5 times the current bare metal rate, the incremental cost of the program would be $4.9 million CDN, at a cost of $7,800 per avoided revascularization procedure. Break-even costs for the program would occur at a DES purchase cost of $1,161 for 100 percent DES use and $1,627 for selective 20 percent DES use in patients at high risk of restenosis (RR = 2.5). Univariate and Monte Carlo sensitivity analyses indicate that the parameters most affecting the analysis are the capacity to select patients at high risk of restenosis, the average number of stents used per PCI, baseline restenosis rates for BMS, the effectiveness ratio of restenosis prevention for DES versus BMS, the cost of DES, and the revascularization rate after initial PCI. Sensitivity analyses suggest little additional health benefit but escalating cost-effectiveness ratios once a DES penetration of 40 percent has been attained. Under current conditions in Quebec, Canada, selective use of DES in high-risk patients is the most acceptable strategy in terms of cost-effectiveness. Results of such an analysis would be expected to be similar in other countries with key model parameters similar to those used in this model. This model provides an example of how to evaluate the cost-effectiveness of selective use of a new technology in high-risk patients.
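
    The headline figures in this abstract follow from simple arithmetic. The BMS unit price is not reported, so the sketch below backs it out from the quoted $45.1 million incremental acquisition cost; treat that derived price as an assumption, not a figure from the study.

```python
# Reproducing the headline arithmetic from the Quebec DES analysis.
procedures = 14_000          # annual angioplasties in Quebec
stents_per_pci = 1.7
des_price = 2_600.0          # CDN$ per drug-eluting stent

stents_per_year = procedures * stents_per_pci            # 23,800 stents/year

# The BMS unit price is not reported; backing it out from the quoted
# $45.1M incremental acquisition cost makes it an assumption (~$705).
implied_bms_price = des_price - 45.1e6 / stents_per_year
print(f"implied BMS price: ${implied_bms_price:,.0f} CDN")

# Net incremental cost over avoided revascularizations gives the event count.
net_cost = 35.2e6
cost_per_avoided = 23_067.0
avoided = net_cost / cost_per_avoided
print(f"avoided revascularizations/yr: {avoided:,.0f}")   # ~1,526
```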

  6. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities.

    PubMed

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods on three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can both be helpful in improving the performance of the combined Fisher and Laplacian based feature selection method.
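
    The Fisher score referred to above has a standard supervised form: per-feature between-class scatter divided by within-class scatter. The sketch below implements that classical score on toy data; it is not the paper's extended semi-supervised criterion.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / within

# Toy descriptors: feature 0 separates the classes, feature 1 is noise.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([4, 0], 1, (50, 2))])
y = np.repeat([0, 1], 50)
scores = fisher_score(X, y)
print(scores)        # feature 0 scores far higher than feature 1
```

    Ranking descriptors by this score and keeping the top-k is the usual filter-style use of the criterion.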

  7. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities

    NASA Astrophysics Data System (ADS)

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods on three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can both be helpful in improving the performance of the combined Fisher and Laplacian based feature selection method.

  8. Irrelevant reward and selection histories have different influences on task-relevant attentional selection.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-07-01

    Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.

  9. Cost-effectiveness of screening for asymptomatic carotid atherosclerotic disease.

    PubMed

    Derdeyn, C P; Powers, W J

    1996-11-01

    The value of screening for asymptomatic carotid stenosis has become an important issue with the recently reported beneficial effect of endarterectomy. The purpose of this study is to evaluate the cost-effectiveness of using Doppler ultrasound as a screening tool to select subjects for arteriography and subsequent surgery. A computer model was developed to simulate the cost-effectiveness of screening a cohort of 1000 men during a 20-year period. The primary outcome measure was incremental present-value dollar expenditures for screening and treatment per incremental present-value quality-adjusted life-year (QALY) saved. Estimates of disease prevalence and arteriographic and surgical complication rates were obtained from the literature. Probabilities of stroke and death with surgical and medical treatment were obtained from published clinical trials. Doppler ultrasound sensitivity and specificity were obtained through review of local experience. Estimates of costs were obtained from local Medicare reimbursement data. A one-time screening program of a population with a high prevalence (20%) of ≥60% stenosis cost $35,130 per incremental QALY gained. Decreased surgical benefit or an increased annual discount rate was detrimental, resulting in lost QALYs. Annual screening cost $457,773 per incremental QALY gained. In a low-prevalence (4%) population, one-time screening cost $52,588 per QALY gained, while annual screening was detrimental. A one-time screening program for an asymptomatic population with a high prevalence of carotid stenosis may therefore be cost-effective; annual screening is detrimental. The most sensitive variables in this simulation model were the long-term stroke risk reduction after surgery and the annual discount rate for accumulated costs and QALYs.
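
    The present-value discounting behind figures like these works as follows; the up-front cost, yearly QALY gain, and 3% discount rate below are hypothetical and chosen only to show the mechanics.

```python
def present_value(stream, rate):
    """Discount a yearly stream (costs or QALYs) back to year 0."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(stream))

# Hypothetical strategy: $8,000 up front, 0.05 extra QALYs per year
# over a 20-year horizon, discounted at 3% per year.
extra_cost = 8_000.0
qaly_gain = present_value([0.05] * 20, 0.03)
icer = extra_cost / qaly_gain                  # incremental cost per QALY
print(f"discounted QALYs gained: {qaly_gain:.3f}")
print(f"cost per QALY: ${icer:,.0f}")
```

    Raising the discount rate shrinks the discounted QALY total and inflates the cost-per-QALY ratio, which is why the abstract flags the discount rate as one of the most sensitive model inputs.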

  10. Multi-level gene/MiRNA feature selection using deep belief nets and active learning.

    PubMed

    Ibrahim, Rania; Yousri, Noha A; Ismail, Mohamed A; El-Makky, Nagwa M

    2014-01-01

    Selecting the most discriminative genes/miRNAs has been raised as an important task in bioinformatics to enhance disease classifiers and to mitigate the curse of dimensionality. Traditional feature selection methods choose genes/miRNAs based on their individual merits, regardless of how they perform together. Considering group features instead of individual ones provides a better view for selecting the most informative genes/miRNAs. Recently, deep learning has proven its ability to represent data at multiple levels of abstraction, allowing for better discrimination between different classes. However, the idea of using deep learning for feature selection is not yet widely used in the bioinformatics field. In this paper, a novel multi-level feature selection approach named MLFS is proposed for selecting genes/miRNAs based on expression profiles. The approach is based on both deep and active learning. Moreover, an extension of the technique to miRNAs is presented by considering the biological relation between miRNAs and genes. Experimental results show that the approach was able to outperform classical feature selection methods in hepatocellular carcinoma (HCC) by 9%, lung cancer by 6% and breast cancer by around 10% in F1-measure. Results also show the enhancement in F1-measure of our approach over recently related work in [1] and [2].

  11. Fourier transform infrared spectroscopy microscopic imaging classification based on spatial-spectral features

    NASA Astrophysics Data System (ADS)

    Liu, Lian; Yang, Xiukun; Zhong, Mingliang; Liu, Yao; Jing, Xiaojun; Yang, Qin

    2018-04-01

    The discrete fractional Brownian incremental random (DFBIR) field is used to describe the irregular, random, and highly complex shapes of natural objects such as coastlines and biological tissues, for which traditional Euclidean geometry cannot be used. In this paper, an anisotropic variable window (AVW) directional operator based on the DFBIR field model is proposed for extracting spatial characteristics of Fourier transform infrared spectroscopy (FTIR) microscopic imaging. Probabilistic principal component analysis first extracts spectral features, and then the spatial features of the proposed AVW directional operator are combined with the former to construct a spatial-spectral structure, which increases feature-related information and helps a support vector machine classifier to obtain more efficient distribution-related information. Compared to Haralick’s grey-level co-occurrence matrix, Gabor filters, and local binary patterns (e.g. uniform LBPs, rotation-invariant LBPs, uniform rotation-invariant LBPs), experiments on three FTIR spectroscopy microscopic imaging datasets show that the proposed AVW directional operator is more advantageous in terms of classification accuracy, particularly for low-dimensional spaces of spatial characteristics.

  12. A Hierarchical Feature and Sample Selection Framework and Its Application for Alzheimer’s Disease Diagnosis

    NASA Astrophysics Data System (ADS)

    An, Le; Adeli, Ehsan; Liu, Mingxia; Zhang, Jun; Lee, Seong-Whan; Shen, Dinggang

    2017-03-01

    Classification is one of the most important tasks in machine learning. Due to feature redundancy or outliers in samples, using all available data for training a classifier may be suboptimal. For example, Alzheimer's disease (AD) is correlated with certain brain regions and single nucleotide polymorphisms (SNPs), and identification of relevant features is critical for computer-aided diagnosis. Many existing methods first select features from structural magnetic resonance imaging (MRI) or SNPs and then use those features to build the classifier. However, in the presence of many redundant features, the most discriminative features are difficult to identify in a single step. Thus, we formulate a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilize both labeled and unlabeled data during training, making our method semi-supervised. For validation, we conduct experiments on AD diagnosis by selecting mutually informative features from both MRI and SNP data, and using the most discriminative samples for training. The superior classification results demonstrate the effectiveness of our approach compared with rival methods.

  13. Associative memory for online learning in noisy environments using self-organizing incremental neural network.

    PubMed

    Sudo, Akihito; Sato, Akihiro; Hasegawa, Osamu

    2009-06-01

    Associative memory operating in a real environment must perform well in online incremental learning and be robust to noisy data, because noisy associative patterns are presented sequentially in a real environment. We propose a novel associative memory that satisfies these requirements. Using the proposed method, new associative pairs that are presented sequentially can be learned accurately without forgetting previously learned patterns. The memory size of the proposed method increases adaptively with the patterns learned. Therefore, it suffers neither redundancy nor insufficiency of memory size, even in an environment in which the maximum number of associative pairs to be presented is unknown before learning. Noisy inputs in real environments are classifiable into two types: noise-added original patterns and faultily presented random patterns. The proposed method deals with both types of noise. To our knowledge, no conventional associative memory addresses noise of both types. The proposed associative memory performs as a bidirectional one-to-many or many-to-one associative memory and deals not only with bipolar data but also with real-valued data. Results demonstrate that the proposed method's features are important for application to an intelligent robot operating in a real environment. The originality of our work consists of two points: employing a growing self-organizing network for an associative memory, and discussing what features are necessary in an associative memory for an intelligent robot and proposing an associative memory that satisfies those requirements.

  14. An improved wrapper-based feature selection method for machinery fault diagnosis

    PubMed Central

    2017-01-01

    A major issue in machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal. Thus, machine learning has been adopted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. However, a trade-off between the capability to select the best feature subset and the computational effort required is inevitable in the wrapper-based feature selection (WFS) method. This paper proposes an improved WFS technique, integrated with a support vector machine (SVM) classifier, as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was processed using the proposed WFS and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with a lower computational effort by eliminating the redundancy of re-evaluation. The proposed WFS has therefore been found to be capable of carrying out feature selection tasks efficiently. PMID:29261689
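
    A wrapper method of the kind discussed above evaluates candidate feature subsets by retraining a classifier on each. The sketch below shows the generic greedy forward-selection wrapper loop on synthetic data, with a nearest-centroid classifier standing in for the paper's SVM; it is not the paper's improved WFS algorithm.

```python
import numpy as np

def accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier on a feature subset."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask][:, feats], y[mask]
        cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i, feats] - cents[c]))
        correct += pred == y[i]
    return correct / len(y)

def forward_select(X, y):
    """Greedy forward wrapper: repeatedly add the feature that most improves accuracy."""
    selected, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        trial = {f: accuracy(X, y, selected + [f]) for f in remaining}
        f, acc = max(trial.items(), key=lambda kv: kv[1])
        if acc <= best:          # stop when no candidate improves the score
            break
        selected.append(f)
        remaining.remove(f)
        best = acc
    return selected, best

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
y = (X[:, 2] > 0).astype(int)    # only feature 2 is informative
feats, acc = forward_select(X, y)
print(feats, acc)
```

    The cost of this loop (one classifier fit per candidate per round) is exactly the computational burden that the paper's improvement targets by avoiding redundant re-evaluations.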

  15. Asymmetric bagging and feature selection for activities prediction of drug molecules.

    PubMed

    Li, Guo-Zheng; Meng, Hao-Hua; Lu, Wen-Cong; Yang, Jack Y; Yang, Mary Qu

    2008-05-28

    Activities of drug molecules can be predicted by QSAR (quantitative structure-activity relationship) models, which overcome the high cost and long cycle of the traditional experimental method. Because the number of drug molecules with positive activity is much smaller than the number of negatives, it is important to predict molecular activities with such an unbalanced situation in mind. Here, asymmetric bagging and feature selection are introduced into the problem, and asymmetric bagging of support vector machines (asBagging) is proposed for predicting drug activities to treat the unbalanced problem. At the same time, the features extracted from the structures of drug molecules affect the prediction accuracy of QSAR models. Therefore, a novel algorithm named PRIFEAB is proposed, which applies an embedded feature selection method to remove redundant and irrelevant features for asBagging. Numerical experimental results on a data set of molecular activities show that asBagging improves the AUC and sensitivity values of molecular activities, and PRIFEAB with feature selection further helps to improve the prediction ability. Asymmetric bagging can help to improve the prediction accuracy of activities of drug molecules, which can be further improved by performing feature selection to select relevant features from the drug molecule data sets.
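
    The asymmetric bagging idea (keep all scarce positives in every bag, bootstrap an equal number of negatives, then vote) can be sketched as follows. A trivial nearest-centroid classifier stands in for the SVM base learner, and the data are synthetic; this illustrates the bagging scheme, not the paper's asBagging or PRIFEAB implementations.

```python
import numpy as np

def nearest_centroid_predict(Xtr, ytr, Xte):
    """Tiny stand-in base classifier (the paper uses SVMs)."""
    c_pos = Xtr[ytr == 1].mean(axis=0)
    c_neg = Xtr[ytr == 0].mean(axis=0)
    d_pos = np.linalg.norm(Xte - c_pos, axis=1)
    d_neg = np.linalg.norm(Xte - c_neg, axis=1)
    return (d_pos < d_neg).astype(int)

def asymmetric_bagging(X, y, Xte, n_bags=11, seed=0):
    """Each bag: all positives plus an equal-sized bootstrap of negatives."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    votes = np.zeros(len(Xte))
    for _ in range(n_bags):
        bag_neg = rng.choice(neg, size=len(pos), replace=True)
        idx = np.concatenate([pos, bag_neg])
        votes += nearest_centroid_predict(X[idx], y[idx], Xte)
    return (votes > n_bags / 2).astype(int)     # majority vote

# Unbalanced toy set: 20 actives vs 200 inactives.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(2.0, 1.0, (20, 3)), rng.normal(0.0, 1.0, (200, 3))])
y = np.array([1] * 20 + [0] * 200)
Xte = np.vstack([rng.normal(2.0, 1.0, (10, 3)), rng.normal(0.0, 1.0, (10, 3))])
pred = asymmetric_bagging(X, y, Xte)
print(pred)
```

    Because every bag is balanced, each base learner sees positives and negatives in equal proportion, which is what rescues sensitivity on the minority (active) class.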

  16. System Complexity Reduction via Feature Selection

    ERIC Educational Resources Information Center

    Deng, Houtao

    2011-01-01

    This dissertation transforms a set of system complexity reduction problems to feature selection problems. Three systems are considered: classification based on association rules, network structure learning, and time series classification. Furthermore, two variable importance measures are proposed to reduce the feature selection bias in tree…

  17. Classification of motor imagery tasks for BCI with multiresolution analysis and multiobjective feature selection.

    PubMed

    Ortega, Julio; Asensio-Cubero, Javier; Gan, John Q; Ortiz, Andrés

    2016-07-15

    Brain-computer interfacing (BCI) applications based on the classification of electroencephalographic (EEG) signals require solving high-dimensional pattern classification problems with such a relatively small number of training patterns that curse-of-dimensionality problems usually arise. Multiresolution analysis (MRA) has useful properties for signal analysis in both the temporal and spectral domains, and has been broadly used in the BCI field. However, MRA usually increases the dimensionality of the input data. Therefore, some approach to feature selection or feature dimensionality reduction should be considered for improving the performance of MRA based BCI. This paper investigates feature selection in MRA-based frameworks for BCI. Several wrapper approaches to evolutionary multiobjective feature selection are proposed with different structures of classifiers. They are evaluated by comparison with baseline methods using sparse representation of features or no feature selection. The statistical analysis, applying the Kolmogorov-Smirnov and Kruskal-Wallis tests to the means of the Kappa values evaluated using the test patterns in each approach, has demonstrated some advantages of the proposed approaches. In comparison with the baseline MRA approach used in previous studies, the proposed evolutionary multiobjective feature selection approaches provide similar or even better classification performance, with a significant reduction in the number of features that need to be computed.
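
    The Kappa values used for evaluation above are Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal computation on a hypothetical four-class labelling:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)                       # observed agreement
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_o - p_e) / (1.0 - p_e)

# Toy labelling over four classes (e.g. four motor imagery tasks).
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]
print(round(cohens_kappa(y_true, y_pred), 3))   # 0.667
```

    Kappa is preferred over raw accuracy in multi-class BCI evaluation because it discounts the hits a classifier would score by guessing class proportions.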

  18. Feature Selection for Speech Emotion Recognition in Spanish and Basque: On the Use of Machine Learning to Improve Human-Computer Interaction

    PubMed Central

    Arruti, Andoni; Cearreta, Idoia; Álvarez, Aitor; Lazkano, Elena; Sierra, Basilio

    2014-01-01

    Study of emotions in human–computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish languages using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on the achieved results, a set of the most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested. PMID:25279686

  19. Inhomogeneous kinetic effects related to intermittent magnetic discontinuities

    NASA Astrophysics Data System (ADS)

    Greco, A.; Valentini, F.; Servidio, S.; Matthaeus, W. H.

    2012-12-01

    A connection between kinetic processes and two-dimensional intermittent plasma turbulence is observed using direct numerical simulations of a hybrid Vlasov-Maxwell model, in which the Vlasov equation is solved for protons, while the electrons are described as a massless fluid. During the development of turbulence, the proton distribution functions depart from the typical configuration of local thermodynamic equilibrium, displaying statistically significant non-Maxwellian features. In particular, temperature anisotropy and distortions are concentrated near coherent structures, generated as the result of the turbulent cascade, such as current sheets, which are nonuniformly distributed in space. Here, the partial variance of increments (PVI) method has been employed to identify high magnetic stress regions within a two-dimensional turbulent pattern. A quantitative association between non-Maxwellian features and coherent structures is established.
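
    The PVI method mentioned above normalizes vector-field increments by their root-mean-square value, PVI(s, τ) = |Δb(s, τ)| / ⟨|Δb|²⟩^(1/2) with Δb(s, τ) = b(s + τ) − b(s); large PVI flags sharp gradients such as current sheets. A minimal sketch on a synthetic one-dimensional signal (the simulations in the paper are two-dimensional):

```python
import numpy as np

def pvi(b, lag):
    """Partial Variance of Increments of a vector series b (N x 3):
    PVI = |db| / sqrt(<|db|^2>), with db(s) = b(s + lag) - b(s)."""
    db = b[lag:] - b[:-lag]
    mag = np.linalg.norm(db, axis=1)
    return mag / np.sqrt(np.mean(mag ** 2))

# Synthetic field: smooth random-walk background plus one sharp,
# current-sheet-like jump embedded halfway along the series.
rng = np.random.default_rng(4)
n = 4000
b = np.cumsum(rng.normal(0, 0.01, (n, 3)), axis=0)
b[n // 2:, 0] += 1.0
series = pvi(b, lag=1)
print(series.argmax(), series.max())   # the spike pinpoints the jump
```

    Thresholding the PVI series (values of a few and above) is how the intermittent, nonuniformly distributed high-stress regions are identified before checking them for non-Maxwellian kinetic features.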

  20. The impact of feature selection on one and two-class classification performance for plant microRNAs.

    PubMed

    Khalifa, Waleed; Yousef, Malik; Saçar Demirci, Müşerref Duygu; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short nucleotide sequences that form a typical hairpin structure which is recognized by a complex enzyme machinery. This ultimately leads to the incorporation of 18-24 nt long mature miRNAs into RISC, where they act as recognition keys to aid in regulation of target mRNAs. Determining miRNAs experimentally is an involved process, and machine learning is therefore used to complement such endeavors. The success of machine learning mostly depends on proper input data and appropriate features for parameterization of the data. Two-class classification (TCC) is generally used in the field, but because negative examples are hard to come by, one-class classification (OCC) has also been tried for pre-miRNA detection. Since both positive and negative examples are currently somewhat limited, feature selection can prove to be vital for furthering the field of pre-miRNA detection. In this study, we compare the performance of OCC and TCC using eight feature selection methods and seven different plant species providing positive pre-miRNA examples. Feature selection was very successful for OCC, where the best feature selection method achieved an average accuracy of 95.6%, thereby being ∼29% better than the worst method, which achieved 66.9% accuracy. While the performance is comparable to TCC, which performs up to 3% better than OCC, TCC is much less affected by feature selection and its largest performance gap is ∼13%, which occurs for only two of the feature selection methodologies. We conclude that feature selection is crucially important for OCC and that it can perform on par with TCC given the proper set of features.
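
    One-class classification as used above trains on positives only. The sketch below is a minimal distance-threshold OCC (a centroid plus a radius covering most training positives) on toy feature vectors; it illustrates the OCC setting, not the models evaluated in the study.

```python
import numpy as np

def fit_occ(X_pos, quantile=0.95):
    """One-class model: centroid of the positives plus a distance threshold
    chosen so that `quantile` of the training positives fall inside."""
    c = X_pos.mean(axis=0)
    d = np.linalg.norm(X_pos - c, axis=1)
    return c, np.quantile(d, quantile)

def predict_occ(model, X):
    c, thr = model
    return (np.linalg.norm(X - c, axis=1) <= thr).astype(int)

# Toy 'pre-miRNA feature vectors': positives clustered, negatives diffuse.
rng = np.random.default_rng(5)
pos = rng.normal(0.0, 1.0, (200, 4))
neg = rng.normal(0.0, 4.0, (200, 4))
model = fit_occ(pos)
tpr = predict_occ(model, pos).mean()
tnr = 1.0 - predict_occ(model, neg).mean()
print(f"TPR {tpr:.2f}, TNR {tnr:.2f}")
```

    Because the threshold is fitted from positives alone, the choice of features directly shapes the decision boundary, which is one intuition for why OCC is so much more sensitive to feature selection than TCC.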

  1. FSR: feature set reduction for scalable and accurate multi-class cancer subtype classification based on copy number.

    PubMed

    Wong, Gerard; Leckie, Christopher; Kowalczyk, Adam

    2012-01-15

    Feature selection is a key concept in machine learning for microarray datasets, where the number of features, represented by probesets, is typically several orders of magnitude larger than the available sample size. Computational tractability is a key challenge for feature selection algorithms handling very high-dimensional datasets beyond a hundred thousand features, such as datasets produced on single nucleotide polymorphism microarrays. In this article, we present a novel feature set reduction approach that enables scalable feature selection on datasets with hundreds of thousands of features and beyond. Our approach enables more efficient handling of higher-resolution datasets to achieve better disease subtype classification of samples for potentially more accurate diagnosis and prognosis, allowing clinicians to make more informed decisions regarding patient treatment options. We applied our feature set reduction approach to several publicly available cancer single nucleotide polymorphism (SNP) array datasets and evaluated its performance in terms of its multiclass predictive classification accuracy over different cancer subtypes, its speedup in execution, and its scalability with respect to sample size and array resolution. Feature Set Reduction (FSR) was able to reduce the dimensions of an SNP array dataset by more than two orders of magnitude while achieving at least equal, and in most cases superior, predictive classification performance compared with that achieved on features selected by existing feature selection methods alone. An examination of the biological relevance of frequently selected features from FSR-reduced feature sets revealed strong enrichment in association with cancer. FSR was implemented in MATLAB R2010b and is available at http://ww2.cs.mu.oz.au/~gwong/FSR.

  2. Selective processing of multiple features in the human brain: effects of feature type and salience.

    PubMed

    McGinnis, E Menton; Keil, Andreas

    2011-02-09

    Identifying targets in a stream of items at a given constant spatial location relies on selection of aspects such as color, shape, or texture. Such attended (target) features of a stimulus elicit a negative-going event-related brain potential (ERP), termed Selection Negativity (SN), which has been used as an index of selective feature processing. In two experiments, participants viewed a series of Gabor patches in which targets were defined as a specific combination of color, orientation, and shape. Distracters were composed of different combinations of color, orientation, and shape of the target stimulus. This design allows comparisons of items with and without specific target features. Consistent with previous ERP research, SN deflections extended between 160-300 ms. Data from the subsequent P3 component (300-450 ms post-stimulus) were also examined, and were regarded as an index of target processing. In Experiment A, predominant effects of target color on SN and P3 amplitudes were found, along with smaller ERP differences in response to variations of orientation and shape. Manipulating color to be less salient while enhancing the saliency of the orientation of the Gabor patch (Experiment B) led to delayed color selection and enhanced orientation selection. Topographical analyses suggested that the location of SN on the scalp reliably varies with the nature of the to-be-attended feature. No interference of non-target features on the SN was observed. These results suggest that target feature selection operates by means of electrocortical facilitation of feature-specific sensory processes, and that selective electrocortical facilitation is more effective when stimulus saliency is heightened.

  3. Feature selection for the classification of traced neurons.

    PubMed

    López-Cabrera, José D; Lorenzo-Ginori, Juan V

    2018-06-01

    The great availability of computational tools to calculate the properties of traced neurons has led to many descriptors that allow the automated classification of neurons from these reconstructions. This situation creates the need to eliminate irrelevant features and to select the most appropriate among them, in order to improve the quality of the classification obtained. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, one of the most used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques, including filter, wrapper, embedded, and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers; Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table, and the Logistic classifier were used as classification algorithms. Feature selection methods of the filter, embedded, wrapper, and ensemble types were compared, and the subsets returned were tested in classification tasks for different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD, and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
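The abstract notes that the stability of each selection method was measured. One common stability metric (an assumption here; the paper does not state which metric it used) is the mean pairwise Jaccard index between the feature subsets selected on different runs or resamples:

```python
import numpy as np

def jaccard_stability(subsets):
    """Mean pairwise Jaccard index over selected-feature subsets.

    subsets : list of iterables of feature names or indices, one per run.
    Returns a value in [0, 1]; 1 means every run selected identical features.
    """
    sims = []
    for i in range(len(subsets)):
        for j in range(i + 1, len(subsets)):
            a, b = set(subsets[i]), set(subsets[j])
            sims.append(len(a & b) / len(a | b))
    return float(np.mean(sims))

# Two runs agreeing on two L-measure features but differing on the third:
runs = [{"EucDistanceSD", "PathDistanceSD", "Branch_pathlengthAve"},
        {"EucDistanceSD", "PathDistanceSD", "EucDistanceAve"}]
score = jaccard_stability(runs)   # intersection 2, union 4 -> 0.5
```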

  4. Color-selective attention need not be mediated by spatial attention.

    PubMed

    Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A

    2009-06-08

    It is well-established that attention can select stimuli for preferential processing on the basis of non-spatial features such as color, orientation, or direction of motion. Evidence is mixed, however, as to whether feature-selective attention acts by increasing the signal strength of to-be-attended features irrespective of their spatial locations or whether it acts by guiding the spotlight of spatial attention to locations containing the relevant feature. To address this question, we designed a task in which feature-selective attention could not be mediated by spatial selection. Participants observed a display of intermingled dots of two colors, which rapidly and unpredictably changed positions, with the task of detecting brief intervals of reduced luminance of 20% of the dots of one or the other color. Both behavioral indices and electrophysiological measures of steady-state visual evoked potentials showed selectively enhanced processing of the attended-color items. The results demonstrate that feature-selective attention produces a sensory gain enhancement at early levels of the visual cortex that occurs without mediation by spatial attention.

  5. RESIDENTIAL RADON RESISTANT CONSTRUCTION FEATURE SELECTION SYSTEM

    EPA Science Inventory

    The report describes a proposed residential radon resistant construction feature selection system. The features consist of engineered barriers to reduce radon entry and accumulation indoors. The proposed Florida standards require radon resistant features in proportion to regional...

  6. Object-based selection from spatially-invariant representations: evidence from a feature-report task.

    PubMed

    Matsukura, Michi; Vecera, Shaun P

    2011-02-01

    Attention selects objects as well as locations. When attention selects an object's features, observers identify two features from a single object more accurately than two features from two different objects (object-based effect of attention; e.g., Duncan, Journal of Experimental Psychology: General, 113, 501-517, 1984). Several studies have demonstrated that object-based attention can operate at a late visual processing stage that is independent of objects' spatial information (Awh, Dhaliwal, Christensen, & Matsukura, Psychological Science, 12, 329-334, 2001; Matsukura & Vecera, Psychonomic Bulletin & Review, 16, 529-536, 2009; Vecera, Journal of Experimental Psychology: General, 126, 14-18, 1997; Vecera & Farah, Journal of Experimental Psychology: General, 123, 146-160, 1994). In the present study, we asked two questions regarding this late object-based selection mechanism. In Part I, we investigated how observers' foreknowledge of to-be-reported features allows attention to select objects, as opposed to individual features. Using a feature-report task, a significant object-based effect was observed when to-be-reported features were known in advance but not when this advance knowledge was absent. In Part II, we examined what drives attention to select objects rather than individual features in the absence of observers' foreknowledge of to-be-reported features. Results suggested that, when there was no opportunity for observers to direct their attention to objects that possess to-be-reported features at the time of stimulus presentation, these stimuli must retain strong perceptual cues to establish themselves as separate objects.

  7. Experimental Investigation of Inlet Distortion in a Multistage Axial Compressor

    NASA Astrophysics Data System (ADS)

    Rusu, Razvan

    The primary objective of this research is to present results and methodologies used to study total pressure inlet distortion in a multi-stage axial compressor environment. The study was performed at the Purdue 3-Stage Axial Compressor Facility (P3S), which models the final three stages of a production turbofan engine's high-pressure compressor (HPC). The goal of this study was twofold: first, to design, implement, and validate a circumferentially traversable total pressure inlet distortion generation system, and second, to demonstrate data acquisition methods to characterize the inter-stage total pressure flow fields and thereby study the propagation and attenuation of a one-per-rev total pressure distortion. The datasets acquired for this study are intended to support the development and validation of novel computational tools and flow physics models for turbomachinery flow analysis. Total pressure inlet distortion was generated using a series of low-porosity wire gauze screens placed upstream of the compressor in the inlet duct. The screens are mounted to a rotatable duct section whose position can be precisely controlled. The P3S compressor features fixed instrumentation stations located at the aerodynamic interface plane (AIP) and downstream and upstream of each vane row. Furthermore, the compressor features individually indexable stator vanes which can be traversed by up to two vane passages. Using a series of coordinated distortion and vane traverses, the total pressure flow field at the AIP and subsequent inter-stage stations was characterized with high circumferential resolution. The uniformity of the honeycomb carrier was demonstrated by characterizing the flow field at the AIP while no distortion screens were installed. Next, the distortion screen used for this study was selected following three iterations of porosity reduction. The selected screen consisted of a series of layered screens with a 100% radial extent and a 120° circumferential extent.
A detailed total pressure flow field characterization of the AIP was performed using the selected screen at nominal, low, and high compressor loading. Thermal anemometry was used to characterize the spatial variation in turbulence intensity at the AIP in an effort to further define inlet boundary conditions for future computational investigations. Two data acquisition methods for the study of distortion propagation and attenuation were utilized in this study. The first method approximated the bulk flow through each vane passage using a single rake measurement positioned near the center of the passage. All vane passages were measured virtually by rotating the distortion upstream by an increment equal to one vane passage. This method proved successful in tracking the distortion propagation and attenuation from the AIP up until the compressor exit. A second, more detailed, inter-stage flow field characterization method was used that generated a total pressure field with a circumferential resolution of 880 increments, or one every 0.41°. The resulting fields demonstrated the importance of secondary flows in the propagation of a total pressure distortion at the different loading conditions investigated. A second objective of this research was to document proposals and design efforts to outfit the existing P3S research compressor with a strain gage telemetry system. The purpose of this system is to validate and supplement existing blade tip timing data on the embedded rotor stage to support the development and validation of novel aeromechanical analysis tools. Integration strategies and telemetry considerations are discussed based on proposals and consultation provided by suppliers.

  8. The Dark Side of Incremental Learning: A Model of Cumulative Semantic Interference during Lexical Access in Speech Production

    ERIC Educational Resources Information Center

    Oppenheim, Gary M.; Dell, Gary S.; Schwartz, Myrna F.

    2010-01-01

    Naming a picture of a dog primes the subsequent naming of a picture of a dog (repetition priming) and interferes with the subsequent naming of a picture of a cat (semantic interference). Behavioral studies suggest that these effects derive from persistent changes in the way that words are activated and selected for production, and some have…

  9. 12 strategies for managing capital projects.

    PubMed

    Stoudt, Richard L

    2013-05-01

    To reduce the amount of time and cost associated with capital projects, healthcare leaders should: Begin the project with a clear objective and a concise master facilities plan. Select qualified team members who share the vision of the owner. Base the size of the project on a conservative business plan. Minimize incremental program requirements. Evaluate the cost impact of the building footprint. Consider alternative delivery methods.

  10. 77 FR 3712 - Approval and Promulgation of Air Quality Implementation Plans; Ohio; Regional Haze

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... equal incremental change in visibility perceived by the human eye. Most people can detect a change in... desulfurization (FGD), semi-dry FGD, and over-fire air (OFA) with a sorbent injection system (SIS). Ohio and MPRO... on up to 7 days a year. Ohio selected semi-dry FGD as the BART SO 2 control, which is expected to...

  11. Defense Acquisitions: Assessments of Selected Weapon Programs

    DTIC Science & Technology

    2017-03-01

    PAC-3 MSE) 81 Warfighter Information Network-Tactical (WIN-T) Increment 2 83 Improved Turbine Engine Program (ITEP) 85 Long Range Precision Fires...incorporated certain 2010 acquisition reform initiatives. DOD and Congress have previously addressed some of the challenges and problems in the defense...additional quantities. While that does represent a cost increase, it does not necessarily indicate acquisition problems or a loss of buying power

  12. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the optimization process of feature selection. This study investigates fixing a few user-input features in the final selected feature subset and formulates these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than existing feature selection algorithms, even with constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and biomedical knowledge. fsCoP may also be used for interactive exploration of bio-OMIC big data by interactively adding user-defined constraints to the model.
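fsCoP itself is a constrained-programming formulation; a much simpler illustration of the same idea is to pin the user-chosen features and fill the remaining slots by a univariate score. The ANOVA F-test filter below is an assumption for illustration, not the authors' method:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

def constrained_select(X, y, must_keep, k):
    """Return k feature indices that always include the user-pinned ones."""
    scores, _ = f_classif(X, y)              # univariate ANOVA F-scores
    chosen = list(dict.fromkeys(must_keep))  # pinned features first, de-duplicated
    for idx in np.argsort(scores)[::-1]:     # then best-scoring features
        if len(chosen) == k:
            break
        if idx not in chosen:
            chosen.append(int(idx))
    return sorted(chosen)

X, y = make_classification(n_samples=100, n_features=30, n_informative=5,
                           random_state=0)
# Features 7 and 12 play the role of user-input constraints.
subset = constrained_select(X, y, must_keep=[7, 12], k=10)
```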

  13. Prediction of different stages of Alzheimer's disease using neighborhood component analysis and ensemble decision tree.

    PubMed

    Jin, Mingwu; Deng, Weishu

    2018-05-15

    There is a spectrum of progression from healthy control (HC) to mild cognitive impairment (MCI) without conversion to Alzheimer's disease (AD), to MCI with conversion to AD (cMCI), and to AD. This study aims to predict these disease stages using brain structural information provided by magnetic resonance imaging (MRI) data. Neighborhood component analysis (NCA) is applied to select the most powerful features for prediction. An ensemble decision tree classifier is built to predict which group a subject belongs to. The best features and model parameters are determined by cross-validation of the training data. Our results show that 16 out of a total of 429 features were selected by NCA using 240 training subjects, including the MMSE score and structural measures in memory-related regions. The boosting tree model with NCA features achieved a prediction accuracy of 56.25% on 160 test subjects. For comparison, principal component analysis (PCA) and sequential feature selection (SFS) were used for feature selection, and support vector machine (SVM) for classification. The boosting tree model with NCA features outperformed all other combinations of feature selection and classification methods. The results suggest that NCA is a better feature selection strategy than PCA and SFS for the data used in this study, and that an ensemble tree classifier with boosting is more powerful than SVM for predicting the subject group. However, more advanced feature selection and classification methods, or additional measures besides structural MRI, may be needed to improve the prediction performance. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method.

  15. Application of machine learning on brain cancer multiclass classification

    NASA Astrophysics Data System (ADS)

    Panca, V.; Rustam, Z.

    2017-07-01

    Classification of brain cancer is a multiclass classification problem. One approach is to first transform it into several binary problems. The microarray gene expression dataset has the two main characteristics of medical data: extremely many features (genes) and only a small number of samples. The application of machine learning to microarray gene expression data mainly consists of two steps: feature selection and classification. In this paper, the features are selected using a method based on the support vector machine recursive feature elimination (SVM-RFE) principle, extended to multiclass classification and called multiple multiclass SVM-RFE. Instead of using only the selected features on a single classifier, this method combines the results of multiple classifiers. The features are divided into subsets and SVM-RFE is used on each subset. Then, the features selected on each subset are fed to separate classifiers. This method enhances the feature selection ability of each single SVM-RFE. Twin support vector machine (TWSVM) is used as the classifier to reduce computational complexity. While an ordinary SVM finds a single optimum hyperplane, the main objective of Twin SVM is to find two non-parallel optimum hyperplanes. The experiment on the brain cancer microarray gene expression dataset shows that this method classifies 71.4% of the overall test data correctly, using 100 and 1000 genes selected by the multiple multiclass SVM-RFE feature selection method. Furthermore, the per-class results show that this method classifies data of the normal and MD classes with 100% accuracy.
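The core SVM-RFE step underlying the scheme above can be sketched with scikit-learn. This shows plain multiclass SVM-RFE on a single feature set, not the full multiple-subset ensemble, and the synthetic data sizes are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Many features, few samples: the microarray-like regime described above.
X, y = make_classification(n_samples=90, n_features=200, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# RFE repeatedly fits the linear SVM and drops the lowest-|weight| features.
svm = LinearSVC(C=1.0, max_iter=10000)
selector = RFE(svm, n_features_to_select=20, step=10)
selector.fit(X, y)
selected = selector.support_        # boolean mask over the 200 features
```

For the multiclass case, RFE ranks each feature by the sum of its squared weights across the one-vs-rest hyperplanes of `LinearSVC`.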

  16. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.

  17. Decision Variants for the Automatic Determination of Optimal Feature Subset in RF-RFE.

    PubMed

    Chen, Qi; Meng, Zhaopeng; Liu, Xinyi; Jin, Qianguo; Su, Ran

    2018-06-15

    Feature selection, which identifies a set of the most informative features from the original feature space, has been widely used to simplify predictors. Recursive feature elimination (RFE), one of the most popular feature selection approaches, is effective in reducing data dimensionality and increasing efficiency. RFE produces a ranking of features, as well as candidate subsets with their corresponding accuracies. The subset with the highest accuracy (HA) or a preset number of features (PreNum) is often used as the final subset. However, this may lead to a large number of features being selected, or, if there is no prior knowledge about this preset number, the final subset selection is often ambiguous and subjective. A proper decision variant is in high demand to automatically determine the optimal subset. In this study, we conduct pioneering work to explore the decision variant after obtaining a list of candidate subsets from RFE. We provide a detailed analysis and comparison of several decision variants for automatically selecting the optimal feature subset. A random forest (RF)-recursive feature elimination (RF-RFE) algorithm and a voting strategy are introduced. We validated the variants on two entirely different molecular biology datasets, one from a toxicogenomic study and the other from protein sequence analysis. The study provides an automated way to determine the optimal feature subset when using RF-RFE.
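scikit-learn's RFECV implements one such decision variant: it runs RFE, scores every candidate subset size by cross-validation, and picks the best-scoring size automatically. A sketch with a random forest as the estimator (the synthetic data and parameter choices are assumptions, not the paper's setup or its specific voting strategy):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X, y = make_classification(n_samples=120, n_features=25, n_informative=4,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
# Cross-validated accuracy acts as the decision variant that chooses the size.
selector = RFECV(rf, step=2, cv=3, scoring="accuracy")
selector.fit(X, y)
best_k = selector.n_features_       # automatically determined subset size
```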

  18. Biofiltration of air polluted with methane at concentration levels similar to swine slurry emissions: influence of ammonium concentration.

    PubMed

    Veillette, Marc; Avalos Ramirez, Antonio; Heitz, Michèle

    2012-01-01

    An evaluation of the effect of ammonium on the performance of two up-flow inorganic packed bed biofilters treating methane was conducted. The air flow rate was set to 3.0 L min(-1) for an empty bed residence time of 6.0 min. The biofilter was fed with a methane concentration of 0.30% (v/v). The ammonium concentration in the nutrient solution was increased by small increments (from 0.01 to 0.025 gN-NH(4) (+) L(-1)) for one biofilter and by large increments of 0.05 gN-NH(4) (+) L(-1) in the other biofilter. The total concentration of nitrogen was kept constant at 0.5 gN-NH(4) (+) L(-1) throughout the experiment by balancing ammonium with nitrate. For both biofilters, the methane elimination capacity, carbon dioxide production, nitrogen bed retention and biomass content decreased with the ammonium concentration in the nutrient solution. The biofilter with smaller ammonium increments featured a higher elimination capacity and carbon dioxide production rate, which varied from 4.9 to 14.3 g m(-3) h(-1) and from 11.5 to 30 g m(-3) h(-1), respectively. Denitrification was observed, as some values of the nitrate production rate were negative for ammonium concentrations below 0.2 gN-NH(4) (+) L(-1). A Michaelis-Menten-type model fitted the ammonium elimination rate and the nitrate production rate.

  19. Individual subject classification for Alzheimer's disease based on incremental learning using a spatial frequency representation of cortical thickness data.

    PubMed

    Cho, Youngsang; Seong, Joon-Kyung; Jeong, Yong; Shin, Sung Yong

    2012-02-01

    Patterns of brain atrophy measured by magnetic resonance structural imaging have been utilized as significant biomarkers for diagnosis of Alzheimer's disease (AD). However, brain atrophy is variable across patients and is non-specific for AD in general. Thus, automatic methods for AD classification require a large number of structural data due to complex and variable patterns of brain atrophy. In this paper, we propose an incremental method for AD classification using cortical thickness data. We represent the cortical thickness data of a subject in terms of their spatial frequency components, employing the manifold harmonic transform. The basis functions for this transform are obtained from the eigenfunctions of the Laplace-Beltrami operator, which are dependent only on the geometry of a cortical surface but not on the cortical thickness defined on it. This facilitates individual subject classification based on incremental learning. In general, methods based on region-wise features poorly reflect the detailed spatial variation of cortical thickness, and those based on vertex-wise features are sensitive to noise. Adopting a vertex-wise cortical thickness representation, our method can still achieve robustness to noise by filtering out high frequency components of the cortical thickness data while reflecting their spatial variation. This compromise leads to high accuracy in AD classification. We utilized MR volumes provided by Alzheimer's Disease Neuroimaging Initiative (ADNI) to validate the performance of the method. Our method discriminated AD patients from Healthy Control (HC) subjects with 82% sensitivity and 93% specificity. It also discriminated Mild Cognitive Impairment (MCI) patients, who converted to AD within 18 months, from non-converted MCI subjects with 63% sensitivity and 76% specificity. Moreover, it showed that the entorhinal cortex was the most discriminative region for classification, which is consistent with previous pathological findings. 
In comparison with other classification methods, our method demonstrated high classification performance in both categories, which supports the discriminative power of our method in both AD diagnosis and AD prediction. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. A near-optimum procedure for selecting stations in a streamgaging network

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    2005-01-01

    Two questions are fundamental to Federal government goals for the network of streamgages operated by the U.S. Geological Survey: (1) how well does the present network of streamgaging stations meet defined Federal goals, and (2) what is the optimum set of stations to add or reactivate to support the remaining goals? The solution involves an incremental-stepping procedure based on Basic Feasible Incremental Solutions (BFISs), where each BFIS satisfies at least one Federal streamgaging goal. A set of minimum Federal goals for streamgaging is defined to include water measurements for legal compacts and decrees, flooding, water budgets, regionalization of streamflow characteristics, and water quality. Fully satisfying all these goals under the assumptions outlined in this paper would require adding 887 new streamgaging stations to the U.S. Geological Survey network and reactivating an additional 857 stations that are currently inactive.

  1. Fracture processes and mechanisms of crack growth resistance in human enamel

    NASA Astrophysics Data System (ADS)

    Bajaj, Devendra; Park, Saejin; Quinn, George D.; Arola, Dwayne

    2010-07-01

    Human enamel has a complex microstructure that varies with distance from the tooth's outer surface, but the contributions of the microstructure to fracture toughness and the mechanisms of crack growth resistance have not been explored in detail. In this investigation the apparent fracture toughness of human enamel and the mechanisms of crack growth resistance were evaluated using the indentation fracture approach and an incremental crack growth technique. Indentation cracks were introduced on polished surfaces of enamel at selected distances from the occlusal surface. In addition, an incremental crack growth approach using compact tension specimens was used to quantify the crack growth resistance as a function of distance from the occlusal surface. There were significant differences in the apparent toughness estimated by the two approaches, which was attributed to the active crack length and the corresponding scale of the toughening mechanisms.

  2. Analysis of residual stress state in sheet metal parts processed by single point incremental forming

    NASA Astrophysics Data System (ADS)

    Maaß, F.; Gies, S.; Dobecki, M.; Brömmelhoff, K.; Tekkaya, A. E.; Reimers, W.

    2018-05-01

    The mechanical properties of formed metal components are highly affected by the prevailing residual stress state. A selective induction of residual compressive stresses in the component can improve product properties such as fatigue strength. By means of single point incremental forming (SPIF), the residual stress state can be influenced by adjusting the process parameters during the manufacturing process. To achieve a fundamental understanding of the residual stress formation caused by the SPIF process, a valid numerical process model is essential. Within the scope of this paper the significance of kinematic hardening effects for the determined residual stress state is presented based on numerical simulations. The effect of the unclamping step after the manufacturing process is also analyzed. An average deviation of 18% between the residual stress amplitudes in the clamped and unclamped conditions reveals that the unclamping step needs to be considered to reach a high numerical prediction quality.

  3. Is the use of esomeprazole in gastroesophageal reflux disease a cost-effective option in Poland?

    PubMed

    Petryszyn, Pawel; Staniak, Aleksandra; Grzegrzolka, Jedrzej

    2016-03-01

    To compare the cost-effectiveness of therapy of gastroesophageal reflux disease with esomeprazole and other proton pump inhibitors (PPIs) in Poland. Studies comparing esomeprazole with other PPIs in the treatment of erosive esophagitis, non-erosive reflux disease and gastroesophageal reflux disease maintenance therapy were systematically reviewed. Nine randomized clinical trials were selected and meta-analyses were conducted. Cost data were derived from the Polish Ministry of Health and from pharmacies in Wroclaw. In the treatment of erosive esophagitis, esomeprazole was significantly more effective than other PPIs, and for both 4- and 8-week therapy the respective incremental cost-effectiveness ratio values were acceptably low. Differences in the effectiveness of non-erosive reflux disease therapy were not significant. Replacing pantoprazole 20 mg with the more effective esomeprazole 20 mg in 6-month maintenance therapy was associated with a substantially high incremental cost-effectiveness ratio.

  4. EFFECT OF COHERENT STRUCTURES ON ENERGETIC PARTICLE INTENSITY IN THE SOLAR WIND AT 1 AU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tessein, Jeffrey A.; Matthaeus, William H.; Wan, Minping

    2015-10-10

    We present results from an analysis of Advanced Composition Explorer (ACE) observations of energetic particles in the 0.047–4.78 MeV range associated with shocks and discontinuities in the solar wind. Previous work found a strong correlation between coherent structures and energetic particles measured by ACE/EPAM. Coherent structures are identified using the Partial Variance of Increments (PVI) method, which is essentially a normalized vector increment. The correlation was based on a superposed epoch analysis using over 12 years of data. Here, we examine many individual high-PVI events to better understand this association, emphasizing intervals selected from data with shock neighborhoods removed. We find that in many cases the local maximum in PVI is in a region of rising or falling energetic particle intensity, which suggests that magnetic discontinuities may act as barriers inhibiting the motion of energetic particles across them.
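
    The PVI diagnostic described above is simply a vector increment normalized by its root-mean-square value, PVI(t) = |ΔB(t)| / √⟨|ΔB|²⟩. A minimal sketch on synthetic data (the field series and the embedded discontinuity are illustrative, not ACE data):

    ```python
    import numpy as np

    def pvi(b, lag=1):
        """Partial Variance of Increments for a vector time series.

        b : array of shape (n_samples, 3), e.g. magnetic field components.
        Returns PVI(t) = |dB(t)| / sqrt(<|dB|^2>), where dB(t) = b[t+lag] - b[t]
        and <.> is an average over the whole series.
        """
        db = b[lag:] - b[:-lag]                   # vector increments
        mag = np.linalg.norm(db, axis=1)          # |dB(t)|
        return mag / np.sqrt(np.mean(mag**2))     # normalized increment

    # Synthetic field: a random walk with one sharp jump (a "current sheet")
    rng = np.random.default_rng(0)
    b = np.cumsum(rng.normal(size=(1000, 3)) * 0.1, axis=0)
    b[500:] += np.array([5.0, 0.0, 0.0])
    series = pvi(b)
    print(series.argmax())  # → 499, the increment spanning the jump
    ```

    High-PVI events (e.g. PVI > 3) flag exactly the kind of coherent structures the study associates with energetic-particle intensity changes.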

  5. System for controlling a hybrid energy system

    DOEpatents

    Hoff, Brian D.; Akasam, Sivaprasad

    2013-01-29

    A method includes identifying a first operating sequence of a repeated operation of at least one non-traction load. The method also includes determining first and second parameters respectively indicative of a requested energy and output energy of the at least one non-traction load and comparing the determined first and second parameters at a plurality of time increments of the first operating sequence. The method also includes determining a third parameter of the hybrid energy system indicative of energy regenerated from the at least one non-traction load and monitoring the third parameter at the plurality of time increments of the first operating sequence. The method also includes determining at least one of an energy deficiency or an energy surplus associated with the non-traction load of the hybrid energy system and selectively adjusting energy stored within the storage device during at least a portion of a second operating sequence.

  6. An evaluation of the impact of flooring types on exposures to fine and coarse particles within the residential micro-environment using CONTAM.

    PubMed

    Bramwell, Lisa; Qian, Jing; Howard-Reed, Cynthia; Mondal, Sumona; Ferro, Andrea R

    2016-01-01

    Typical resuspension activities within the home, such as walking, have been estimated to contribute up to 25% of personal exposures to PM10. Chamber studies have shown that for moderate walking intensities, flooring type can impact the rate at which particles are re-entrained into the air. For this study, the impact of residential flooring type on incremental daily (24 h) time-averaged exposure was investigated. Distributions of incremental time-averaged daily exposures to fine and coarse PM while walking within the residential micro-environment were predicted using CONTAM, the multizone airflow and contaminant transport program of the National Institute of Standards and Technology. Knowledge of when and where a person was walking was determined by randomly selecting 490 daily diaries from the EPA's Consolidated Human Activity Database (CHAD). On the basis of the results of this study, residential flooring type can significantly impact incremental time-averaged daily exposures to coarse and fine particles (α=0.05, P<0.05, N=490, Kruskal-Wallis test), with high-density cut pile carpeting resulting in the highest exposures. From this study, resuspension from walking within the residential micro-environment contributed 6-72% of time-averaged daily exposures to PM10.
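
    A time-averaged daily exposure is just the time-weighted mean of the concentrations a person encounters, E = Σ Cᵢtᵢ / Σ tᵢ. A minimal sketch (this is not the CONTAM model itself, and the concentrations and durations are made up for illustration):

    ```python
    # Hypothetical sketch of a 24 h time-averaged exposure calculation.

    def time_averaged_exposure(segments):
        """segments: list of (concentration_ug_m3, duration_h) pairs over 24 h."""
        total = sum(c * t for c, t in segments)
        hours = sum(t for _, t in segments)
        return total / hours

    # Illustrative diary: background 10 ug/m3, plus 2 h of walking that
    # resuspends coarse particles to 40 ug/m3 over carpet.
    baseline = time_averaged_exposure([(10.0, 24.0)])
    with_walking = time_averaged_exposure([(10.0, 22.0), (40.0, 2.0)])
    incremental = with_walking - baseline
    print(round(incremental, 2))  # → 2.5 ug/m3 incremental daily exposure
    ```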

  7. Tempo-Spatial Variations of Ambient Ozone-Mortality Associations in the USA: Results from the NMMAPS Data.

    PubMed

    Liu, Tao; Zeng, Weilin; Lin, Hualiang; Rutherford, Shannon; Xiao, Jianpeng; Li, Xing; Li, Zhihao; Qian, Zhengmin; Feng, Baixiang; Ma, Wenjun

    2016-08-26

    Although the health effects of ambient ozone have been widely assessed, their tempo-spatial variations remain unclear. We selected 20 communities (ten each from southern and northern USA) based on the US National Morbidity, Mortality, and Air Pollution Study (NMMAPS) dataset. A generalized linear model (GLM) was used to estimate the season-specific association between each 10 ppb (lag0-2 day average) increment in daily 8 h maximum ozone concentration and mortality in every community. The results showed that in the southern communities, a 10 ppb increment in ozone was linked to an increment of mortality of -0.07%, -0.17%, 0.40% and 0.27% in spring, summer, autumn and winter, respectively. For the northern communities, the excess risks (ERs) were 0.74%, 1.21%, 0.52% and -0.65% in the spring, summer, autumn and winter seasons, respectively. City-specific ozone-related mortality effects were positively related with latitude, but negatively related with seasonal average temperature in the spring, summer and autumn seasons. However, a reverse relationship was found in the winter. We concluded that there were different seasonal patterns of ozone effects on mortality between southern and northern US communities. Latitude and seasonal average temperature were identified as modifiers of the ambient ozone-related mortality risks.
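
    With a log-linear (Poisson) GLM of the kind used here, the excess risk for a 10 ppb ozone increment follows from the fitted per-ppb coefficient β as ER = (e^{10β} − 1) × 100%. A small sketch that round-trips one of the reported values (the coefficient is back-calculated for illustration, not taken from the paper):

    ```python
    import math

    def excess_risk_percent(beta, increment=10.0):
        """Percent change in mortality per `increment` (ppb) rise in ozone,
        from a log-linear (Poisson GLM) coefficient beta per ppb."""
        return (math.exp(beta * increment) - 1.0) * 100.0

    # ER of 1.21% per 10 ppb (summer, northern communities) implies:
    beta = math.log(1.0121) / 10.0
    print(round(excess_risk_percent(beta), 2))  # → 1.21
    ```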

  9. Remote measurement methods for 3-D modeling purposes using BAE Systems' Software

    NASA Astrophysics Data System (ADS)

    Walker, Stewart; Pietrzak, Arleta

    2015-06-01

    Efficient, accurate data collection from imagery is the key to an economical generation of useful geospatial products. Incremental developments of traditional geospatial data collection and the arrival of new image data sources cause new software packages to be created and existing ones to be adjusted to enable such data to be processed. In the past, BAE Systems' digital photogrammetric workstation, SOCET SET®, met fin de siècle expectations in data processing and feature extraction. Its successor, SOCET GXP®, addresses today's photogrammetric requirements and new data sources. SOCET GXP is an advanced workstation for mapping and photogrammetric tasks, with automated functionality for triangulation, Digital Elevation Model (DEM) extraction, orthorectification and mosaicking, feature extraction and creation of 3-D models with texturing. BAE Systems continues to add sensor models to accommodate new image sources, in response to customer demand. New capabilities added in the latest version of SOCET GXP facilitate modeling, visualization and analysis of 3-D features.

  10. A table of intensity increments.

    DOT National Transportation Integrated Search

    1966-01-01

    Small intensity increments can be produced by adding larger intensity increments. A table is presented covering the range of small intensity increments from 0.008682 through 6.020 dB in 60 large intensity increments of 1 dB.
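
    The quoted endpoints are consistent with in-phase (amplitude) addition of two signals: adding a signal n dB below a reference raises the combined level by 20·log10(1 + 10^(−n/20)) dB, which is 6.0206 dB at n = 0 and 0.008682 dB at n = 60. A sketch under that assumption (the report's actual construction may differ):

    ```python
    import math

    def increment_db(n_db):
        """Level rise (dB) from adding, in phase, a signal n_db below a
        reference (coherent amplitude addition); an assumed model chosen
        because it reproduces the table's quoted endpoints."""
        return 20.0 * math.log10(1.0 + 10.0 ** (-n_db / 20.0))

    # Reproduce the table's range from 60 large increments of 1 dB
    for n in (1, 30, 60):
        print(n, round(increment_db(n), 6))
    ```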

  11. Influence of time and length size feature selections for human activity sequences recognition.

    PubMed

    Fang, Hongqing; Chen, Long; Srinivasan, Raghavendiran

    2014-01-01

    In this paper, the Viterbi algorithm based on a hidden Markov model is applied to recognize activity sequences from observed sensor events. Alternative selections of the time feature values of sensor events and of the activity length size feature values are tested, and the resulting activity sequence recognition performance of the Viterbi algorithm is evaluated. The results show that selecting larger time feature values of sensor events and/or smaller activity length size feature values generates relatively better activity sequence recognition performance. © 2013 ISA. Published by ISA. All rights reserved.
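
    A minimal Viterbi decoder over a toy HMM illustrates the recognition step; the activities, sensor events, and probabilities below are illustrative only, not taken from the paper:

    ```python
    import numpy as np

    def viterbi(obs, start_p, trans_p, emit_p):
        """Most likely hidden-state sequence for an observed event sequence.

        obs     : list of observation indices
        start_p : (S,) initial state probabilities
        trans_p : (S, S) transition probabilities
        emit_p  : (S, O) emission probabilities
        """
        S, T = len(start_p), len(obs)
        logv = np.full((T, S), -np.inf)        # log-probability of best path
        back = np.zeros((T, S), dtype=int)     # backpointers
        logv[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
        for t in range(1, T):
            for s in range(S):
                scores = logv[t - 1] + np.log(trans_p[:, s])
                back[t, s] = np.argmax(scores)
                logv[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
        path = [int(np.argmax(logv[-1]))]
        for t in range(T - 1, 0, -1):          # trace backpointers
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Toy setup: states 0="sleep", 1="cook"; events 0="bedroom", 1="kitchen"
    start = np.array([0.6, 0.4])
    trans = np.array([[0.8, 0.2], [0.3, 0.7]])
    emit = np.array([[0.9, 0.1], [0.2, 0.8]])
    print(viterbi([0, 0, 1, 1], start, trans, emit))  # → [0, 0, 1, 1]
    ```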

  12. Adaptive runtime for a multiprocessing API

    DOEpatents

    Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2016-11-15

    A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.

  13. Adaptive runtime for a multiprocessing API

    DOEpatents

    Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.

    2016-10-11

    A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.

  14. A prospective randomized multicentre study of the impact of gallium-68 prostate-specific membrane antigen (PSMA) PET/CT imaging for staging high-risk prostate cancer prior to curative-intent surgery or radiotherapy (proPSMA study): clinical trial protocol.

    PubMed

    Hofman, Michael S; Murphy, Declan G; Williams, Scott G; Nzenza, Tatenda; Herschtal, Alan; Lourenco, Richard De Abreu; Bailey, Dale L; Budd, Ray; Hicks, Rodney J; Francis, Roslyn J; Lawrentschuk, Nathan

    2018-05-03

    Accurate staging of patients with prostate cancer (PCa) is important for therapeutic decision-making. Relapse after surgery or radiotherapy of curative intent is not uncommon and, in part, represents a failure of staging with current diagnostic imaging techniques to detect disease spread. Prostate-specific membrane antigen (PSMA) positron-emission tomography (PET)/computed tomography (CT) is a new whole-body scanning technique that enables visualization of PCa with high contrast. The hypotheses of this study are that: (i) PSMA-PET/CT has improved diagnostic performance compared with conventional imaging; (ii) PSMA-PET/CT should be used as a first-line diagnostic test for staging; (iii) the improved diagnostic performance of PSMA-PET/CT will result in significant management impact; and (iv) there are economic benefits if PSMA-PET/CT is incorporated into the management algorithm. The proPSMA trial is a prospective, multicentre study in which patients with untreated high-risk PCa will be randomized to gallium-68-PSMA-11 PET/CT or conventional imaging, consisting of CT of the abdomen/pelvis and bone scintigraphy with single-photon emission CT/CT. Patients eligible for inclusion are those with newly diagnosed PCa with select high-risk features, defined as International Society of Urological Pathology grade group ≥3 (primary Gleason grade 4, or any Gleason grade 5), prostate-specific antigen level ≥20 ng/mL or clinical stage ≥T3. Patients with negative, equivocal or oligometastatic disease on first-line imaging will cross over to receive the other imaging arm. The primary objective is to compare the accuracy of PSMA-PET/CT with that of conventional imaging for detecting nodal or distant metastatic disease. Histopathological, imaging and clinical follow-up at 6 months will define the primary endpoint according to a predefined scoring system. 
Secondary objectives include comparing management impact, the number of equivocal studies, the incremental value of second-line imaging in patients who cross over, the cost of each imaging strategy, radiation exposure, inter-observer agreement and safety of PSMA-PET/CT. Longer-term follow-up will also assess the prognostic value of a negative PSMA-PET/CT. This trial will provide data to establish whether PSMA-PET/CT should replace conventional imaging in the primary staging of select high-risk localized PCa, or whether it should be used to provide incremental diagnostic information in selected cases. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.

  15. Informative Feature Selection for Object Recognition via Sparse PCA

    DTIC Science & Technology

    2011-04-07

    constraint on images collected from low-power camera networks instead of high-end photography is that establishing wide-baseline feature correspondence of...variable selection tool for selecting informative features in the object images captured from low-resolution camera sensor networks. Firstly, we...More examples can be found in Figure 4 later. 3. Identifying Informative Features Classical PCA is a well established tool for the analysis of high

  16. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
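
    One simple way to realize correlation-based selection over learned weight vectors is greedy pruning: keep a hidden unit only if it is not too correlated with any unit already kept. This is a sketch of the general idea, not the paper's exact procedure; the threshold and data are illustrative:

    ```python
    import numpy as np

    def select_decorrelated(weights, threshold=0.9):
        """Greedy correlation-based pruning of learned filters.

        weights : (n_units, n_inputs) array of autoencoder weight vectors.
        Keeps a unit only if its absolute Pearson correlation with every
        already-kept unit stays below `threshold`.
        """
        kept = []
        for i in range(weights.shape[0]):
            if all(abs(np.corrcoef(weights[i], weights[j])[0, 1]) < threshold
                   for j in kept):
                kept.append(i)
        return kept

    rng = np.random.default_rng(1)
    w = rng.normal(size=(5, 64))
    w[3] = w[0] * 2.0 + 0.01 * rng.normal(size=64)  # near-duplicate of unit 0
    print(select_decorrelated(w))  # unit 3 is pruned as redundant
    ```

    Dropping near-duplicate filters like this is what cuts the cost of the subsequent global (convolutional) feature extraction.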

  17. Iris Crypts Influence Dynamic Changes of Iris Volume.

    PubMed

    Chua, Jacqueline; Thakku, Sri Gowtham; Tun, Tin A; Nongpiur, Monisha E; Tan, Marcus Chiang Lee; Girard, Michael J A; Wong, Tien Yin; Quah, Joanne Hui Min; Aung, Tin; Cheng, Ching-Yu

    2016-10-01

    To determine the association of iris surface features with iris volume change after physiologic pupil dilation in adults. Cross-sectional observational study. Chinese adults aged ≥ 50 years without ocular diseases. Digital iris photographs were taken from eyes of each participant and graded for crypts (by number and size) and furrows (by number and circumferential extent) following a standardized grading scheme. Iris color was measured objectively, using the Commission Internationale de l'Eclairage (CIE) L* color parameter (higher value denoting lighter iris). The anterior segment was imaged by swept-source optical coherence tomography (SS-OCT) (Casia; Tomey, Nagoya, Japan) under bright light and dark room conditions. Iris volumes in light and dark conditions were measured with custom semiautomated software, and the change in iris volume was quantified. Associations of the change in iris volume after pupil dilation with underlying iris surface features in right eyes were assessed using linear regression analysis. Iris volume change after physiologic pupil dilation from light to dark condition. A total of 65 Chinese participants (mean age, 59.8±5.7 years) had gradable data for iris surface features. In light condition, higher iris crypt grade was associated independently with smaller iris volume (β [change in iris volume in millimeters per crypt grade increment] = -1.43, 95% confidence interval [CI], -2.26 to -0.59; P = 0.001) and greater reduction of iris volume on pupil dilation (β [change in iris volume in millimeters per crypt grade increment] = 0.23, 95% CI, 0.06-0.40; P = 0.010), adjusting for age, gender, presence of corneal arcus, and change in pupil size. Iris furrows and iris color were not associated with iris volume in light condition or change in iris volume (all P > 0.05). Although few Chinese persons have multiple crypts on their irides, irides with more crypts were significantly thinner and lost more volume on pupil dilation. 
Given that the latter feature is known to be protective against acute angle-closure attack, the macroscopic and microscopic composition of the iris is likely a contributing factor to angle-closure disease. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  18. Cost-effectiveness of motivational intervention with significant others for patients with alcohol misuse.

    PubMed

    Shepard, Donald S; Lwin, Aung K; Barnett, Nancy P; Mastroleo, Nadine; Colby, Suzanne M; Gwaltney, Chad; Monti, Peter M

    2016-05-01

    To estimate the incremental cost, cost-effectiveness and benefit-cost ratio of incorporating a significant other (SO) into motivational intervention for alcohol misuse. We obtained economic data from the one year with the intervention in full operation for patients in a recent randomized trial. The underlying trial took place at a major urban hospital in the United States. The trial randomized 406 (68.7% male) eligible hazardous drinkers (196 during the economic study) admitted to the emergency department or trauma unit. The motivational interview condition consisted of one in-person session featuring personalized normative feedback. The significant other motivational interview condition comprised one joint session with the participant and SO in which the SO's perspective and support were elicited. We ascertained activities across 445 representative time segments through work sampling (including staff idle time), calculated the incremental cost per patient of incorporating an SO, expressed the results in 2014 US$, incorporated quality and mortality effects from a closely related trial and derived the cost per quality-adjusted life-year (QALY) gained. From a health system perspective, the incremental cost per patient of adding an SO was $341.09 [95% confidence interval (CI) = $244.44-437.74]. The incremental cost per year per hazardous drinker averted was $3623 (CI = $1777-22,709), the cost per QALY gained was $32,200 (CI = $15,800-201,700), and the benefit-cost ratio was 4.73 (95% CI = 0.7-9.66). If adding an SO into the intervention strategy were concentrated during the hours with highest risk or in a trauma unit, it would become even more cost-beneficial. Using criteria established by the World Health Organization (cost-effectiveness below the country's gross domestic product per capita), incorporating a significant other into a patient's motivational intervention for alcohol misuse is highly cost-effective. © 2015 Society for the Study of Addiction.
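
    The headline figures relate through the standard incremental cost-effectiveness ratio, ICER = ΔCost/ΔEffect. A tiny sketch that back-calculates the per-patient QALY gain implied by the reported numbers (the gain itself is derived here for illustration, not reported in the abstract):

    ```python
    def icer(delta_cost, delta_effect):
        """Incremental cost-effectiveness ratio: extra cost per extra unit
        of effect (here, per QALY) of the intervention vs. the comparator."""
        return delta_cost / delta_effect

    # $341.09 extra cost per patient at $32,200 per QALY implies a QALY
    # gain of roughly 341.09 / 32200 ~ 0.0106 per patient.
    delta_qaly = 341.09 / 32200
    print(round(icer(341.09, delta_qaly)))  # → 32200
    ```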

  19. Input-output-controlled nonlinear equation solvers

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph

    1988-01-01

    To upgrade the efficiency and stability of the successive substitution (SS) and Newton-Raphson (NR) schemes, the concept of input-output-controlled solvers (IOCS) is introduced. By employing the formal properties of the constrained version of the SS and NR schemes, the IOCS algorithm can handle indefiniteness of the system Jacobian, can maintain iterate monotonicity, and provide for separate control of load incrementation and iterate excursions, as well as having other features. To illustrate the algorithmic properties, the results for several benchmark examples are presented. These define the associated numerical efficiency and stability of the IOCS.
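
    A plain Newton-Raphson iteration with a cap on how far each iterate may move illustrates the kind of excursion control the IOCS concept generalizes. This is a generic scalar sketch under that assumption, not the paper's IOCS algorithm:

    ```python
    def newton_clamped(f, df, x0, max_step=0.5, tol=1e-10, max_iter=100):
        """Newton-Raphson with a cap on iterate excursions (a simple
        stand-in for input-output control; not the paper's IOCS itself)."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            step = -fx / df(x)
            step = max(-max_step, min(max_step, step))  # limit the excursion
            x += step
        return x

    # Solve x^3 - 2x - 5 = 0 (root near 2.0945515)
    root = newton_clamped(lambda x: x**3 - 2*x - 5,
                          lambda x: 3*x**2 - 2, x0=3.0)
    print(round(root, 6))  # → 2.094551
    ```

    Clamping the step sacrifices a little speed far from the root in exchange for stability, the same trade-off the IOCS makes when the system Jacobian is badly behaved.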

  20. MATERIALS SCIENCE: New Tigers in the Fuel Cell Tank.

    PubMed

    Service, R F

    2000-06-16

    After decades of incremental advances, a spurt of findings suggests that fuel cells that run on good old fossil fuels are almost ready for prime time. Although conventional ceramic cells, known as solid oxide fuel cells, require expensive heat-resistant materials, a new generation of SOFCs, including one featured on page 2031, converts hydrocarbons directly into electricity at lower temperatures. And a recent demonstration of a system of standard SOFCs large enough to light up more than 200 homes showed that it is the most efficient large-scale electrical generator ever designed.

  1. Reenacting the birth of an intron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellsten, Uffe; Aspden, Julie L.; Rio, Donald C.

    2011-07-01

    An intron is an extended genomic feature whose function requires multiple constrained positions - donor and acceptor splice sites, a branch point, a polypyrimidine tract and suitable splicing enhancers - that may be distributed over hundreds or thousands of nucleotides. New introns are therefore unlikely to emerge by incremental accumulation of functional sub-elements. Here we demonstrate that a functional intron can be created de novo in a single step by a segmental genomic duplication. This experiment recapitulates in vivo the birth of an intron that arose in the ancestral jawed vertebrate lineage nearly half a billion years ago.

  2. Real-Time Acquisition and Processing System (RTAPS) Version 1.1 Installation and User’s Manual.

    DTIC Science & Technology

    1986-08-01

    The language is incrementally compiled and procedure-oriented. It is run on an 8088 processor with 56K of available user RAM. The master board features...RTAPS/PC computers. The wiring configuration is shown in Figure 10 (switch/modem-port wiring; P5 or P6, *P6 recommended). $MAC...activated switch. The AXAC output port is physically connected to the modem input on the switch. The subchannels are the labeled terminal connections

  3. Semantic point cloud interpretation based on optimal neighborhoods, relevant features and efficient classifiers

    NASA Astrophysics Data System (ADS)

    Weinmann, Martin; Jutzi, Boris; Hinz, Stefan; Mallet, Clément

    2015-07-01

    3D scene analysis in terms of automatically assigning 3D points a respective semantic label has become a topic of great importance in photogrammetry, remote sensing, computer vision and robotics. In this paper, we address the issue of how to increase the distinctiveness of geometric features and select the most relevant ones among these for 3D scene analysis. We present a new, fully automated and versatile framework composed of four components: (i) neighborhood selection, (ii) feature extraction, (iii) feature selection and (iv) classification. For each component, we consider a variety of approaches which allow applicability in terms of simplicity, efficiency and reproducibility, so that end-users can easily apply the different components and do not require expert knowledge in the respective domains. In a detailed evaluation involving 7 neighborhood definitions, 21 geometric features, 7 approaches for feature selection, 10 classifiers and 2 benchmark datasets, we demonstrate that the selection of optimal neighborhoods for individual 3D points significantly improves the results of 3D scene analysis. Additionally, we show that the selection of adequate feature subsets may even further increase the quality of the derived results while significantly reducing both processing time and memory consumption.
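
    Geometric features of the covariance-eigenvalue kind evaluated in such frameworks (linearity, planarity, sphericity from the neighborhood covariance eigenvalues λ1 ≥ λ2 ≥ λ3) can be sketched as follows; the paper's full 21-feature set is not reproduced here:

    ```python
    import numpy as np

    def geometric_features(points):
        """Linearity, planarity and sphericity of a 3D point neighborhood,
        from the eigenvalues l1 >= l2 >= l3 of its covariance matrix."""
        cov = np.cov(points.T)
        l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))  # ascending order
        return ((l1 - l2) / l1,   # linearity: one dominant direction
                (l2 - l3) / l1,   # planarity: two dominant directions
                l3 / l1)          # sphericity: isotropic scatter

    # Synthetic neighborhood: points along a line, with slight noise
    rng = np.random.default_rng(2)
    line = np.c_[rng.uniform(size=200), np.zeros(200), np.zeros(200)]
    line += 0.001 * rng.normal(size=(200, 3))
    lin, pla, sph = geometric_features(line)
    print(lin > 0.9, sph < 0.01)  # a line scores high linearity
    ```

    Computing such features over an optimally sized neighborhood per point, rather than a fixed one, is what the framework's first component tunes.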

  4. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect of machine learning. It entails searching for an optimal subset of a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as part of the filter method. For the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input to an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.

  5. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    NASA Astrophysics Data System (ADS)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection needs to be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection on microarray datasets. This study reveals that the algorithms have yielded interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.

  6. Joint Transmit Antenna Selection and Power Allocation for ISDF Relaying Mobile-to-Mobile Sensor Networks

    PubMed Central

    Xu, Lingwei; Zhang, Hao; Gulliver, T. Aaron

    2016-01-01

    The outage probability (OP) performance of multiple-relay incremental-selective decode-and-forward (ISDF) relaying mobile-to-mobile (M2M) sensor networks with transmit antenna selection (TAS) over N-Nakagami fading channels is investigated. Exact closed-form OP expressions for both optimal and suboptimal TAS schemes are derived. The power allocation problem is formulated to determine the optimal division of transmit power between the broadcast and relay phases. The OP performance under different conditions is evaluated via numerical simulation to verify the analysis. These results show that the optimal TAS scheme has better OP performance than the suboptimal scheme. Further, the power allocation parameter has a significant influence on the OP performance. PMID:26907282

  7. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  8. The incremental costs of recommended therapy versus real world therapy in type 2 diabetes patients

    PubMed Central

    Crivera, C.; Suh, D. C.; Huang, E. S.; Cagliero, E.; Grant, R. W.; Vo, L.; Shin, H. C.; Meigs, J. B.

    2008-01-01

    Background The goals of diabetes management have evolved over the past decade to become the attainment of near-normal glucose and cardiovascular risk factor levels. Improved metabolic control is achieved through optimized medication regimens, but costs specifically associated with such optimization have not been examined. Objective To estimate the incremental medication cost of providing optimal therapy to reach recommended goals versus actual therapy in patients with type 2 diabetes. Methods We randomly selected the charts of 601 type 2 diabetes patients receiving care from the outpatient clinics of Massachusetts General Hospital March 1, 1996–August 31, 1997 and abstracted clinical and medication data. We applied treatment algorithms based on 2004 clinical practice guidelines for hyperglycemia, hyperlipidemia, and hypertension to patients’ current medication therapy to determine how current medication regimens could be improved to attain recommended treatment goals. Four clinicians and three pharmacists independently applied the algorithms and reached consensus on recommended therapies. Mean incremental medication costs, the cost differences between current and recommended therapies, per patient (expressed in 2004 dollars) were calculated with 95% bootstrap confidence intervals (CIs). Results Mean patient age was 65 years old, mean duration of diabetes was 7.7 years, 32% had ideal glucose control, 25% had ideal systolic blood pressure, and 24% had ideal low-density lipoprotein cholesterol. Care for these diabetes patients was similar to that observed in recent national studies. If treatment algorithm recommendations were applied, the average annual medication cost/patient would increase from $1525 to $2164. Annual incremental costs/patient increased by $168 (95% CI $133–$206) for antihyperglycemic medications, $75 ($57–$93) for antihypertensive medications, $392 ($354–$434) for antihyperlipidemic medications, and $3 ($3–$4) for aspirin prophylaxis. 
Yearly incremental cost of recommended laboratory testing ranged from $77–$189/patient. Limitations Although baseline data come from the clinics of a single academic institution, collected in 1997, the care of these diabetes patients was remarkably similar to care recently observed nationally. In addition, the data are dependent on the medical record and may not accurately reflect patients’ actual experiences. Conclusion Average yearly incremental cost of optimizing drug regimens to achieve recommended treatment goals for type 2 diabetes was approximately $600/patient. These results provide valuable input for assessing the cost-effectiveness of improving comprehensive diabetes care. PMID:17076990
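
    The figures in this abstract can be cross-checked with a few lines of arithmetic: the per-category increments should roughly match the difference between the two reported annual totals.

```python
# Cross-check of the incremental medication cost figures in the abstract.
current_total = 1525      # average annual medication cost/patient, current therapy
recommended_total = 2164  # average annual cost/patient, recommended therapy

increments = {
    "antihyperglycemic": 168,
    "antihypertensive": 75,
    "antihyperlipidemic": 392,
    "aspirin prophylaxis": 3,
}

component_sum = sum(increments.values())              # 638
overall_increase = recommended_total - current_total  # 639

# The two estimates agree to within rounding, consistent with the
# abstract's "approximately $600/patient" conclusion.
```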

  9. Improved sparse decomposition based on a smoothed L0 norm using a Laplacian kernel to select features from fMRI data.

    PubMed

    Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying

    2015-04-30

    Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using the smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method used the Laplacian function to approximate the L0 norm of sources. Results on simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels selected by LSL0 was higher than that using SL0 in both simulated and real fMRI experiments. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and the t-test for fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise levels and in feature selection. Moreover, LSL0 and SL0 showed better performance than ICA and the t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
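
    The core idea, approximating the discontinuous L0 norm with a smooth kernel whose sharpness is controlled by a parameter sigma, can be sketched in a few lines. The exact functional form used by LSL0 is defined in the paper; the Laplacian-style kernel below is an illustrative assumption:

```python
import math

def smoothed_l0(s, sigma):
    """Approximate the L0 norm (count of nonzeros) of s with a Laplacian
    kernel: each term exp(-|s_i|/sigma) tends to 1 for s_i == 0 and to 0
    otherwise, so the sum approaches the nonzero count as sigma shrinks."""
    return len(s) - sum(math.exp(-abs(x) / sigma) for x in s)

s = [0.0, 0.0, 3.0, -1.5, 0.0]   # true L0 norm = 2

# As sigma decreases, the smooth surrogate approaches the true count.
approx = {sigma: smoothed_l0(s, sigma) for sigma in (1.0, 0.1, 0.01)}
```

The smooth surrogate is what makes gradient-based optimization of the otherwise combinatorial sparsity objective tractable.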

  10. Feature-Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex.

    PubMed

    Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L

    2017-05-24

    Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (rnoise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on rnoise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in rnoise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (rnoise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on rnoise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the distractor feature as well. We suggest that these effects represent an efficient process by which sensory cortex simultaneously enhances relevant information and suppresses irrelevant information. Copyright © 2017 the authors.
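
    Noise correlation itself is simply the trial-to-trial correlation of two neurons' responses to a repeated stimulus. A minimal sketch with hypothetical spike counts (the data below are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical spike counts from two neurons over repeats of the *same*
# stimulus; shared trial-to-trial fluctuations produce a positive rnoise.
neuron_a = [12, 15, 11, 14, 16, 10]
neuron_b = [22, 26, 20, 25, 27, 19]

r_noise = pearson(neuron_a, neuron_b)
```

Because the stimulus is fixed across trials, any correlation left over reflects shared internal variability rather than stimulus drive, which is exactly what makes rnoise sensitive to attentional state.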

  11. Feature-Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex

    PubMed Central

    2017-01-01

    Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (rnoise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on rnoise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in rnoise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (rnoise) in rhesus macaque A1 during task performance. 
Unlike previous studies showing that the effect of attention on rnoise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the distractor feature as well. We suggest that these effects represent an efficient process by which sensory cortex simultaneously enhances relevant information and suppresses irrelevant information. PMID:28432139

  12. [Cost-effectiveness analysis of celecoxib versus non-selective non-steroidal anti-inflammatory drug therapy for the treatment of osteoarthritis in Spain: A current perspective].

    PubMed

    De Lossada, A; Oteo-Álvaro, Á; Giménez, S; Oyagüez, I; Rejas, J

    2016-01-01

    To assess the cost-effectiveness of celecoxib and non-selective non-steroidal anti-inflammatory drugs for the treatment of osteoarthritis in clinical practice in Spain. A decision-tree model using the distribution, doses, treatment duration and incidence of gastrointestinal (GI) and cardiovascular (CV) events observed in the pragmatic PROBE-designed «GI-Reasons» trial was used for the cost-effectiveness analysis. Effectiveness was expressed in terms of events averted and quality-adjusted life-years (QALY) gained. QALY were calculated based on the utility decrement in case of any adverse event reported in the GI-Reasons trial. The National Health System perspective in Spain was applied; cost calculations included current prices of drugs plus the cost of adverse events. The analysis was expressed as an incremental cost-effectiveness ratio per QALY gained and per event averted. One-way and probabilistic analyses were performed. Compared with non-selective non-steroidal anti-inflammatory drugs, at current prices, celecoxib treatment had higher overall treatment costs (€201 vs. €157, respectively). However, celecoxib was associated with a slight increase in QALY gain and a significantly lower incidence of gastrointestinal events (p<.001), with a mean incremental cost-effectiveness ratio of €13,286 per QALY gained and €4,471 per event averted. Sensitivity analyses were robust and confirmed the results of the base case. Celecoxib at its current price may be considered a cost-effective alternative to non-selective non-steroidal anti-inflammatory drugs in the treatment of osteoarthritis in daily practice in the Spanish NHS. Copyright © 2015 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Publicado por Elsevier España, S.L.U. All rights reserved.

  13. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

    PubMed

    Donoho, David; Jin, Jiashun

    2008-09-30

    In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i))/√((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT.
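
    The thresholding rule in this abstract is concrete enough to sketch directly. The helper below (our own code, not the authors') sorts the P-values, maximizes the stated HC objective, and converts the maximizing P-value back to an absolute Z-score via the inverse normal CDF:

```python
from statistics import NormalDist

def hc_threshold(pvalues):
    """Higher-criticism threshold as described in the abstract: sort the
    two-sided P-values, maximize (i/p - p_(i)) / sqrt((i/p)(1 - i/p))
    over i, and return the |Z| whose two-sided P-value is the maximizer.
    The i = p term is skipped to avoid division by zero."""
    p = len(pvalues)
    order = sorted(pvalues)
    best_i, best_hc = 1, float("-inf")
    for i, pi in enumerate(order, start=1):
        frac = i / p
        if 0 < frac < 1:
            hc = (frac - pi) / ((frac * (1 - frac)) ** 0.5)
            if hc > best_hc:
                best_i, best_hc = i, hc
    # |z| such that the two-sided P-value equals the selected order statistic
    return NormalDist().inv_cdf(1 - order[best_i - 1] / 2)

# Two small P-values stand out against a mostly-null background.
thr = hc_threshold([0.001, 0.002, 0.5, 0.6, 0.7, 0.8])
```

Features whose absolute Z-scores exceed `thr` would be retained; the rest are discarded.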

  14. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak

    PubMed Central

    Donoho, David; Jin, Jiashun

    2008-01-01

    In important application fields today (genomics and proteomics are examples), selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, …, p, let π_i denote the two-sided P-value associated with the ith feature Z-score and π_(i) denote the ith order statistic of the collection of P-values. The HC threshold is the absolute Z-score corresponding to the P-value maximizing the HC objective (i/p − π_(i))/√((i/p)(1 − i/p)). We consider a rare/weak (RW) feature model, where the fraction of useful features is small and the useful features are each too weak to be of much use on their own. HC thresholding (HCT) has interesting behavior in this setting, with an intimate link between maximizing the HC objective and minimizing the error rate of the designed classifier, and very different behavior from popular threshold selection procedures such as false discovery rate thresholding (FDRT). In the most challenging RW settings, HCT uses an unconventionally low threshold; this keeps the missed-feature detection rate under better control than FDRT and yields a classifier with improved misclassification performance. Replacing cross-validated threshold selection in the popular Shrunken Centroid classifier with the computationally less expensive and simpler HCT reduces the variance of the selected threshold and the error rate of the constructed classifier. Results on standard real datasets and in asymptotic theory confirm the advantages of HCT. PMID:18815365

  15. Application-Dedicated Selection of Filters (ADSF) using covariance maximization and orthogonal projection.

    PubMed

    Hadoux, Xavier; Kumar, Dinesh Kant; Sarossy, Marc G; Roger, Jean-Michel; Gorretta, Nathalie

    2016-05-19

    Visible and near-infrared (Vis-NIR) spectra are generated by the combination of numerous low resolution features. Spectral variables are thus highly correlated, which can cause problems for selecting the most appropriate ones for a given application. Some decomposition bases such as Fourier or wavelet generally help highlight spectral features that are important, but are by nature constrained to have both positive and negative components. Thus, in addition to complicating the selected features' interpretability, it impedes their use for application-dedicated sensors. In this paper we have proposed a new method for feature selection: Application-Dedicated Selection of Filters (ADSF). This method relaxes the shape constraint by enabling the selection of any type of user-defined custom features. By considering only relevant features, based on the underlying nature of the data, high regularization of the final model can be obtained, even in the small sample size context often encountered in spectroscopic applications. For larger scale deployment of application-dedicated sensors, these predefined feature constraints can lead to application specific optical filters, e.g., lowpass, highpass, bandpass or bandstop filters with positive-only coefficients. In a similar fashion to Partial Least Squares, ADSF successively selects features using covariance maximization and deflates their influences using orthogonal projection in order to optimally tune the selection to the data with limited redundancy. ADSF is well suited for spectroscopic data as it can deal with large numbers of highly correlated variables in supervised learning, even with many correlated responses. Copyright © 2016 Elsevier B.V. All rights reserved.
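
    The select-then-deflate loop described here (covariance maximization followed by orthogonal projection, as in Partial Least Squares) can be sketched on plain lists. All function and variable names below are ours, and the sketch omits the filter-shape constraints that distinguish ADSF proper:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def select_and_deflate(features, y, k):
    """Pick the candidate feature whose covariance with the response is
    largest in magnitude, then orthogonally project the selected feature
    out of every remaining candidate (deflation), and repeat k times.
    Deflation limits redundancy among successive selections."""
    feats = {name: center(f) for name, f in features.items()}
    y = center(y)
    selected = []
    for _ in range(k):
        name = max(feats, key=lambda n: abs(dot(feats[n], y)))
        s = feats.pop(name)
        selected.append(name)
        norm2 = dot(s, s)
        if norm2 == 0:
            continue
        # Remove the component along s from each remaining candidate.
        for n, f in feats.items():
            coef = dot(f, s) / norm2
            feats[n] = [a - coef * b for a, b in zip(f, s)]
    return selected

# "B" covaries most strongly with y; "A" is collinear with "B", so after
# deflation it carries no new information.
selected = select_and_deflate(
    {"A": [1, 2, 3, 4], "B": [2, 4, 6, 8], "C": [4, 1, 3, 2]},
    [1, 2, 3, 4], 2)
```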

  16. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  17. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  18. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  19. 48 CFR 3452.232-71 - Incremental funding.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    As prescribed in 3432.705-2, insert the following provision in solicitations if a cost-reimbursement contract using incremental funding is contemplated: Incremental Funding...

  20. Health care resource use, health care expenditures and absenteeism costs associated with osteoarthritis in US healthcare system.

    PubMed

    Menon, J; Mishra, P

    2018-04-01

    We determined incremental health care resource utilization, incremental health care expenditures, incremental absenteeism, and incremental absenteeism costs associated with osteoarthritis. The Medical Expenditure Panel Survey (MEPS) for 2011 was used as the data source. Individuals 18 years or older and employed during 2011 were eligible for inclusion in the sample for analyses. Individuals with osteoarthritis were identified based on ICD-9-CM codes. Incremental health care resource utilization included annual hospitalization, hospital days, emergency room visits and outpatient visits. Incremental health expenditures included annual inpatient, outpatient, emergency room, medications, miscellaneous and annual total expenditures. Of the total sample, 1354 individuals were diagnosed with osteoarthritis and were compared with individuals without osteoarthritis. Incremental resource utilization, expenditures, absenteeism and absenteeism costs were estimated using regression models, adjusting for age, gender, region, marital status, insurance coverage, comorbidities, anxiety, asthma, hypertension and hyperlipidemia. Regression models revealed incremental mean annual resource use associated with osteoarthritis of 0.07 hospitalizations, equal to 7 additional hospitalizations per 100 osteoarthritic patients annually, and 3.63 outpatient visits, equal to 363 additional visits per 100 osteoarthritic patients annually. Mean annual incremental total expenditures associated with osteoarthritis were $2046. Annually, mean incremental expenditures were largest for inpatient expenditures at $826, followed by mean incremental outpatient expenditures of $659, and mean incremental medication expenditures of $325. Mean annual incremental absenteeism was 2.2 days and mean annual incremental absenteeism costs were $715.74. Total direct expenditures were estimated at $41.7 billion.
Osteoarthritis was associated with significant incremental health care resource utilization, expenditures, absenteeism and absenteeism costs. Copyright © 2017 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  1. Study on Plastic Deformation Characteristics of Shot Peening of Ni-Based Superalloy GH4079

    NASA Astrophysics Data System (ADS)

    Zhong, L. Q.; Liang, Y. L.; Hu, H.

    2017-09-01

    In this paper, an X-ray stress diffractometer, a surface roughness tester, a field emission scanning electron microscope (SEM), and a dynamic ultra-small microhardness tester were used to measure changes in the surface residual stress, roughness, topography, and surface hardness of GH4079 superalloy processed by metallographic grinding, turning, metallographic grinding + shot peening, and turning + shot peening. We analyzed the effects of the shot peening parameters, and of the surface state before shot peening, on the plastic deformation characteristics produced by shot peening. Results show that shot peening increased the surface residual compressive stress, surface roughness, and surface hardness of GH4079 superalloy; these increments grew with increasing shot peening intensity, time, pressure, and shot hardness, whereas the depth of the hardened layer was not affected considerably. The greater the degree of plastic deformation in the surface state before shot peening, the smaller the increments in surface residual compressive stress, surface roughness, and surface hardness induced by shot peening.

  2. Real-time probabilistic covariance tracking with efficient model update.

    PubMed

    Wu, Yi; Cheng, Jian; Wang, Jinqiao; Lu, Hanqing; Wang, Jun; Ling, Haibin; Blasch, Erik; Bai, Li

    2012-05-01

    The recently proposed covariance region descriptor has been proven robust and versatile for a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with a novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with only O(1) computational complexity, resulting in a real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as the temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
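
    The constant-cost incremental update at the heart of such trackers can be illustrated with the standard Welford-style online covariance estimator. ICTL's low-dimensional tensor representation is more elaborate, but the building block is the same idea: fold one new observation into the running statistics without revisiting old data.

```python
class OnlineCovariance:
    """O(d^2)-per-observation incremental update of a mean vector and
    covariance matrix (Welford-style): no stored history, constant cost
    per new sample regardless of how many samples came before."""

    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        self.c = [[0.0] * dim for _ in range(dim)]  # co-moment sums

    def update(self, x):
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]   # vs old mean
        self.mean = [mi + di / self.n for mi, di in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]  # vs new mean
        for i in range(len(x)):
            for j in range(len(x)):
                self.c[i][j] += delta[i] * delta2[j]

    def covariance(self):
        # Population covariance; divide by n - 1 for the sample estimate.
        return [[v / self.n for v in row] for row in self.c]

oc = OnlineCovariance(2)
for x in [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]:
    oc.update(x)
cov = oc.covariance()   # matches the batch covariance of the same data
```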

  3. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
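
    The log-Euclidean metric maps symmetric positive definite (SPD) matrices through the matrix logarithm and then measures ordinary Euclidean (Frobenius) distance in the resulting vector space. For diagonal SPD matrices the matrix log reduces to elementwise logs, which keeps a sketch short; general SPD matrices require an eigendecomposition:

```python
import math

def log_euclidean_distance_diag(a, b):
    """Log-Euclidean distance d(A, B) = ||log A - log B||_F, sketched for
    the diagonal case: a and b hold the diagonal entries of two SPD
    matrices, so the matrix logarithm is just an elementwise log."""
    return math.sqrt(sum((math.log(x) - math.log(y)) ** 2
                         for x, y in zip(a, b)))

# Diagonals of two SPD (e.g. covariance) matrices.
d = log_euclidean_distance_diag([1.0, 4.0], [math.e, 4.0])
# log differences are (0 - 1, log 4 - log 4), so d = 1.0
```

Working in the log domain is what lets the paper treat covariance descriptors as ordinary vectors for incremental subspace learning.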

  4. Building Healthcare Capacity in Pediatric Neurosurgery and Psychiatry in a Post-Soviet System: Ukraine.

    PubMed

    Romach, Myroslava K; Rutka, James T

    2018-03-01

    Many academic centers in North America are initiating global partnerships to build physician capacity in resource-poor countries. An opportunity arose to develop a pediatric program (Ukraine Paediatric Fellowship Program, UPFP) in Ukraine, a large European country in transition from a Soviet/communist political and social system. This entailed dealing with a centralized and rigid healthcare system based on the Semashko model of the former Soviet Union. Our capacity-building model has several key features: endowed philanthropic funding for sustainability, bilateral exchange of knowledge, a focus primarily on pediatric brain disorders, and team building. Centers for partnering are selected on the basis of need, receptivity to change, and participants' fluency in English. Ukrainian physicians attend month-long observerships in Toronto, and biannual teaching visits are conducted by Canadian clinicians. Over 5 years, 7 teaching visits have taken place, and 20 physicians have trained at SickKids Hospital in Toronto. Six Ukrainian children's hospitals are now collaborating with UPFP. New surgical procedures have been introduced, such as endoscopic ventriculostomy and corpus callosotomy. Patient referrals to regional institutions have increased, and new projects that affect fetal and infant neurodevelopment have been initiated (e.g., treatment of perinatal maternal depression and folic acid fortification of flour). Ukrainian participants rate the program highly in their evaluations. In a short time, UPFP has had considerable success in increasing physician capacity for improved pediatric care in regions of Ukraine. The keys to success have included focusing locally, selecting trustworthy partners, building incrementally, and creating interspecialty synergies. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Multisensor-based real-time quality monitoring by means of feature extraction, selection and modeling for Al alloy in arc welding

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben

    2015-08-01

    Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding processes. This paper mainly focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by means of analyzing arc spectrum, sound and voltage signals. Based on the developed algorithms in the time and frequency domains, 41 feature parameters were successively extracted from these signals to characterize the welding process and seam quality. Then, the proposed feature selection approach, i.e., a hybrid Fisher-based filter and wrapper, was successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensions. Finally, the optimal feature subset with 19 features was selected to obtain the highest accuracy, i.e., 94.72%, using the established classification model. This study provides a guideline for feature extraction, selection and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.

  6. SVM-RFE based feature selection and Taguchi parameters optimization for multiclass SVM classifier.

    PubMed

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W M; Li, R K; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases.
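
    The SVM-RFE loop (train a linear classifier, drop the feature with the smallest absolute weight, retrain, repeat) can be sketched with a tiny perceptron standing in for the linear SVM. This illustrates the elimination schedule only, not the study's implementation:

```python
def train_linear(X, y, epochs=50, lr=0.1):
    """Tiny perceptron-style trainer standing in for a linear SVM; RFE
    only needs *some* linear classifier's weight vector to rank features."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):            # yi in {-1, +1}
            score = sum(wj * xj for wj, xj in zip(w, xi))
            if yi * score <= 0:             # misclassified: update weights
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
    return w

def rfe_rank(X, y):
    """Recursive feature elimination: retrain on the surviving features
    and drop the one with the smallest |weight|.  Returns feature indices
    from first eliminated (least useful) to last survivor (most useful)."""
    active = list(range(len(X[0])))
    eliminated = []
    while len(active) > 1:
        Xa = [[row[j] for j in active] for row in X]
        w = train_linear(Xa, y)
        worst = min(range(len(active)), key=lambda k: abs(w[k]))
        eliminated.append(active.pop(worst))
    eliminated.append(active[0])
    return eliminated

# Toy data: feature 0 is weak noise, feature 1 carries the label.
X = [[0.5, 1.0], [0.2, 1.2], [0.4, -1.1], [0.1, -0.9]]
y = [1, 1, -1, -1]
ranking = rfe_rank(X, y)   # feature 0 should be eliminated first
```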

  7. SVM-RFE Based Feature Selection and Taguchi Parameters Optimization for Multiclass SVM Classifier

    PubMed Central

    Huang, Mei-Ling; Hung, Yung-Hsiang; Lee, W. M.; Li, R. K.; Jiang, Bo-Ru

    2014-01-01

    Recently, the support vector machine (SVM) has shown excellent performance in classification and prediction and is widely used in disease diagnosis and medical assistance. However, SVM only functions well on two-group classification problems. This study combines feature selection and SVM recursive feature elimination (SVM-RFE) to investigate the classification accuracy of multiclass problems for the Dermatology and Zoo databases. The Dermatology dataset contains 33 feature variables, 1 class variable, and 366 testing instances; the Zoo dataset contains 16 feature variables, 1 class variable, and 101 testing instances. The feature variables in the two datasets were sorted in descending order by explanatory power, and different feature sets were selected by SVM-RFE to explore classification accuracy. Meanwhile, the Taguchi method was combined with the SVM classifier to optimize the parameters C and γ and increase classification accuracy for multiclass classification. The experimental results show that the classification accuracy can be more than 95% after SVM-RFE feature selection and Taguchi parameter optimization for the Dermatology and Zoo databases. PMID:25295306

  8. Filament wound data base development, revision 1, appendix A

    NASA Technical Reports Server (NTRS)

    Sharp, R. Scott; Braddock, William F.

    1985-01-01

    Data are presented in tabular form for the High Performance Nozzle Increments, Filament Wound Case (FWC) Systems Tunnel Increments, Steel Case Systems Tunnel Increments, FWC Stiffener Rings Increments, Steel Case Stiffener Rings Increments, FWC External Tank (ET) Attach Ring Increments, Steel Case ET Attach Ring Increments, and Data Tape 8. The High Performance Nozzle Increments are also presented in graphical form. The tabular data consist of six-component force and moment coefficients as they vary with angle of attack at a specific Mach number and roll angle. The six coefficients are normal force, pitching moment, side force, yawing moment, axial force, and rolling moment. The graphical data for the High Performance Nozzle Increments consist of a plot of a coefficient increment as a function of angle of attack at a specific Mach number and at a roll angle of 0 deg.

  9. Feature selection in feature network models: finding predictive subsets of features with the Positive Lasso.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2008-05-01

    A set of features is the basis for the network representation of proximity data achieved by feature network models (FNMs). Features are binary variables that characterize the objects in an experiment, with some measure of proximity as the response variable. Sometimes features are provided by theory and play an important role in the construction of the experimental conditions. In some research settings, the features are not known a priori. This paper shows how to generate features in this situation and how to select an adequate subset of features that strikes a good compromise between model fit and model complexity, using a new version of least angle regression that restricts coefficients to be non-negative, called the Positive Lasso. It will be shown that features can be generated efficiently with Gray codes that are naturally linked to the FNMs. The model selection strategy makes use of the fact that an FNM can be considered a univariate multiple regression model. A simulation study shows that the proposed strategy leads to satisfactory results if the number of objects is less than or equal to 22. If the number of objects is larger than 22, the number of features selected by our method exceeds the true number of features in some conditions.
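A non-negativity constraint simplifies the Lasso considerably: when every coefficient must satisfy w >= 0, the L1 penalty reduces to a linear term, and coordinate descent has a closed-form clipped update. The following is a minimal sketch of that idea on hypothetical data, not the paper's least-angle-regression implementation.

```python
import numpy as np

def positive_lasso(X, y, lam, n_iter=200):
    """Coordinate descent for min 0.5*||y - Xw||^2 + lam*sum(w), w >= 0.
    With w >= 0 the L1 penalty is linear, giving a simple clipped update."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r
            w[j] = max(0.0, (rho - lam) / col_sq[j])
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
# feature 2 is irrelevant and should be driven exactly to zero
y = 1.0 * X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)
w = positive_lasso(X, y, lam=5.0)
```

Because irrelevant coefficients are clipped at the boundary, the solution is exactly sparse, which is what makes the Positive Lasso usable for feature (subset) selection.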

  10. A Filter Feature Selection Method Based on MFA Score and Redundancy Excluding and Its Application to Tumor Gene Expression Data Analysis.

    PubMed

    Li, Jiangeng; Su, Lei; Pang, Zenan

    2015-12-01

    Feature selection techniques have been widely applied to tumor gene expression data analysis in recent years. A filter feature selection method named marginal Fisher analysis score (MFA score), which is based on graph embedding, has been proposed, and it has been widely used mainly because it is superior to Fisher score. Considering the heavy redundancy in gene expression data, we proposed a new filter feature selection technique in this paper. It is named MFA score+ and is based on MFA score and redundancy excluding. We applied it to an artificial dataset and eight tumor gene expression datasets to select important features, and then used a support vector machine as the classifier to classify the samples. Compared with MFA score, the t-test, and Fisher score, it achieved higher classification accuracy.
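The "score plus redundancy excluding" recipe can be sketched generically: rank features by a relevance score, then walk down the ranking and skip any feature that is too correlated with one already chosen. Since the abstract gives no MFA score formula, the sketch below uses the two-class Fisher score as an assumed stand-in; data and thresholds are hypothetical.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score (two-class): squared difference of the
    class means over the summed within-class variances."""
    s = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        s[j] = (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)
    return s

def select_non_redundant(X, score, max_corr=0.8, k=3):
    """Greedy: take features by descending score, skipping any feature whose
    absolute correlation with an already chosen one exceeds max_corr."""
    chosen = []
    for j in np.argsort(score)[::-1]:
        if all(abs(np.corrcoef(X[:, j], X[:, c])[0, 1]) <= max_corr for c in chosen):
            chosen.append(int(j))
        if len(chosen) == k:
            break
    return chosen

rng = np.random.default_rng(2)
n = 300
x0 = rng.normal(size=n)
x2 = rng.normal(size=n)
# feature 1 is a near-duplicate of feature 0; feature 3 is pure noise
X = np.column_stack([x0, x0 + 0.01 * rng.normal(size=n), x2, rng.normal(size=n)])
y = (x0 + x2 > 0).astype(int)
chosen = select_non_redundant(X, fisher_score(X, y), max_corr=0.8, k=2)
```

The redundant twin of the top gene is excluded, so the two slots go to genuinely complementary features, which is the behaviour MFA score+ targets in gene expression data.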

  11. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has led to the inclusion of a huge amount of information on the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi-square feature selection methods.
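A toy version of ACO-based feature selection keeps one pheromone value per feature: ants sample subsets with inclusion probabilities proportional to pheromone, subsets are scored by a wrapped classifier, and the best subset found so far deposits new pheromone. This simplified sketch uses a nearest-centroid fitness proxy and synthetic data; it is not the paper's algorithm nor its C4.5/naive Bayes/k-NN wrappers.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(X, y, feats):
    """Cheap wrapper fitness: nearest-centroid accuracy on a held-out half."""
    if not feats:
        return 0.0
    half = len(y) // 2
    Xtr, Xte = X[:half][:, feats], X[half:][:, feats]
    ytr, yte = y[:half], y[half:]
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

def aco_select(X, y, n_ants=10, n_iter=20, evap=0.8, q=0.4):
    """Each ant includes feature j with probability proportional to its
    pheromone; the best-so-far subset reinforces its features' pheromone."""
    p = X.shape[1]
    tau = np.ones(p)
    best = list(range(p))
    best_fit = fitness(X, y, best)
    for _ in range(n_iter):
        for _ in range(n_ants):
            include = rng.random(p) < q * p * tau / tau.sum()
            feats = list(np.where(include)[0])
            fit = fitness(X, y, feats)
            if fit > best_fit:
                best, best_fit = feats, fit
        tau = evap * tau          # pheromone evaporation
        tau[best] += best_fit     # deposit on the best subset's features
    return best, best_fit

n = 240
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[y == 1, 0] += 1.5   # features 0 and 1 are informative, the rest are noise
X[y == 1, 1] += 1.0
best, best_fit = aco_select(X, y)
```

Evaporation plus best-subset reinforcement gradually biases the ants toward subsets that score well, which is the core positive-feedback mechanism of ACO.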

  12. Overview of PACS

    NASA Astrophysics Data System (ADS)

    Vanden Brink, John A.

    1995-08-01

    Development of the DICOM standard and incremental developments in workstation, network, compression, archiving, and digital x-ray technology have produced cost effective image communication possibilities for selected medical applications. The emerging markets include modality PACS, mini PACS, and teleradiology. Military and VA programs lead the way in the move to adopt PACS technology. Commercial markets for PACS components and PAC systems are at LR400 million growing to LR500 million in 1996.

  13. Joint Biological Standoff Detection System increment II: Field Demonstration - SINBAHD Performances

    DTIC Science & Technology

    2007-12-01

    of a dispersive element and a range-gated ICCD that limits the spectral information within the selected volume. This technique has showed an...bioaerosols. This LIF signal is spectrally collected by the combination of a dispersive element and a range-gated ICCD that records spectral...2001 in order to underline the robustness of the spectral signature of a particular biomaterial but of different origin, preparation and dispersion

  14. The impact of speciated VOCs on regional ozone increment derived from measurements at the UK EMEP supersites between 1999 and 2012

    NASA Astrophysics Data System (ADS)

    Malley, C. S.; Braban, C. F.; Dumitrean, P.; Cape, J. N.; Heal, M. R.

    2015-07-01

    The impact of 27 volatile organic compounds (VOCs) on the regional O3 increment was investigated using measurements made at the UK EMEP supersites Harwell (1999-2001 and 2010-2012) and Auchencorth (2012). Ozone at these sites is representative of rural O3 in south-east England and northern UK, respectively. The monthly-diurnal regional O3 increment was defined as the difference between the regional and hemispheric background O3 concentrations, respectively, derived from oxidant vs. NOx correlation plots, and cluster analysis of back trajectories arriving at Mace Head, Ireland. At Harwell, which had substantially greater regional O3 increments than Auchencorth, variation in the regional O3 increment mirrored afternoon depletion of anthropogenic VOCs due to photochemistry (after accounting for diurnal changes in boundary layer mixing depth, and weighting VOC concentrations according to their photochemical ozone creation potential). A positive regional O3 increment occurred consistently during the summer, during which time afternoon photochemical depletion was calculated for the majority of measured VOCs, and to the greatest extent for ethene and m+p-xylene. This indicates that, of the measured VOCs, ethene and m+p-xylene emissions reduction would be most effective in reducing the regional O3 increment but that reductions in a larger number of VOCs would be required for further improvement. The VOC diurnal photochemical depletion was linked to anthropogenic sources of the VOC emissions through the integration of gridded anthropogenic VOC emission estimates over 96 h air-mass back trajectories. 
This demonstrated that one factor limiting the effectiveness of VOC gridded emissions for use in measurement and modelling studies is the highly aggregated nature of the 11 SNAP (Selected Nomenclature for Air Pollution) source sectors in which they are reported, as monthly variation in speciated VOC trajectory emissions did not reflect monthly changes in individual VOC diurnal photochemical depletion. Additionally, the major VOC emission source sectors during elevated regional O3 increment at Harwell were more narrowly defined through disaggregation of the SNAP emissions to 91 NFR (Nomenclature for Reporting) codes (i.e. sectors 3D2 (domestic solvent use), 3D3 (other product use) and 2D2 (food and drink)). However, spatial variation in the contribution of NFR sectors to parent SNAP emissions could only be accounted for at the country level. Hence, the future reporting of gridded VOC emissions in source sectors more highly disaggregated than currently (e.g. to NFR codes) would facilitate a more precise identification of those VOC sources most important for mitigation of the impact of VOCs on O3 formation. In summary, this work presents a clear methodology for achieving a coherent VOC, regional-O3-impact chemical climate using measurement data and explores the effect of limited emission and measurement species on the understanding of the regional VOC contribution to O3 concentrations.

  15. Prediction of practical performance in preclinical laboratory courses – the return of wire bending for admission of dental students in Hamburg

    PubMed Central

    Kothe, Christian; Hissbach, Johanna; Hampe, Wolfgang

    2014-01-01

    Although some recent studies concluded that dexterity is not a reliable predictor of performance in preclinical laboratory courses in dentistry, they could not disprove earlier findings which confirmed the worth of manual dexterity tests in dental admission. We developed a wire bending test (HAM-Man) which was administered during dental freshmen’s first week in 2008, 2009, and 2010. The purpose of our study was to evaluate if the HAM-Man is a useful selection criterion additional to the high school grade point average (GPA) in dental admission. Regression analysis revealed that GPA only accounted for a maximum of 9% of students’ performance in preclinical laboratory courses, in six out of eight models the explained variance was below 2%. The HAM-Man incrementally explained up to 20.5% of preclinical practical performance over GPA. In line with findings from earlier studies the HAM-Man test of manual dexterity showed satisfactory incremental validity. While GPA has a focus on cognitive abilities, the HAM-Man reflects learning of unfamiliar psychomotor skills, spatial relationships, and dental techniques needed in preclinical laboratory courses. The wire bending test HAM-Man is a valuable additional selection instrument for applicants of dental schools. PMID:24872857

  16. Vessel Classification in Cosmo-Skymed SAR Data Using Hierarchical Feature Selection

    NASA Astrophysics Data System (ADS)

    Makedonas, A.; Theoharatos, C.; Tsagaris, V.; Anastasopoulos, V.; Costicoglou, S.

    2015-04-01

    SAR based ship detection and classification are important elements of maritime monitoring applications. Recently, high-resolution SAR data have opened new possibilities to researchers for achieving improved classification results. In this work, a hierarchical vessel classification procedure is presented based on a robust feature extraction and selection scheme that utilizes scale, shape and texture features in a hierarchical way. Initially, different types of feature extraction algorithms are implemented in order to form the utilized feature pool, able to represent the structure, material, orientation and other vessel type characteristics. A two-stage hierarchical feature selection algorithm is utilized next in order to effectively discriminate civilian vessels into three distinct types in COSMO-SkyMed SAR images: cargos, small ships, and tankers. In our analysis, scale and shape features are utilized in order to discriminate smaller types of vessels present in the available SAR data, or shape specific vessels. Then, the most informative texture and intensity features are incorporated in order to better distinguish the civilian types with high accuracy. A feature selection procedure that utilizes heuristic measures based on the features' statistical characteristics, followed by an exhaustive search over feature sets formed by the most qualified features, is carried out in order to determine the most appropriate combination of features for the final classification. In our analysis, five COSMO-SkyMed SAR images with 2.2 m x 2.2 m resolution were used to analyse the detailed characteristics of these types of ships. A total of 111 ships with available AIS data were used in the classification process. The experimental results show that this method has good performance in ship classification, with an overall accuracy reaching 83%. Further investigation of additional features and proper feature selection is currently in progress.

  17. Feature selection and classification to estimate elbow movements

    NASA Astrophysics Data System (ADS)

    Rubiano, A.; Ramírez, J. L.; El Korso, M. N.; Jouandeau, N.; Gallimard, L.; Polit, O.

    2015-11-01

    In this paper, we propose a novel method to estimate elbow motion through features extracted from electromyography (EMG) signals. The feature values are normalized and then compared to identify potential relationships between the EMG signal and the kinematic information, such as angle and angular velocity. We propose and implement a method to select the best set of features, maximizing the distance between the features that correspond to flexion and extension movements. Finally, we test the selected features as inputs to a non-linear support vector machine under non-ideal conditions, obtaining an accuracy of 99.79% in the motion estimation results.

  18. Efficient feature subset selection with probabilistic distance criteria. [pattern recognition

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Recursive expressions are derived for efficiently computing the commonly used probabilistic distance measures as a change in the criteria both when a feature is added to and when a feature is deleted from the current feature subset. A combinatorial algorithm for generating all possible r-feature combinations from a given set of s features in C(s, r) steps, with a change of a single feature at each step, is presented. These expressions can also be used for both forward and backward sequential feature selection.
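Sequential selection with a probabilistic distance criterion can be illustrated directly: greedily add the feature whose inclusion most increases the Bhattacharyya distance between two Gaussian class models. This forward-selection sketch on synthetic data recomputes the criterion from scratch each step, without the efficient recursive add/delete updates the paper derives.

```python
import numpy as np

def bhattacharyya(X0, X1, feats):
    """Bhattacharyya distance between two Gaussian class models
    restricted to the given feature subset."""
    a, b = X0[:, feats], X1[:, feats]
    m = a.mean(0) - b.mean(0)
    S0 = np.atleast_2d(np.cov(a.T))
    S1 = np.atleast_2d(np.cov(b.T))
    S = (S0 + S1) / 2
    term1 = 0.125 * m @ np.linalg.solve(S, m)
    term2 = 0.5 * np.log(np.linalg.det(S) / np.sqrt(np.linalg.det(S0) * np.linalg.det(S1)))
    return term1 + term2

def forward_select(X0, X1, r):
    """Greedy forward selection: at each step add the feature whose
    inclusion most increases the distance criterion."""
    chosen, rest = [], list(range(X0.shape[1]))
    while len(chosen) < r:
        best = max(rest, key=lambda j: bhattacharyya(X0, X1, chosen + [j]))
        chosen.append(best)
        rest.remove(best)
    return chosen

rng = np.random.default_rng(4)
X0 = rng.normal(size=(150, 5))
X1 = rng.normal(size=(150, 5))
X1[:, 0] += 1.2   # only features 0 and 2 separate the classes
X1[:, 2] += 0.9
sel = forward_select(X0, X1, 2)
```

Backward elimination is the mirror image: start from the full set and repeatedly delete the feature whose removal decreases the criterion least.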

  19. Neural network models for spatial data mining, map production, and cortical direction selectivity

    NASA Astrophysics Data System (ADS)

    Parsons, Olga

    A family of ARTMAP neural networks for incremental supervised learning has been developed over the last decade. The Sensor Exploitation Group of MIT Lincoln Laboratory (LL) has incorporated an early version of this network as the recognition engine of a hierarchical system for fusion and data mining of multiple registered geospatial images. The LL system has been successfully fielded, but it is limited to target vs. non-target identifications and does not produce whole maps. This dissertation expands the capabilities of the LL system so that it learns to identify arbitrarily many target classes at once and can thus produce a whole map. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of a consistent procedure and a benchmark testbed has permitted the evaluation of candidate recognition networks as well as pre- and post-processing and feature extraction options. The resulting default ARTMAP network and mapping methodology set a standard for a variety of related mapping problems and application domains. The second part of the dissertation investigates the development of cortical direction selectivity. The possible role of visual experience and oculomotor behavior in the maturation of cells in the primary visual cortex is studied. The responses of neurons in the thalamus and cortex of the cat are modeled when natural scenes are scanned by several types of eye movements. Inspired by the Hebbian-like synaptic plasticity, which is based upon correlations between cell activations, the second-order statistical structure of thalamo-cortical activity is examined. In the simulations, patterns of neural activity that lead to a correct refinement of cell responses are observed during visual fixation, when small ocular movements occur, but are not observed in the presence of large saccades. 
Simulations also replicate experiments in which kittens are reared under stroboscopic illumination. The abnormal fixational eye movements of these cats may account for the puzzling finding of a specific loss of cortical direction selectivity but preservation of orientation selectivity. This work indicates that the oculomotor behavior of visual fixation may play an important role in the refinement of cell response selectivity.

  20. FSMRank: feature selection algorithm for learning to rank.

    PubMed

    Lai, Han-Jiang; Pan, Yan; Tang, Yong; Yu, Rong

    2013-06-01

    In recent years, there has been growing interest in learning to rank. The introduction of feature selection into different learning problems has been proven effective. These facts motivate us to investigate the problem of feature selection for learning to rank. We propose a joint convex optimization formulation which minimizes ranking errors while simultaneously conducting feature selection. This optimization formulation provides a flexible framework in which we can easily incorporate various importance measures and similarity measures of the features. To solve this optimization problem, we use Nesterov's approach to derive an accelerated gradient algorithm with a fast convergence rate O(1/T^2). We further develop a generalization bound for the proposed optimization problem using the Rademacher complexities. Extensive experimental evaluations are conducted on the public LETOR benchmark datasets. The results demonstrate that the proposed method shows: 1) significant ranking performance gain compared to several feature selection baselines for ranking, and 2) very competitive performance compared to several state-of-the-art learning-to-rank algorithms.
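The O(1/T^2) rate comes from Nesterov's momentum scheme: take a gradient step at an extrapolated point, then update the extrapolation weights. A minimal sketch on a plain least-squares objective, not FSMRank's ranking loss or its feature-selection regularizer:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
L = np.linalg.eigvalsh(A.T @ A).max()   # Lipschitz constant of the gradient

def grad(w):
    """Gradient of the smooth objective 0.5*||Aw - b||^2."""
    return A.T @ (A @ w - b)

w = np.zeros(10)          # current iterate x_k
v = np.zeros(10)          # extrapolated point y_k
t = 1.0
for _ in range(1000):
    w_next = v - grad(v) / L                          # gradient step at y_k
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2         # momentum schedule
    v = w_next + ((t - 1) / t_next) * (w_next - w)    # extrapolation
    w, t = w_next, t_next

w_star, *_ = np.linalg.lstsq(A, b, rcond=None)        # reference solution
```

Plain gradient descent on the same problem converges at O(1/T); the momentum term is what buys the accelerated rate.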

  1. Gigabit Ethernet Asynchronous Clock Compensation FIFO

    NASA Technical Reports Server (NTRS)

    Duhachek, Jeff

    2012-01-01

    Clock compensation for Gigabit Ethernet is necessary because the clock recovered from the 1.25 Gb/s serial data stream has the potential to be 200 ppm slower or faster than the system clock. The serial data is converted to 10-bit parallel data at a 125 MHz rate on a clock recovered from the serial data stream. This recovered data needs to be processed by a system clock that is also running at a nominal rate of 125 MHz, but not synchronous to the recovered clock. To cross clock domains, an asynchronous FIFO (first-in-first-out) is used, with the write pointer (wptr) in the recovered clock domain and the read pointer (rptr) in the system clock domain. Because the clocks are generated from separate sources, there is potential for FIFO overflow or underflow. Clock compensation in Gigabit Ethernet is possible by taking advantage of the protocol data stream features. There are two distinct data streams that occur in Gigabit Ethernet where identical data is transmitted for a period of time. The first is configuration, which happens during auto-negotiation. The second is idle, which occurs at the end of auto-negotiation and between every packet. The identical data in the FIFO can be repeated by decrementing the read pointer, thus compensating for a FIFO that is draining too fast. The identical data in the FIFO can also be skipped by incrementing the read pointer, which compensates for a FIFO draining too slowly. The unique and novel features of this FIFO are that it works in both the idle stream and the configuration streams. The increment or decrement of the read pointer is different in the idle and compensation streams to preserve disparity. Another unique feature is that the read pointer to write pointer difference range changes between compensation and idle to minimize FIFO latency during packet transmission.
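The compensation mechanism can be modeled with a toy occupancy simulation: a 200 ppm-fast write clock makes FIFO occupancy creep upward, and during inter-packet idle gaps the reader skips (or, were the drift reversed, would repeat) one identical word to re-center the FIFO. The packet/idle timing, depth, and watermarks below are hypothetical, chosen only to show that occupancy stays bounded.

```python
# Toy model: the recovered (write) clock runs 200 ppm fast, so occupancy
# creeps up by 200e-6 words per cycle. Corrections happen only on idle
# words; packet data must pass through untouched.
PPM = 200e-6          # worst-case clock offset
occupancy = 8.0       # words in a notional 16-deep FIFO (start at midpoint)
trace_max = trace_min = occupancy

for cycle in range(500_000):
    occupancy += PPM                      # writer slightly outpaces reader
    in_idle = (cycle % 400) >= 372        # ~28-cycle idle gap per "packet"
    if in_idle:
        if occupancy > 9.0:
            occupancy -= 1.0              # skip one idle word (rptr jumps ahead)
        elif occupancy < 7.0:
            occupancy += 1.0              # repeat one idle word (rptr held back)
    trace_max = max(trace_max, occupancy)
    trace_min = min(trace_min, occupancy)
```

Between idle gaps the drift is tiny (400 cycles x 200 ppm = 0.08 words), so a single skip or repeat per gap is ample margin; the real design additionally adjusts in pairs during idle to preserve 8b/10b running disparity.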

  2. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    PubMed

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
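Input-feature-manipulation ensembles train one classifier per feature subset and combine their outputs by majority vote. The sketch below uses three hypothetical feature groups (standing in for the shape, colour, and texture subsets) and nearest-centroid members in place of the paper's optimum-path forest classifier.

```python
import numpy as np

def centroid_fit(X, y):
    """Class centroids for a two-class nearest-centroid member."""
    return X[y == 0].mean(0), X[y == 1].mean(0)

def centroid_predict(X, c0, c1):
    return (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)

# three feature groups standing in for shape / colour / texture subsets
groups = [[0, 1], [2, 3], [4, 5]]
rng = np.random.default_rng(6)
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[y == 1, 0] += 1.2   # each group carries some class signal
X[y == 1, 2] += 1.2
X[y == 1, 4] += 1.2

half = n // 2
votes = []
for g in groups:
    c0, c1 = centroid_fit(X[:half][:, g], y[:half])
    votes.append(centroid_predict(X[half:][:, g], c0, c1))
majority = (np.sum(votes, axis=0) >= 2).astype(int)   # majority voting
acc = (majority == y[half:]).mean()
```

Because each member sees a different feature subset, their errors are partly decorrelated, and the majority vote typically beats any single member, which is the diversity argument the paper makes.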

  3. Toward high-throughput phenotyping: unbiased automated feature extraction and selection from knowledge sources.

    PubMed

    Yu, Sheng; Liao, Katherine P; Shaw, Stanley Y; Gainer, Vivian S; Churchill, Susanne E; Szolovits, Peter; Murphy, Shawn N; Kohane, Isaac S; Cai, Tianxi

    2015-09-01

    Analysis of narrative (text) data from electronic health records (EHRs) can improve population-scale phenotyping for clinical and genetic research. Currently, selection of text features for phenotyping algorithms is slow and laborious, requiring extensive and iterative involvement by domain experts. This paper introduces a method to develop phenotyping algorithms in an unbiased manner by automatically extracting and selecting informative features, which can be comparable to expert-curated ones in classification accuracy. Comprehensive medical concepts were collected from publicly available knowledge sources in an automated, unbiased fashion. Natural language processing (NLP) revealed the occurrence patterns of these concepts in EHR narrative notes, which enabled selection of informative features for phenotype classification. When combined with additional codified features, a penalized logistic regression model was trained to classify the target phenotype. We applied this method to develop algorithms that identify rheumatoid arthritis (RA) patients, and coronary artery disease (CAD) cases among RA patients, from a large multi-institutional EHR. The area under the receiver operating characteristic curve (AUC) for classifying RA and CAD using models trained with automated features was 0.951 and 0.929, respectively, compared with AUCs of 0.938 and 0.929 for models trained with expert-curated features. Models trained with NLP text features selected through an unbiased, automated procedure achieved comparable or slightly higher accuracy than those trained with expert-curated features. The majority of the selected model features were interpretable. The proposed automated feature extraction method, generating highly accurate phenotyping algorithms with improved efficiency, is a significant step toward high-throughput phenotyping. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. 
    All rights reserved.

  4. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs down to a reasonable range. Despite the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; discarding them through feature selection is better than compressing them together with the useful dimensions. To choose features, we propose an efficient importance sorting algorithm covering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
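The recipe of dimension selection by supervised importance followed by 1-bit quantization reduces to a few lines. Here the importance measure is the absolute per-class mean difference, an assumed stand-in for the paper's importance-sorting algorithm, applied to hypothetical data rather than real FV/VLAD vectors:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, k = 500, 64, 8
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[y == 1, :8] += 1.0          # only the first 8 dimensions are informative

# supervised importance: absolute difference of the per-class means
importance = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0))
keep = np.argsort(importance)[::-1][:k]        # dimension selection

# 1-bit quantization of the surviving dimensions: keep only the sign
# relative to the per-dimension mean
codes = (X[:, keep] > X[:, keep].mean(0)).astype(np.uint8)
```

Each vector shrinks from 64 floats to 8 bits here; the noisy dimensions are dropped outright instead of being compressed along with the useful ones.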

  5. Automatic MeSH term assignment and quality assessment.

    PubMed Central

    Kim, W.; Aronson, A. R.; Wilbur, W. J.

    2001-01-01

    For computational purposes documents or other objects are most often represented by a collection of individual attributes that may be strings or numbers. Such attributes are often called features and success in solving a given problem can depend critically on the nature of the features selected to represent documents. Feature selection has received considerable attention in the machine learning literature. In the area of document retrieval we refer to feature selection as indexing. Indexing has not traditionally been evaluated by the same methods used in machine learning feature selection. Here we show how indexing quality may be evaluated in a machine learning setting and apply this methodology to results of the Indexing Initiative at the National Library of Medicine. PMID:11825203

  6. Frontiers of Two-Dimensional Correlation Spectroscopy. Part 1. New concepts and noteworthy developments

    NASA Astrophysics Data System (ADS)

    Noda, Isao

    2014-07-01

    A comprehensive survey review of new and noteworthy developments, which are advancing forward the frontiers in the field of 2D correlation spectroscopy during the last four years, is compiled. This review covers books, proceedings, and review articles published on 2D correlation spectroscopy, a number of significant conceptual developments in the field, data pretreatment methods and other pertinent topics, as well as patent and publication trends and citation activities. Developments discussed include projection 2D correlation analysis, concatenated 2D correlation, and correlation under multiple perturbation effects, as well as orthogonal sample design, predicting 2D correlation spectra, manipulating and comparing 2D spectra, correlation strategy based on segmented data blocks, such as moving-window analysis, features like determination of sequential order and enhanced spectral resolution, statistical 2D spectroscopy using covariance and other statistical metrics, hetero-correlation analysis, and sample-sample correlation technique. Data pretreatment operations prior to 2D correlation analysis are discussed, including the correction for physical effects, background and baseline subtraction, selection of reference spectrum, normalization and scaling of data, derivatives spectra and deconvolution technique, and smoothing and noise reduction. Other pertinent topics include chemometrics and statistical considerations, peak position shift phenomena, variable sampling increments, computation and software, display schemes, such as color coded format, slice and power spectra, tabulation, and other schemes.

  7. Optimization of breast mass classification using sequential forward floating selection (SFFS) and a support vector machine (SVM) model

    PubMed Central

    Tan, Maxine; Pu, Jiantao; Zheng, Bin

    2014-01-01

    Purpose: Improving radiologists’ performance in classification between malignant and benign breast lesions is important to increase cancer detection sensitivity and reduce false-positive recalls. For this purpose, developing computer-aided diagnosis (CAD) schemes has been attracting research interest in recent years. In this study, we investigated a new feature selection method for the task of breast mass classification. Methods: We initially computed 181 image features based on mass shape, spiculation, contrast, presence of fat or calcifications, texture, isodensity, and other morphological features. From this large image feature pool, we used a sequential forward floating selection (SFFS)-based feature selection method to select relevant features, and analyzed their performance using a support vector machine (SVM) model trained for the classification task. On a database of 600 benign and 600 malignant mass regions of interest (ROIs), we performed the study using a ten-fold cross-validation method. Feature selection and optimization of the SVM parameters were conducted on the training subsets only. Results: The area under the receiver operating characteristic curve (AUC) = 0.805±0.012 was obtained for the classification task. The results also showed that the most frequently-selected features by the SFFS-based algorithm in 10-fold iterations were those related to mass shape, isodensity and presence of fat, which are consistent with the image features frequently used by radiologists in the clinical environment for mass classification. The study also indicated that accurately computing mass spiculation features from the projection mammograms was difficult, and failed to perform well for the mass classification task due to tissue overlap within the benign mass regions. 
Conclusions: In conclusion, this comprehensive feature analysis study provided new and valuable information for optimizing computerized mass classification schemes that may have potential to be useful as a “second reader” in future clinical practice. PMID:24664267
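SFFS alternates a forward step (add the single best feature) with a conditional "floating" step (drop a previously chosen feature whenever doing so improves the criterion). The skeleton below uses a deterministic Mahalanobis class-separability criterion in place of the paper's SVM-with-cross-validation scoring, on synthetic data with hypothetical dimensions.

```python
import numpy as np

def crit(X, y, feats):
    """Separability criterion to maximize: Mahalanobis distance between
    the class means (stand-in for the paper's SVM-based scoring)."""
    a, b = X[y == 0][:, feats], X[y == 1][:, feats]
    m = a.mean(0) - b.mean(0)
    S = np.atleast_2d((np.cov(a.T) + np.cov(b.T)) / 2)
    return float(m @ np.linalg.solve(S + 1e-9 * np.eye(len(feats)), m))

def sffs(X, y, k):
    chosen, rest = [], list(range(X.shape[1]))
    while len(chosen) < k:
        j = max(rest, key=lambda f: crit(X, y, chosen + [f]))   # forward step
        chosen.append(j)
        rest.remove(j)
        while len(chosen) > 2:        # floating step: conditional exclusion
            drop = max(chosen[:-1],
                       key=lambda f: crit(X, y, [c for c in chosen if c != f]))
            if crit(X, y, [c for c in chosen if c != drop]) <= crit(X, y, chosen):
                break
            chosen.remove(drop)
            rest.append(drop)
    return chosen

rng = np.random.default_rng(8)
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[y == 1, 0] += 1.5   # features 0 and 2 carry the class signal
X[y == 1, 2] += 1.0
sel = sffs(X, y, 2)
```

The floating step is what distinguishes SFFS from plain forward selection: a feature that looked good early can be evicted later once better companions are in the set.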

  8. Checklist/Guide to Selecting a Small Computer.

    ERIC Educational Resources Information Center

    Bennett, Wilma E.

    This 322-point checklist was designed to help executives make an intelligent choice when selecting a small computer for a business. For ease of use the questions have been divided into ten categories: Display Features, Keyboard Features, Printer Features, Controller Features, Software, Word Processing, Service, Training, Miscellaneous, and Costs.…

  9. Feature selection methods for object-based classification of sub-decimeter resolution digital aerial imagery

    USDA-ARS?s Scientific Manuscript database

    Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...

  10. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. Then it uses the audio feature candidates as cues and develops a fusion method that effectively exploits the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experiment results show that this method has high efficiency and adaptability to different kinds of news video.

  11. A comparative analysis of swarm intelligence techniques for feature selection in cancer classification.

    PubMed

    Gunavathi, Chellamuthu; Premalatha, Kandasamy

    2014-01-01

    Feature selection in cancer classification is a central area of research in the field of bioinformatics and is used to select the informative genes from thousands of genes of the microarray. The genes are ranked based on T-statistics, signal-to-noise ratio (SNR), and F-test values. The swarm intelligence (SI) technique finds the informative genes from the top-m ranked genes. These selected genes are used for classification. In this paper, the shuffled frog leaping with Lévy flight (SFLLF) is proposed for feature selection. In SFLLF, the Lévy flight is included to avoid premature convergence of the shuffled frog leaping (SFL) algorithm. The SI techniques such as particle swarm optimization (PSO), cuckoo search (CS), SFL, and SFLLF are used for feature selection, which identifies informative genes for classification. The k-nearest neighbour (k-NN) technique is used to classify the samples. The proposed work is applied to 10 different benchmark datasets and examined with SI techniques. The experimental results show that the results obtained from the k-NN classifier through the SFLLF feature selection method outperform PSO, CS, and SFL.
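    The rank-then-classify pipeline described here (SNR-based gene ranking feeding a k-NN classifier) can be sketched in a few lines. This is an illustrative sketch only: the SI search step is omitted, and the function names are not the authors' code.

```python
import numpy as np

def snr_rank(X, y, top_m=50):
    """Rank genes by signal-to-noise ratio, SNR = (mu0 - mu1) / (s0 + s1),
    computed per gene between two classes; return indices of the top_m
    genes by |SNR| (the ranking step applied before the SI search)."""
    X0, X1 = X[y == 0], X[y == 1]
    snr = (X0.mean(axis=0) - X1.mean(axis=0)) / (X0.std(axis=0) + X1.std(axis=0) + 1e-12)
    return np.argsort(-np.abs(snr))[:top_m]

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbour majority vote with Euclidean distance."""
    preds = []
    for x in X_test:
        nearest = y_train[np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)
```

    In the paper's setting, an SI algorithm (PSO, CS, SFL, or SFLLF) would then search subsets of the top-m ranked genes, scoring each subset by k-NN accuracy.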

  12. Feature selection for elderly faller classification based on wearable sensors.

    PubMed

    Howcroft, Jennifer; Kofman, Jonathan; Lemaire, Edward D

    2017-05-30

    Wearable sensors can be used to derive numerous gait pattern features for elderly fall risk and faller classification; however, an appropriate feature set is required to avoid high computational costs and the inclusion of irrelevant features. The objectives of this study were to identify and evaluate smaller feature sets for faller classification from large feature sets derived from wearable accelerometer and pressure-sensing insole gait data. A convenience sample of 100 older adults (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, left and right shanks. Feature selection was performed using correlation-based feature selection (CFS), fast correlation based filter (FCBF), and Relief-F algorithms. Faller classification was performed using multi-layer perceptron neural network, naïve Bayesian, and support vector machine classifiers, with 75:25 single stratified holdout and repeated random sampling. The best performing model was a support vector machine with 78% accuracy, 26% sensitivity, 95% specificity, 0.36 F1 score, and 0.31 MCC and one posterior pelvis accelerometer input feature (left acceleration standard deviation). The second best model achieved better sensitivity (44%) and used a support vector machine with 74% accuracy, 83% specificity, 0.44 F1 score, and 0.29 MCC. This model had ten input features: maximum, mean and standard deviation posterior acceleration; maximum, mean and standard deviation anterior acceleration; mean superior acceleration; and three impulse features. The best multi-sensor model sensitivity (56%) was achieved using posterior pelvis and both shank accelerometers and a naïve Bayesian classifier. The best single-sensor model sensitivity (41%) was achieved using the posterior pelvis accelerometer and a naïve Bayesian classifier. 
Feature selection provided models with smaller feature sets and improved faller classification compared to faller classification without feature selection. CFS and FCBF provided the best feature subset (one posterior pelvis accelerometer feature) for faller classification. However, better sensitivity was achieved by the second best model based on a Relief-F feature subset with three pressure-sensing insole features and seven head accelerometer features. Feature selection should be considered as an important step in faller classification using wearable sensors.
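    The CFS criterion used above scores whole subsets rather than individual features; in Hall's standard formulation, the merit of a subset S of k features is

```latex
\mathrm{Merit}_S \;=\; \frac{k\,\overline{r_{cf}}}{\sqrt{\,k + k(k-1)\,\overline{r_{ff}}\,}}
```

    where \(\overline{r_{cf}}\) is the mean feature-class correlation and \(\overline{r_{ff}}\) the mean feature-feature correlation: good subsets contain features highly correlated with the class but uncorrelated with each other.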

  13. Select Features in "Finale 2011" for Music Educators

    ERIC Educational Resources Information Center

    Thompson, Douglas Earl

    2011-01-01

    A feature-laden software program such as "Finale" is an overwhelming tool to master--if one hopes to master many features in a short amount of time. Believing that working with a fewer number of features can be a helpful approach, this article looks at a select number of features in "Finale 2011" of obvious use to music educators. These features…

  14. Effects of Spatial and Feature Attention on Disparity-Rendered Structure-From-Motion Stimuli in the Human Visual Cortex

    PubMed Central

    Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.

    2014-01-01

    An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974

  15. IMMAN: free software for information theory-based chemometric analysis.

    PubMed

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. In addition, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of the Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty, are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing-value processing, dataset partitioning, and browsing. Moreover, single-parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, and comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. 
In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms.
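    Symmetrical uncertainty with equal-interval discretization, as mentioned above, is straightforward to compute. A minimal sketch (illustrative names, not IMMAN's implementation):

```python
import numpy as np

def entropy_bits(labels):
    """Shannon entropy (bits) of a discrete array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetrical_uncertainty(x, y, bins=10):
    """SU(X;Y) = 2 * I(X;Y) / (H(X) + H(Y)), in [0, 1].

    The continuous feature x is discretized into equal-interval bins;
    y is assumed to hold non-negative integer class labels."""
    edges = np.linspace(x.min(), x.max(), bins + 1)[1:-1]  # interior bin edges
    xd = np.digitize(x, edges)
    hx, hy = entropy_bits(xd), entropy_bits(y)
    hxy = entropy_bits(xd * (y.max() + 1) + y)  # joint entropy via a pairing code
    mi = hx + hy - hxy
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0
```

    Features can then be ranked by their SU score against the class, which is the rank-based use IMMAN makes of such parameters.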

  16. Role of opioid tone in the pathophysiology of hyperinsulinemia and insulin resistance in polycystic ovarian disease.

    PubMed

    Fulghesu, A M; Ciampelli, M; Guido, M; Murgia, F; Caruso, A; Mancuso, S; Lanzone, A

    1998-02-01

    Hyperinsulinemia secondary to a poorly characterized disorder of insulin action is a feature of polycystic ovarian disease (PCOD). On the other hand, since it is generally accepted that opioids may play a role in glycoregulation and that opioid tone is altered in PCOD, an involvement of the opioids in determining the hyperinsulinemia of PCOD patients can be suggested. The aim of this study was to evaluate the effect of a chronic opioid blockade on insulin metabolism and peripheral insulin sensitivity in PCOD hyperinsulinemic patients. Twenty-three women with PCOD were studied. An oral glucose tolerance test (OGTT) and a clamp study were performed at baseline (during the follicular phase) and after 6 weeks of naltrexone administration (50 mg/d orally). Based on the insulinemic response to the OGTT, 16 women were classified as hyperinsulinemic and seven as normoinsulinemic. Naltrexone treatment significantly reduced fasting (P < .05) and area under the curve (AUC) (P < .02) plasma insulin levels only in the hyperinsulinemic group. Moreover, hyperinsulinemic patients showed similar C-peptide incremental areas after naltrexone treatment, whereas in the same patients the fractional hepatic insulin extraction, calculated from the incremental areas of insulin and C-peptide, was found to be increased after chronic opioid blockade by naltrexone. Regarding peripheral insulin sensitivity, the hyperinsulinemic group showed significantly lower (P < .01) total-body glucose utilization (M) than the normoinsulinemic group. No change in the M value was found after treatment in either group. These data suggest that insulin sensitivity and hyperinsulinemia after an OGTT are two distinct deranged features of the insulin disorder of PCOD patients.

  17. Master equation for She-Leveque scaling and its classification in terms of other Markov models of developed turbulence

    NASA Astrophysics Data System (ADS)

    Nickelsen, Daniel

    2017-07-01

    The statistics of velocity increments in homogeneous and isotropic turbulence exhibit universal features in the limit of infinite Reynolds numbers. Since Kolmogorov’s scaling law of 1941, many turbulence models have aimed to capture these universal features; some are known to have an equivalent formulation in terms of Markov processes. We derive the Markov process equivalent to the particularly successful scaling law postulated by She and Leveque. The Markov process is a jump process for velocity increments u(r) in scale r in which the jumps occur randomly but with deterministic width in u. From its master equation we establish a prescription to simulate the She-Leveque process and compare it with Kolmogorov scaling. To put the She-Leveque process into the context of other established turbulence models on the Markov level, we derive a diffusion process for u(r) using two properties of the Navier-Stokes equation. This diffusion process already includes Kolmogorov scaling, extended self-similarity and a class of random cascade models. The fluctuation theorem of this Markov process implies a ‘second law’ that puts a loose bound on the multipliers of the random cascade models. This bound explicitly allows for instances of inverse cascades, which are necessary to satisfy the fluctuation theorem. By adding a jump process to the diffusion process, we go beyond Kolmogorov scaling and formulate the most general scaling law for the class of Markov processes having both diffusion and jump parts. This Markov scaling law includes She-Leveque scaling and a scaling law derived by Yakhot.
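    For reference, the She-Leveque law referred to above predicts the structure-function scaling exponents (a standard result quoted here, not derived in this abstract), compared with the Kolmogorov 1941 prediction:

```latex
\zeta_p \;=\; \frac{p}{9} \,+\, 2\left[\,1 - \left(\tfrac{2}{3}\right)^{p/3}\right],
\qquad
\zeta_p^{\mathrm{K41}} \;=\; \frac{p}{3}
```

    Both give \(\zeta_3 = 1\), consistent with the exact four-fifths law, but the She-Leveque exponents bend below \(p/3\) for large \(p\), capturing intermittency.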

  18. Latent fingerprint matching.

    PubMed

    Jain, Anil K; Feng, Jianjiang

    2011-01-01

    Latent fingerprint identification is of critical importance to law enforcement agencies in identifying suspects: Latent fingerprints are inadvertent impressions left by fingers on surfaces of objects. While tremendous progress has been made in plain and rolled fingerprint matching, latent fingerprint matching continues to be a difficult problem. Poor quality of ridge impressions, small finger area, and large nonlinear distortion are the main difficulties in latent fingerprint matching compared to plain or rolled fingerprint matching. We propose a system for matching latent fingerprints found at crime scenes to rolled fingerprints enrolled in law enforcement databases. In addition to minutiae, we also use extended features, including singularity, ridge quality map, ridge flow map, ridge wavelength map, and skeleton. We tested our system by matching 258 latents in the NIST SD27 database against a background database of 29,257 rolled fingerprints obtained by combining the NIST SD4, SD14, and SD27 databases. The minutiae-based baseline rank-1 identification rate of 34.9 percent was improved to 74 percent when extended features were used. In order to evaluate the relative importance of each extended feature, these features were incrementally used in the order of their cost in marking by latent experts. The experimental results indicate that singularity, ridge quality map, and ridge flow map are the most effective features in improving the matching accuracy.

  19. Resuscitation of neonates at 23 weeks' gestational age: a cost-effectiveness analysis.

    PubMed

    Partridge, J Colin; Robertson, Kathryn R; Rogers, Elizabeth E; Landman, Geri Ottaviano; Allen, Allison J; Caughey, Aaron B

    2015-01-01

    Resuscitation of infants at 23 weeks' gestation remains controversial; clinical practices vary. We sought to investigate the cost effectiveness of resuscitation of infants born 23 0/7-23 6/7 weeks' gestation. Decision-analytic modeling comparing universal and selective resuscitation to non-resuscitation for 5176 live births at 23 weeks in a theoretic U.S. cohort. Estimates of death (77%) and disability (64-86%) were taken from the literature. Maternal and combined maternal-neonatal utilities were applied to discounted life expectancy to generate QALYs. Incremental cost-effectiveness ratios were calculated, discounting costs and QALYs. Main outcomes included number of survivors, their outcome status and incremental cost-effectiveness ratios for the three strategies. A cost-effectiveness threshold of $100 000/QALY was utilized. Universal resuscitation would save 1059 infants: 138 severely disabled, 413 moderately impaired and 508 without significant sequelae. Selective resuscitation would save 717 infants: 93 severely disabled, 279 moderately impaired and 343 without significant sequelae. For mothers, non-resuscitation is less expensive ($19.9 million) and more effective (127 844 mQALYs) than universal resuscitation ($1.2 billion; 126 574 mQALYs) or selective resuscitation ($845 million; 125 966 mQALYs). For neonates, both universal and selective resuscitation were cost-effective, resulting in 22 256 and 15 134 nQALYS, respectively, versus 247 nQALYs for non-resuscitation. In sensitivity analyses, universal resuscitation was cost-effective from a maternal perspective only at utilities for neonatal death <0.42. When analyzed from a maternal-neonatal perspective, universal resuscitation was cost-effective when the probability of neonatal death was <0.95. Over wide ranges of probabilities for survival and disability, universal and selective resuscitation strategies were not cost-effective from a maternal perspective. 
Both strategies were cost-effective from a maternal-neonatal perspective. This study offers a metric for counseling and decision-making for extreme prematurity. Our results could support a more permissive response to parental requests for aggressive intervention at 23 weeks' gestation.
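    The incremental cost-effectiveness ratios reported above follow the standard definition ICER = ΔCost/ΔQALY, compared against the $100 000/QALY threshold. A minimal sketch (function names are illustrative; the study's full decision-analytic model is far richer):

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A versus B,
    in $ per QALY. Returns None when A is dominated (more costly and
    no more effective), in which case no finite ICER is meaningful."""
    d_cost, d_effect = cost_a - cost_b, effect_a - effect_b
    if d_cost > 0 and d_effect <= 0:
        return None  # A is dominated by B
    return d_cost / d_effect

# Maternal-perspective figures from the abstract:
# universal resuscitation ($1.2 billion, 126,574 mQALYs) vs.
# non-resuscitation ($19.9 million, 127,844 mQALYs).
universal_vs_none = icer(1.2e9, 126_574, 19.9e6, 127_844)
```

    With these inputs universal resuscitation is dominated from the maternal perspective (costlier and fewer mQALYs), which is why the abstract reports it as not cost-effective there without quoting a ratio.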

  20. Oculomotor selection underlies feature retention in visual working memory.

    PubMed

    Hanning, Nina M; Jonikaitis, Donatas; Deubel, Heiner; Szinte, Martin

    2016-02-01

    Oculomotor selection, spatial task relevance, and visual working memory (WM) are described as three processes highly intertwined and sustained by similar cortical structures. However, because task-relevant locations always constitute potential saccade targets, no study so far has been able to distinguish between oculomotor selection and spatial task relevance. We designed an experiment that allowed us to dissociate in humans the contribution of task relevance, oculomotor selection, and oculomotor execution to the retention of feature representations in WM. We report that task relevance and oculomotor selection lead to dissociable effects on feature WM maintenance. In a first task, in which an object's location was encoded as a saccade target, its feature representations were successfully maintained in WM, whereas they declined at nonsaccade target locations. Likewise, we observed a similar WM benefit at the target of saccades that were prepared but never executed. In a second task, when an object's location was marked as task relevant but constituted a nonsaccade target (a location to avoid), feature representations maintained at that location did not benefit. Combined, our results demonstrate that oculomotor selection is consistently associated with WM, whereas task relevance is not. This provides evidence for an overlapping circuitry serving saccade target selection and feature-based WM that can be dissociated from processes encoding task-relevant locations. Copyright © 2016 the American Physiological Society.

  1. JCDSA: a joint covariate detection tool for survival analysis on tumor expression profiles.

    PubMed

    Wu, Yiming; Liu, Yanan; Wang, Yueming; Shi, Yan; Zhao, Xudong

    2018-05-29

    Survival analysis on tumor expression profiles has always been a key issue for subsequent biological experimental validation. It is crucial how to select features which closely correspond to survival time, and it is important how to select features which best discriminate between low-risk and high-risk groups of patients. Common features derived from the two aspects may provide variable candidates for prognosis of cancer. Based on the provided two-step feature selection strategy, we develop a joint covariate detection tool for survival analysis on tumor expression profiles. Significant features, which are not only consistent with survival time but also associated with the categories of patients with different survival risks, are chosen. Using the miRNA expression data (Level 3) of 548 patients with glioblastoma multiforme (GBM) as an example, miRNA candidates for prognosis of cancer are selected. The reliability of the selected miRNAs using this tool is demonstrated by 100 simulations. Furthermore, it is discovered that significant covariates are not directly composed of individually significant variables. Joint covariate detection provides a viewpoint for selecting variables which are not individually but jointly significant. In addition, it helps to select features which are not only consistent with survival time but also associated with prognosis risk. The software is available at http://bio-nefu.com/resource/jcdsa .

  2. Design features that affect the maneuverability of wheelchairs and scooters.

    PubMed

    Koontz, Alicia M; Brindle, Eric D; Kankipati, Padmaja; Feathers, David; Cooper, Rory A

    2010-05-01

    To determine the minimum space required for wheeled mobility device users to perform 4 maneuverability tasks and to investigate the impact of selected design attributes on space. Case series. University laboratory, Veterans Affairs research facility, vocational training center, and a national wheelchair sport event. The sample of convenience included manual wheelchair (MWC; n=109), power wheelchair (PWC; n=100), and scooter users (n=14). A mock environment was constructed to create passageways to form an L-turn, a 360° turn in place, and a U-turn with and without a barrier. Passageway openings were increased in 5-cm increments until the user could successfully perform each task without hitting the walls. Structural dimensions of the device and user were collected using an electromechanical probe. Mobility devices were grouped into categories based on design features and compared using 1-way analysis of variance and post hoc pairwise Bonferroni-corrected tests. Minimum passageway widths for the 4 maneuverability tasks. Ultralight MWCs with rear axles posterior to the shoulder had the shortest lengths and required the least amount of space compared with all other types of MWCs (P<.05). Mid-wheel-drive PWCs required the least space for the 360° turn in place compared with front-wheel-drive and rear-wheel-drive PWCs (P<.01) but performed equally as well as front-wheel-drive models on all other turning tasks. PWCs with seat functions required more space to perform the tasks. Between 10% and 100% of users would not be able to maneuver in spaces that meet current Accessibility Guidelines for Buildings and Facilities specifications. This study provides data that can be used to support wheelchair prescription and home modifications and to update standards to improve the accessibility of public areas.

  3. Improved Impact of Atmospheric Infrared Sounder (AIRS) Radiance Assimilation in Numerical Weather Prediction

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Chou, Shih-Hung; Jedlovec, Gary

    2012-01-01

    Improvements to global and regional numerical weather prediction (NWP) have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but impact on regional forecasts has been much smaller than for global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an identical analysis configuration to the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) that mimics the analysis methodology, domain, and observational datasets for the regional North American Mesoscale (NAM) model run at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC) are run to examine the impact of each type of AIRS data set. The first configuration will assimilate the AIRS radiance data along with other conventional and satellite data using techniques implemented within the operational system; the second configuration will assimilate AIRS retrieved profiles instead of AIRS radiances in the same manner. Preliminary results of this study will be presented and will focus on the analysis impact of the radiances and profiles for selected cases.

  4. Adaptive feature selection using v-shaped binary particle swarm optimization.

    PubMed

    Teng, Xuyang; Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers.
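    The "V-shaped" rule at the heart of this method maps a particle's velocity to a probability of flipping the corresponding feature bit, rather than setting the bit outright as the classical S-shaped (sigmoid) rule does, so near-zero velocities keep a subset stable. A minimal sketch of one common V-shaped variant (the exact transfer function the authors use may differ):

```python
import numpy as np

def v_transfer(v):
    """A common V-shaped transfer function: |v / sqrt(1 + v^2)|,
    mapping any velocity to a flip probability in [0, 1)."""
    return np.abs(v / np.sqrt(1.0 + v * v))

def update_bits(x, v, rng):
    """Flip each bit of a binary feature-subset vector x with
    probability V(v_i); unflipped bits keep their current value."""
    flip = rng.random(x.shape) < v_transfer(v)
    return np.where(flip, 1 - x, x)
```

    In the full algorithm, velocities are updated from personal-best and global-best subsets as in standard PSO, and each candidate subset is scored by the correlation-information-entropy fitness described above.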

  5. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850

  6. Multiclass feature selection for improved pediatric brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Ahmed, Shaheen; Iftekharuddin, Khan M.

    2012-03-01

    In our previous work, we showed that fractal-based texture features are effective in detection, segmentation, and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. We exploited an information-theoretic approach such as Kullback-Leibler Divergence (KLD) for feature selection and for ranking different texture features. We further incorporated the feature selection technique with a segmentation method such as Expectation Maximization (EM) for segmentation of tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to multiclass for effectively selecting the best features for brain tumor (T), cyst (C), and non-tumor (NT). We further obtain segmentation robustness for each tissue type by computing Bayes' posterior probabilities and corresponding numbers of pixels for each tissue segment in MRI patient images. We evaluate improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2, and FLAIR modalities.

  7. Comparison of Different EHG Feature Selection Methods for the Detection of Preterm Labor

    PubMed Central

    Alamedine, D.; Khalil, M.; Marque, C.

    2013-01-01

    Numerous types of linear and nonlinear features have been extracted from the electrohysterogram (EHG) in order to classify labor and pregnancy contractions. As a result, the number of available features is now very large. The goal of this study is to reduce the number of features by selecting only the relevant ones which are useful for solving the classification problem. This paper presents three methods for feature subset selection that can be applied to choose the best subsets for classifying labor and pregnancy contractions: an algorithm using the Jeffrey divergence (JD) distance, a sequential forward selection (SFS) algorithm, and a binary particle swarm optimization (BPSO) algorithm. The last two methods are classifier-based and were tested with three types of classifiers. These methods have allowed us to identify common features which are relevant for contraction classification. PMID:24454536

  8. HIV-1 protease cleavage site prediction based on two-stage feature selection method.

    PubMed

    Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong

    2013-03-01

    Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with Genetic Algorithms method. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. By using the AdaBoost method with the thirty selected features the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, with increased accuracy over the original dataset by 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.

  9. Long-term information and distributed neural activation are relevant for the "internal features advantage" in face processing: electrophysiological and source reconstruction evidence.

    PubMed

    Olivares, Ela I; Saavedra, Cristina; Trujillo-Barreto, Nelson J; Iglesias, Jaime

    2013-01-01

    In face processing tasks, prior presentation of internal facial features, when compared with external ones, facilitates the recognition of subsequently displayed familiar faces. In a previous ERP study (Olivares & Iglesias, 2010) we found a visibly larger N400-like effect when identity-mismatch familiar faces were preceded by internal features, as compared to prior presentation of external ones. In the present study we contrasted the processing of familiar and unfamiliar faces in the face-feature matching task to assess whether the so-called "internal features advantage" relies mainly on the use of stored face-identity-related information or if it might operate independently of stimulus familiarity. Our participants (N = 24) achieved better performance with internal features as primes and, significantly, with familiar faces. Importantly, ERPs elicited by identity-mismatch complete faces displayed a negativity around 300-600 msec which was clearly enhanced for familiar faces primed by internal features when compared with the other experimental conditions. Source reconstruction showed increased activity elicited by familiar stimuli in both posterior (ventral occipitotemporal) and more anterior (parahippocampal (ParaHIP) and orbitofrontal) brain regions. The activity elicited by unfamiliar stimuli was, in general, located in more posterior regions. Our findings suggest that the activation of multiple neural codes is required for optimal individuation in face-feature matching and that a cortical network related to long-term information for face-identity processing seems to support the internal features effect. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients.

    PubMed

    Capela, Nicole A; Lemaire, Edward D; Baddour, Natalie

    2015-01-01

    Human activity recognition (HAR), using wearable sensors, is a growing area with the potential to provide valuable information on patient mobility to rehabilitation specialists. Smartphones with accelerometer and gyroscope sensors are a convenient, minimally invasive, and low cost approach for mobility monitoring. HAR systems typically pre-process raw signals, segment the signals, and then extract features to be used in a classifier. Feature selection is a crucial step in the process to reduce potentially large data dimensionality and provide viable parameters to enable activity classification. Most HAR systems are customized to an individual research group, including a unique data set, classes, algorithms, and signal features. These data sets are obtained predominantly from able-bodied participants. In this paper, smartphone accelerometer and gyroscope sensor data were collected from populations that can benefit from human activity recognition: able-bodied, elderly, and stroke patients. Data from a consecutive sequence of 41 mobility tasks (18 different tasks) were collected for a total of 44 participants. Seventy-six signal features were calculated and subsets of these features were selected using three filter-based, classifier-independent, feature selection methods (Relief-F, Correlation-based Feature Selection, Fast Correlation Based Filter). The feature subsets were then evaluated using three generic classifiers (Naïve Bayes, Support Vector Machine, j48 Decision Tree). Common features were identified for all three populations, although the stroke population subset had some differences from both able-bodied and elderly sets. Evaluation with the three classifiers showed that the feature subsets produced similar or better accuracies than classification with the entire feature set. Therefore, since these feature subsets are classifier-independent, they should be useful for developing and improving HAR systems across and within populations.
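
    As a rough sketch of the filter-then-classify design described above: scikit-learn ships no Relief-F or FCBF, so an ANOVA F-test filter stands in for the study's three filters here, and the data are synthetic with 76 features echoing the study; nothing below reproduces the actual sensor features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# 76 synthetic "signal features", as in the study; only 8 are informative.
X, y = make_classification(n_samples=300, n_features=76, n_informative=8,
                           n_redundant=4, random_state=0)

# Classifier-independent filter followed by one of the generic classifiers.
filtered = make_pipeline(SelectKBest(f_classif, k=10), GaussianNB())
acc_filtered = cross_val_score(filtered, X, y, cv=5).mean()
acc_full = cross_val_score(GaussianNB(), X, y, cv=5).mean()
```

    Comparing `acc_filtered` with `acc_full` mirrors the paper's finding that a small filter-chosen subset can match or beat the full feature set.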

  12. Gene features selection for three-class disease classification via multiple orthogonal partial least square discriminant analysis and S-plot using microarray data.

    PubMed

    Yang, Mingxing; Li, Xiumin; Li, Zhibin; Ou, Zhimin; Liu, Ming; Liu, Suhuan; Li, Xuejun; Yang, Shuyu

    2013-01-01

    DNA microarray analysis is characterized by obtaining a large number of gene variables from a small number of observations. Cluster analysis is widely used to analyze DNA microarray data to make classification and diagnosis of disease. Because there are so many irrelevant and insignificant genes in a dataset, a feature selection approach must be employed in data analysis. The performance of cluster analysis of this high-throughput data depends on whether the feature selection approach chooses the most relevant genes associated with disease classes. Here we proposed a new method using multiple Orthogonal Partial Least Squares-Discriminant Analysis (mOPLS-DA) models and S-plots to select the most relevant genes to conduct three-class disease classification and prediction. We tested our method using Golub's leukemia microarray data. For three classes with subtypes, we proposed hierarchical orthogonal partial least squares-discriminant analysis (OPLS-DA) models and S-plots to select features for two main classes and their subtypes. For three classes in parallel, we employed three OPLS-DA models and S-plots to choose marker genes for each class. The power of feature selection to classify and predict three-class disease was evaluated using cluster analysis. Further, the general performance of our method was tested using four public datasets and compared with those of four other feature selection methods. The results revealed that our method effectively selected the most relevant features for disease classification and prediction, and its performance was better than that of the other methods.

  13. A feature selection approach towards progressive vector transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Miao, Ru; Song, Jia; Feng, Min

    2017-09-01

    WebGIS has become a popular way of visualizing and sharing geospatial information over the Internet. To improve the efficiency of client applications, a web-based progressive vector transmission approach is proposed: important features should be selected and transferred first, so methods for measuring the importance of features must be considered in the progressive transmission. However, studies on progressive transmission of large-volume vector data have mostly focused on map generalization in the field of cartography, and have rarely addressed the quantitative selection of geographic features. This paper applies information theory to measure the feature importance of vector maps. A measurement model for the amount of information carried by vector features is defined to deal with feature selection; the model involves a geometry factor, a spatial distribution factor, and a thematic attribute factor. Moreover, a real-time transport protocol (RTP)-based progressive transmission method is presented to improve the transmission of vector data. To clearly demonstrate the essential methodology and key techniques, a prototype for web-based progressive vector transmission is presented, and an experiment on progressive selection and transmission of vector features is conducted. The experimental results indicate that our approach clearly improves the performance and end-user experience of delivering and manipulating large vector data over the Internet.
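
    The three factors of the measurement model (geometry, spatial distribution, thematic attributes) could be combined into a single importance score along the following lines; the field names, weights, and formula are invented for illustration and are not the paper's actual model.

```python
import math

def feature_information(area, vertex_count, local_density, thematic_weight,
                        w=(0.4, 0.3, 0.3)):
    """Toy importance score mixing the three factors named in the abstract."""
    geometry = math.log1p(area) + math.log1p(vertex_count)  # geometry factor
    distribution = 1.0 / (1.0 + local_density)              # sparse features stand out
    return w[0] * geometry + w[1] * distribution + w[2] * thematic_weight

def progressive_order(features):
    """Order features so the most informative are transmitted first."""
    return sorted(features, key=lambda f: feature_information(**f), reverse=True)
```

    A server would stream `progressive_order(...)` front-to-back, letting the client render the most important geometry before the long tail arrives.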

  14. A new approach to modeling the influence of image features on fixation selection in scenes

    PubMed Central

    Nuthmann, Antje; Einhäuser, Wolfgang

    2015-01-01

    Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogeneous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239

  15. A comparison of three feature selection methods for object-based classification of sub-decimeter resolution UltraCam-L imagery

    USDA-ARS?s Scientific Manuscript database

    The availability of numerous spectral, spatial, and contextual features with object-based image analysis (OBIA) renders the selection of optimal features a time consuming and subjective process. While several feature selection methods have been used in conjunction with OBIA, a robust comparison of th...

  16. Processing Dynamic Image Sequences from a Moving Sensor.

    DTIC Science & Technology

    1984-02-01

    [Table-of-contents fragments from the report: Roadsign Image Sequence; Roadsign Sequence with Redundant Features; Roadsign Subimage; Industrial Image selected feature error and local search values; Roadsign image error, local search, and redundant feature error values.]

  17. Mammogram classification scheme using 2D-discrete wavelet and local binary pattern for detection of breast cancer

    NASA Astrophysics Data System (ADS)

    Adi Putra, Januar

    2018-04-01

    In this paper, we propose a new mammogram classification scheme to classify breast tissue as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all detailed coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is then performed to reduce the dimensionality of the data and discard irrelevant features: F-test and T-test statistics are computed on the extracted feature dataset to select the relevant features. The best features are used in a Neural Network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. It is observed that the proposed scheme performs better with respect to accuracy, specificity, and sensitivity. Based on experiments, the proposed scheme achieves a best accuracy of 92.71%, while the lowest accuracy obtained is 77.08%.
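
    The texture descriptor named above, the Local Binary Pattern, is easy to sketch. The following is a minimal 8-neighbour LBP over interior pixels with a normalized code histogram; in the paper it would be applied to the 2D-DWT detail coefficients of the ROI, which are not reproduced here.

```python
import numpy as np

def lbp_image(img):
    """8-neighbour local binary pattern codes for the interior pixels."""
    c = img[1:-1, 1:-1]  # centre pixels (border pixels have no full neighbourhood)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image: each centre's neighbour in direction (dy, dx)
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes -- the per-region feature vector."""
    h, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return h / h.sum()
```

    On a real mammogram ROI, one such histogram per wavelet subband would be concatenated into the feature matrix fed to the F-test/T-test selection step.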

  18. A Generalization Strategy for Discrete Area Feature by Using Stroke Grouping and Polarization Transportation Selection

    NASA Astrophysics Data System (ADS)

    Wang, Xiao; Burghardt, Dirk

    2018-05-01

    This paper presents a new strategy for the generalization of discrete area features using a stroke grouping method and polarization transportation selection. The strokes are constructed from a refined proximity graph of the area features, and the refinement is controlled by four constraints to meet different grouping requirements. Area features which belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether the area features are related to strokes or not. Area features belonging to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the operator for the generalization work. The remaining area features, which are not related by strokes, are still distributed randomly and discretely, and selection is chosen to conduct the generalization operation. For the purpose of retaining their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception remains as before.

  19. Incremental Hemodialysis, Residual Kidney Function, and Mortality Risk in Incident Dialysis Patients: A Cohort Study

    PubMed Central

    Obi, Yoshitsugu; Streja, Elani; Rhee, Connie M.; Ravel, Vanessa; Amin, Alpesh N.; Cupisti, Adamasco; Chen, Jing; Mathew, Anna T.; Kovesdy, Csaba P.; Mehrotra, Rajnish; Kalantar-Zadeh, Kamyar

    2016-01-01

    Background: Maintenance hemodialysis is typically prescribed thrice-weekly irrespective of patient's residual kidney function (RKF). We hypothesized that a less frequent schedule at hemodialysis initiation is associated with greater preservation of RKF without compromising survival among patients with substantial RKF. Study Design: A longitudinal cohort. Setting & Participants: 23,645 patients who initiated maintenance hemodialysis in a large dialysis organization in the United States (1/2007–12/2010), who had available RKF data during the first 91 days (or quarter) of dialysis, and who survived the first year. Predictor: Incremental (routine twice-weekly for >6 continuous weeks during the first 91 days upon transition to dialysis) versus conventional (thrice-weekly) hemodialysis regimens during the same time. Outcomes: Changes in renal urea clearance (KRU) and urine volume (UV) during one year after the first quarter, and survival after the first year. Results: Among 23,645 included patients, 51% had substantial KRU (≥3.0 mL/min/1.73m2) at baseline. Compared to 8,068 patients with a conventional hemodialysis regimen matched on baseline KRU, UV, age, gender, diabetes, and central venous catheter use, 351 patients with the incremental regimen exhibited 16% (95% CI, 5%-28%) and 15% (95% CI, 2%-30%) better-preserved KRU and UV at the second quarter, respectively, which persisted across the following quarters. The incremental regimen showed higher mortality risk in patients with inadequate baseline KRU (≤3.0 mL/min/1.73m2; HR, 1.61; 95% CI, 1.07-2.44), but not in those with higher baseline KRU (HR, 0.99; 95% CI, 0.76-1.28). Results were similar in subgroups defined by a baseline UV of 600 mL/day. Limitations: Potential selection bias and wide CIs.
Conclusions: Among incident hemodialysis patients with substantial RKF, incremental hemodialysis may be a safe treatment regimen and associated with greater preservation of RKF, while higher mortality is observed after a year in those with the lowest RKF. Clinical trials are needed to examine the safety and effectiveness of twice-weekly hemodialysis. PMID:26867814

  20. An Ant Colony Optimization Based Feature Selection for Web Page Classification

    PubMed Central

    2014-01-01

    The increased popularity of the web has led to a huge amount of information being added to it, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features, such as HTML/XML tags, URLs, hyperlinks, and text contents, that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, to improve both the runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using ACO for feature selection improves both the accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features than the well-known information gain and chi square feature selection methods. PMID:25136678
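
    A bare-bones version of ACO-driven feature selection can be sketched as follows; the fitness function (nearest-class-centroid training accuracy), the pheromone update rule, and all parameters are simplifications chosen for illustration, not the study's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_score(X, y, mask):
    """Toy wrapper fitness: nearest-class-centroid training accuracy."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    classes = np.unique(y)
    cents = np.array([Xs[y == c].mean(axis=0) for c in classes])
    d = ((Xs[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
    return float((classes[d.argmin(axis=1)] == y).mean())

def aco_select(X, y, n_ants=10, n_iter=15, k=3, rho=0.2):
    """Each ant samples a k-feature subset with probability proportional to
    per-feature pheromone; the best subset found reinforces its features."""
    n = X.shape[1]
    tau = np.ones(n)                          # pheromone trail per feature
    best_mask, best = None, -1.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            idx = rng.choice(n, size=k, replace=False, p=tau / tau.sum())
            mask = np.zeros(n, dtype=bool)
            mask[idx] = True
            s = subset_score(X, y, mask)
            if s > best:
                best, best_mask = s, mask
        tau *= (1.0 - rho)                    # evaporation
        tau[best_mask] += best                # reinforce best subset so far
    return best_mask
```

    For web pages, `X` would hold tag/URL/text features and the fitness would be a held-out C4.5 or naive Bayes score rather than this training-set proxy.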

  1. Hypergraph Based Feature Selection Technique for Medical Diagnosis.

    PubMed

    Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar

    2016-11-01

    The impact of the internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the information contained in these multidimensional datasets plays a significant role in exploiting the complete benefit they provide. The presence of a large number of features in high dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection techniques are commonly used to build robust machine learning models by selecting a subset of relevant features that projects the maximal information content of the original dataset. In this paper, a novel Rough Set based K-Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K-Helly property of hypergraph representation, is designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository prove the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy, and time complexity. The performance of RSKHT was validated using the WEKA tool, which shows that RSKHT is computationally attractive and flexible over massive datasets.

  2. Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection

    PubMed Central

    Moomen, Abdelniser; Ali, Abdulbaset; Ramahi, Omar M.

    2016-01-01

    Nondestructive Testing (NDT) assessment of a material's health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that works on both metallics and dielectrics, such as microwave testing. Reducing the complexity and expense associated with current diagnostic practices of microwave NDT requires an effective and intelligent approach based on the feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency range of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is to reduce the number of sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the most important ones can be identified, and only the most influential features (frequencies) need be considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, Relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models, namely Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing the feature selection algorithms. PMID:27104533
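
    The core idea, treating sweep frequencies as features and keeping only the top-ranked ones, can be sketched on synthetic data; here scikit-learn's mutual information estimator approximates the information-gain ranking, and the "resonance shifts" injected at two frequencies are entirely artificial.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_freqs = 40                                  # e.g. |S21| sampled at 40 sweep points
y = rng.integers(0, 2, size=150)              # crack / no-crack labels
X = rng.normal(size=(150, n_freqs))           # baseline measurement noise
X[:, 5] += 2.0 * y                            # artificial crack-induced shift
X[:, 17] -= 1.5 * y                           # second artificial shift

# Rank frequencies by estimated mutual information with the class label.
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:5]                # frequencies worth keeping in hardware
```

    In the paper's setting, only the `top` frequencies would need to be swept by the deployed equipment, cutting acquisition time.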

  3. Stabilizing l1-norm prediction models by supervised feature grouping.

    PubMed

    Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha

    2016-02-01

    Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since a lot of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. However, in the presence of correlated features, these methods select features that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
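
    This is not the authors' joint optimization, but the underlying intuition can be shown in two separate steps: greedily group features whose pairwise correlation exceeds a threshold, then fit the l1 model on one averaged representative per group. The threshold, data, and function names are all illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def correlation_groups(X, threshold=0.9):
    """Greedily cluster columns whose absolute correlation exceeds threshold."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    unassigned, groups = list(range(X.shape[1])), []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed] + [j for j in unassigned if C[seed, j] >= threshold]
        unassigned = [j for j in unassigned if j not in group]
        groups.append(group)
    return groups

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
# Columns 0 and 1 are near-duplicates -- exactly the case where plain Lasso
# picks one of them arbitrarily from run to run.
X = np.column_stack([base[:, 0],
                     base[:, 0] + 0.05 * rng.normal(size=200),
                     base[:, 1], base[:, 2]])
y = base[:, 0] + 0.1 * rng.normal(size=200)

groups = correlation_groups(X)
Z = np.column_stack([X[:, g].mean(axis=1) for g in groups])  # one column per group
coef = Lasso(alpha=0.05).fit(Z, y).coef_
```

    The l1 weight now attaches to the whole correlated group, so the selected "feature" no longer flips between near-duplicates under resampling.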

  4. On the use of feature selection to improve the detection of sea oil spills in SAR images

    NASA Astrophysics Data System (ADS)

    Mera, David; Bolon-Canedo, Veronica; Cotos, J. M.; Alonso-Betanzos, Amparo

    2017-03-01

    Fast and effective oil spill detection systems are crucial to ensure a proper response to environmental emergencies caused by hydrocarbon pollution on the ocean's surface. Typically, these systems uncover not only oil spills, but also a high number of look-alikes. Feature extraction is a critical and computationally intensive phase in which each detected dark spot is independently examined. Traditionally, detection systems use an arbitrary set of features to discriminate between oil spills and look-alike phenomena. However, Feature Selection (FS) methods based on Machine Learning (ML) have proved very useful in real domains for enhancing the generalization capabilities of classifiers while discarding irrelevant features. In this work, we present a generic and systematic approach, based on FS methods, for choosing a concise and relevant set of features to improve oil spill detection systems. We compared five FS methods: Correlation-based feature selection (CFS), Consistency-based filter, Information Gain, ReliefF, and Recursive Feature Elimination for Support Vector Machine (SVM-RFE). They were applied to a 141-input vector composed of features from a collection of outstanding studies. Selected features were validated via a Support Vector Machine (SVM) classifier and the results were compared with previous works. Test experiments revealed that the classifier trained with the 6-input feature vector proposed by SVM-RFE achieved the best accuracy and Cohen's kappa coefficient (87.1% and 74.06%, respectively). This is a smaller feature combination with similar or even better classification accuracy than previous works. This finding makes it possible to speed up the feature extraction phase without reducing classifier accuracy.
Experiments also confirmed the significance of the geometrical features since 75.0% of the different features selected by the applied FS methods as well as 66.67% of the proposed 6-input feature vector belong to this category.
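
    SVM-RFE itself is available off the shelf; a minimal sketch on synthetic data follows, with the 141- and 6-feature counts mirroring the abstract but nothing else of the study reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic stand-in for the 141-feature dark-spot descriptors.
X, y = make_classification(n_samples=300, n_features=141, n_informative=6,
                           random_state=0)

# Recursively drop the 5 lowest-|weight| features per round until 6 remain.
rfe = RFE(LinearSVC(dual=False, max_iter=5000),
          n_features_to_select=6, step=5)
rfe.fit(X, y)
selected = [i for i, keep in enumerate(rfe.support_) if keep]
```

    In production, only the `selected` descriptors would be computed for each dark spot, which is what makes the extraction phase faster.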

  5. [The key parameters of design research and analysis of the Chinese reading visual acuity chart].

    PubMed

    Wang, Chen-xiao; Liu, Zhi-hui; Gao, Ji-tuo; Guo, Ying-xuan; He, Ji-cang; Qu, Jia; Lü, Fan

    2013-06-01

    Reading is a visual function that humans use to understand environmental events from written materials. This study investigated the feasibility of a reading visual acuity chart for assessing reading ability by analyzing the key factors involved in the chart's design. The reading level was set at grade 3 of primary school, with Song as the font and 30 characters per sentence. Each sentence consisted of 27 commonly-used Chinese characters (9 characters between any two punctuation marks) and 3 punctuation marks. There were no contextual clues among the 80 sentences selected. The characters had 13 different sizes with an increment of 0.1 log unit (a ratio of 1.2589), and 2.5 pt was determined as the critical threshold. The readability test for the visual targets proceeded as follows: (1) 29 candidates with uncorrected or corrected visual acuity (VA) of at least 1.0 were selected to read the 80 selected sentences at a character size of 2.5 pt and a distance of 40 cm; (2) the reading time and the number of characters read incorrectly were recorded; (3) 39 sentences were selected as visual targets based on reading speed, effective reading position, and total number of character strokes; (4) the 39 selected sentences were then randomly divided into 3 groups, with no significant difference among the groups in the 3 factors listed in (3) by paired t-test. The resulting reading visual chart was at the level of grade 3 of primary school, with a total stroke number of 165-210 (mean 185 ± 10), 13 font sizes at a 0.1 log unit increment, the Song font, and 2.5 pt as the critical threshold. All candidates achieved 100% correct reading under the 2.5 pt condition, with an effective reading speed of 120.65-162 wpm (mean 142.93 ± 11.80) and an effective reading position of 36.03-61.48 (mean 48.85 ± 6.81).
The reading test for the 3 groups of sentences showed effective reading speeds of (142.49 ± 12.14) wpm, (142.86 ± 12.55) wpm and (143.44 ± 11.63) wpm, respectively (t1-2 = -0.899, t2-3 = -1.295, t1-3 = -1.435). The reading positions were 48.55 ± 6.69, 48.99 ± 7.49 and 49.00 ± 6.76, respectively (t1-2 = -1.019, t2-3 = -0.019, t1-3 = -0.816). The total numbers of character strokes were 185.54 ± 7.55, 187.69 ± 13.76 and 182.62 ± 8.17, respectively (t1-2 = 0.191, t2-3 = 1.385, t1-3 = 1.686). A practical design of a Chinese reading visual chart should consider character size, increment, and legibility in the selection of reading sentences. Reading visual acuity, critical threshold, and effective reading speed can be used to express reading visual function.

  6. Motorized control for mirror mount apparatus

    DOEpatents

    Cutburth, Ronald W.

    1989-01-01

    A motorized control and automatic braking system for adjusting mirror mount apparatus is disclosed. The motor control includes a planetary gear arrangement to provide improved pitch adjustment capability while permitting a small packaged design. The motor control for mirror mount adjustment is suitable for laser beam propagation applications. The brake is a system of constant contact, floating detents which engage the planetary gear at selected between-teeth increments to stop rotation instantaneously when the drive motor stops.

  7. Improved military air traffic controller selection methods as measured by subsequent training performance.

    PubMed

    Carretta, Thomas R; King, Raymond E

    2008-01-01

    Over the past decade, the U.S. military has conducted several studies to evaluate determinants of enlisted air traffic controller (ATC) performance. Research has focused on validation of the Armed Services Vocational Aptitude Battery (ASVAB) and has shown it to be a good predictor of training performance. Despite this, enlisted ATC training and post-training attrition is higher than desirable, prompting interest in alternate selection methods to augment current procedures. The current study examined the utility of the FAA Air Traffic Selection and Training (AT-SAT) battery for incrementing the predictiveness of the ASVAB versus several enlisted ATC training criteria. Subjects were 448 USAF enlisted ATC students who were administered the ASVAB and FAA AT-SAT subtests and subsequently graduated or were eliminated from apprentice-level training. Training criteria were a dichotomous graduation/elimination training score, average ATC fundamentals course score, and FAA certified tower operator test score. Results confirmed the predictive validity of the ASVAB and showed that one of the AT-SAT subtests resembling a low-fidelity ATC work sample significantly improved prediction of training performance beyond the ASVAB alone. Results suggested training attrition could be reduced by raising the current ASVAB minimum qualifying score. However, this approach may make it difficult to identify sufficient numbers of trainees and lead to adverse impact. Although the AT-SAT ATC work sample subtest showed incremental validity to the ASVAB, its length (95 min) may be problematic in operational testing. Recommendations are made for additional studies to address issues affecting operational implementation.

  8. Sex differences in gait utilization and energy metabolism during terrestrial locomotion in two varieties of chicken (Gallus gallus domesticus) selected for different body size

    PubMed Central

    Rose, Kayleigh A.; Nudds, Robert L.; Butler, Patrick J.; Codd, Jonathan R.

    2015-01-01

    ABSTRACT In leghorn chickens (Gallus gallus domesticus) of standard breed (large) and bantam (small) varieties, artificial selection has led to females being permanently gravid and sexual selection has led to male-biased size dimorphism. Using respirometry, videography and morphological measurements, sex and variety differences in metabolic cost of locomotion, gait utilisation and maximum sustainable speed (Umax) were investigated during treadmill locomotion. Males were capable of greater Umax than females and used a grounded running gait at high speeds, which was only observed in a few bantam females and no standard breed females. Body mass accounted for variation in the incremental increase in metabolic power with speed between the varieties, but not the sexes. For the first time in an avian species, a greater mass-specific incremental cost of locomotion, and minimum measured cost of transport (CoTmin) were found in males than in females. Furthermore, in both varieties, the female CoTmin was lower than predicted from interspecific allometry. Even when compared at equivalent speeds (using Froude number), CoT decreased more rapidly in females than in males. These trends were common to both varieties despite a more upright limb in females than in males in the standard breed, and a lack of dimorphism in posture in the bantam variety. Females may possess compensatory adaptations for metabolic efficiency during gravidity (e.g. in muscle specialization/posture/kinematics). Furthermore, the elevated power at faster speeds in males may be linked to their muscle properties being suited to inter-male aggressive combat. PMID:26405047

  9. Selecting Feature Subsets Based on SVM-RFE and the Overlapping Ratio with Applications in Bioinformatics.

    PubMed

    Lin, Xiaohui; Li, Chao; Zhang, Yanhui; Su, Benzhe; Fan, Meng; Wei, Hai

    2017-12-26

    Feature selection is an important topic in bioinformatics. Defining informative features from complex high dimensional biological data is critical in disease study, drug development, etc. Support vector machine-recursive feature elimination (SVM-RFE) is an efficient feature selection technique that has shown its power in many applications. It ranks the features according to the recursive feature deletion sequence based on SVM. In this study, we propose a method, SVM-RFE-OA, which combines the classification accuracy rate and the average overlapping ratio of the samples to determine the number of features to be selected from the feature rank of SVM-RFE. Meanwhile, to measure the feature weights more accurately, we propose a modified SVM-RFE-OA (M-SVM-RFE-OA) algorithm that temporarily screens out the samples lying in a heavy overlapping area in each iteration. Experiments on eight public biological datasets show that the discriminative ability of the feature subset could be measured more accurately by combining the classification accuracy rate with the average overlapping degree of the samples compared with using the classification accuracy rate alone, and shielding the samples in the overlapping area made the calculation of the feature weights more stable and accurate. The methods proposed in this study can also be used with other RFE techniques to define potential biomarkers from big biological data.
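
    The recursive elimination loop at the heart of SVM-RFE can be sketched as follows. This is a minimal stand-alone sketch, not the paper's SVM-RFE-OA: a simple class-mean difference stands in for the magnitude of the SVM weight vector so the example needs no external libraries.

```python
# Sketch of the recursive feature elimination (RFE) loop behind SVM-RFE.
# A real implementation refits an SVM at every step and scores features by
# |w|; here a class-mean difference is a dependency-free stand-in.

def feature_scores(X, y, features):
    """Stand-in for |w|: absolute difference of per-class feature means."""
    scores = {}
    for f in features:
        pos = [row[f] for row, label in zip(X, y) if label == 1]
        neg = [row[f] for row, label in zip(X, y) if label == 0]
        scores[f] = abs(sum(pos) / len(pos) - sum(neg) / len(neg))
    return scores

def rfe_rank(X, y):
    """Return features ordered from least to most informative."""
    remaining = list(range(len(X[0])))
    elimination_order = []
    while remaining:
        scores = feature_scores(X, y, remaining)
        worst = min(remaining, key=lambda f: scores[f])  # smallest score goes first
        elimination_order.append(worst)
        remaining.remove(worst)
    return elimination_order

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[1.0, 0.2], [0.9, 0.8], [0.1, 0.3], [0.0, 0.7]]
y = [1, 1, 0, 0]
print(rfe_rank(X, y))  # → [1, 0]: the noise feature is eliminated first
```

    The elimination order read backwards is the feature ranking; methods like SVM-RFE-OA then choose a cut-off point along that ranking.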

  10. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative memory (binary) neural network which is highly amenable to parallel and distributed processing and fits with the Hadoop paradigm. There are many feature selectors described in the literature which all have various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing. Each feature selector can be divided into subtasks and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel) allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation and the overall processing can also be greatly reduced by only processing the common aspects of the feature selectors once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector and the actual features to select to be identified for large and high dimensional data sets through exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
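
    The decomposition the paper exploits — each feature scored as an independent subtask, and several feature selectors run side by side for comparison — can be sketched with a thread pool standing in for Hadoop YARN task distribution. Both filter scores below (class-mean difference and value range) are toy stand-ins, not the paper's five selectors.

```python
# Per-feature scoring is embarrassingly parallel: every (selector, feature)
# pair is an independent subtask. A ThreadPoolExecutor stands in here for
# distributing those subtasks across a Hadoop cluster.
from concurrent.futures import ThreadPoolExecutor

def mean_diff(column, labels):
    """Toy filter score 1: absolute difference of per-class means."""
    pos = [v for v, l in zip(column, labels) if l == 1]
    neg = [v for v, l in zip(column, labels) if l == 0]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

def value_range(column, labels):
    """Toy filter score 2: spread of the feature's values."""
    return max(column) - min(column)

def run_selectors(X, y, selectors, workers=4):
    """Score every feature under every selector, all subtasks in parallel."""
    columns = [[row[f] for row in X] for f in range(len(X[0]))]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {
            name: [pool.submit(fn, col, y) for col in columns]
            for name, fn in selectors.items()
        }
        return {name: [f.result() for f in fs] for name, fs in futures.items()}

X = [[1.0, 0.2], [0.9, 0.8], [0.1, 0.3], [0.0, 0.7]]
y = [1, 1, 0, 0]
scores = run_selectors(X, y, {"mean_diff": mean_diff, "range": value_range})
print(scores)
```

    Running several selectors over the same column representation mirrors the paper's point that a single data representation can serve all selectors, with shared work done once.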

  11. Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.

    PubMed

    Saucier, Gerard

    2009-10-01

    Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between wideband 6 and narrowband 6 factors indicates that they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.

  12. Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.

    PubMed

    Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing

    2009-03-06

    Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection frame, this paper presents a computational system to predict the PPIs (protein-protein interactions) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poor-performed and/or redundant features, resulting in 103 remaining features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall accurate prediction rate of 76.18%, evaluated by 10-fold cross-validation test, which is 1.46% higher than using the initial 114 features and is 6.51% higher than the 20 features, coded by amino acid compositions. The PPIs predictor, developed for this research, is available for public use at http://chemdata.shu.edu.cn/ppi.
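
    The wrapper stage of a filter-plus-wrapper pipeline like mRMR-KNNs-wrapper can be sketched as follows: candidate feature subsets (assumed to come from a filter ranking such as mRMR) are compared by leave-one-out accuracy of a k-nearest-neighbours classifier. The data and subsets here are illustrative, not the paper's 114 PseAA features.

```python
# Wrapper evaluation of a feature subset: keep only the chosen columns,
# then score the subset by leave-one-out accuracy of a KNN classifier.
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    dists = sorted(
        (math.dist(row, query), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def loo_accuracy(X, y, subset, k=3):
    """Leave-one-out accuracy using only the features in `subset`."""
    Xs = [[row[f] for f in subset] for row in X]
    hits = 0
    for i in range(len(Xs)):
        train_X = Xs[:i] + Xs[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += knn_predict(train_X, train_y, Xs[i], k) == y[i]
    return hits / len(Xs)

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 0.7], [0.2, 0.1], [0.1, 0.9], [1.0, 0.2], [0.9, 0.8], [1.1, 0.0]]
y = [0, 0, 0, 1, 1, 1]
print(loo_accuracy(X, y, [0]), loo_accuracy(X, y, [1]))
```

    In the full pipeline, subsets are grown along the filter ranking and the subset with the best wrapper score is retained.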

  13. Comparison of Naive Bayes and Decision Tree on Feature Selection Using Genetic Algorithm for Classification Problem

    NASA Astrophysics Data System (ADS)

    Rahmadani, S.; Dongoran, A.; Zarlis, M.; Zakarias

    2018-03-01

    This paper discusses the problem of feature selection using genetic algorithms on a dataset for classification problems. The classification models used are the decision tree (DT) and naive Bayes. We examine how the naive Bayes and decision tree models handle the classification problem when the dataset's features are selected using a GA, and then compare the performance of the two models to determine whether accuracy improves. The results show an increase in accuracy when feature selection is performed with the GA. The proposed models are referred to as GADT (GA-Decision Tree) and GANB (GA-Naive Bayes). The datasets tested in this paper are taken from the UCI Machine Learning Repository.
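
    A GA for feature selection evolves a population of feature bitmasks. The sketch below assumes a caller-supplied fitness function — in the paper, that would be the cross-validated accuracy of a decision tree or naive Bayes trained on the selected features; here a hypothetical toy fitness rewards keeping two designated "informative" features and penalises extras.

```python
# Minimal genetic algorithm over feature bitmasks: truncation selection,
# one-point crossover, and point mutation. The fitness function is pluggable.
import random

def evolve(n_features, fitness, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_features)           # point mutation, prob. 0.1
            child[i] ^= rng.random() < 0.1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: stand-in for cross-validated classifier accuracy.
informative = {0, 3}
def fitness(mask):
    kept = {i for i, bit in enumerate(mask) if bit}
    return len(kept & informative) - 0.1 * len(kept - informative)

best = evolve(8, fitness)
print(best)
```

    Swapping the toy fitness for "train DT or NB on the masked columns, return validation accuracy" yields the GADT/GANB scheme the paper describes.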

  14. Fuzzy feature selection based on interval type-2 fuzzy sets

    NASA Astrophysics Data System (ADS)

    Cherif, Sahar; Baklouti, Nesrine; Alimi, Adel; Snasel, Vaclav

    2017-03-01

    When dealing with real-world data, noise, complexity, dimensionality, uncertainty and irrelevance can lead to low performance and insignificant judgment. Fuzzy logic is a powerful tool for handling conflicting attributes which can have similar effects and close meanings. In this paper, an interval type-2 fuzzy feature selection method is presented as a new approach for removing irrelevant features and reducing complexity. We demonstrate how feature selection can be combined with interval type-2 fuzzy logic to keep significant features and hence reduce time complexity. The proposed method is compared with several other approaches. The results show that the number of selected attributes is substantially reduced relative to the original feature set.

  15. Neural evidence reveals the rapid effects of reward history on selective attention.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-05-05

    Selective attention is often framed as being primarily driven by two factors: task-relevance and physical salience. However, factors like selection and reward history, which are neither currently task-relevant nor physically salient, can reliably and persistently influence visual selective attention. The current study investigated the nature of the persistent effects of irrelevant, physically non-salient, reward-associated features. These features affected one of the earliest reliable neural indicators of visual selective attention in humans, the P1 event-related potential, measured one week after the reward associations were learned. However, the effects of reward history were moderated by current task demands. The modulation of visually evoked activity supports the hypothesis that reward history influences the innate salience of reward associated features, such that even when no longer relevant, nor physically salient, these features have a rapid, persistent, and robust effect on early visual selective attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Systematic interrogation of diverse Omic data reveals interpretable, robust, and generalizable transcriptomic features of clinically successful therapeutic targets.

    PubMed

    Rouillard, Andrew D; Hurle, Mark R; Agarwal, Pankaj

    2018-05-01

    Target selection is the first and pivotal step in drug discovery. An incorrect choice may not manifest itself for many years after hundreds of millions of research dollars have been spent. We collected a set of 332 targets that succeeded or failed in phase III clinical trials, and explored whether Omic features describing the target genes could predict clinical success. We obtained features from the recently published comprehensive resource: Harmonizome. Nineteen features appeared to be significantly correlated with phase III clinical trial outcomes, but only 4 passed validation schemes that used bootstrapping or modified permutation tests to assess feature robustness and generalizability while accounting for target class selection bias. We also used classifiers to perform multivariate feature selection and found that classifiers with a single feature performed as well in cross-validation as classifiers with more features (AUROC = 0.57 and AUPR = 0.81). The two predominantly selected features were mean mRNA expression across tissues and standard deviation of expression across tissues, where successful targets tended to have lower mean expression and higher expression variance than failed targets. This finding supports the conventional wisdom that it is favorable for a target to be present in the tissue(s) affected by a disease and absent from other tissues. Overall, our results suggest that it is feasible to construct a model integrating interpretable target features to inform target selection. We anticipate deeper insights and better models in the future, as researchers can reuse the data we have provided to improve methods for handling sample biases and learn more informative features. Code, documentation, and data for this study have been deposited on GitHub at https://github.com/arouillard/omic-features-successful-targets.

  17. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating the difference, over the feature space, of the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with notable advantage when the dataset is sparse.
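
    The paper scores a feature by how much the SVR's predictive density shifts when the feature is excluded. A simplified point-prediction analogue of that idea (not the paper's density-based measure): ablate the feature to its mean and average the resulting change in the model's output. The linear `predict` below is a hypothetical stand-in for a trained SVR.

```python
# Mean-ablation importance: how much does the model's output move, on
# average, when one feature is replaced by its dataset mean?

def importance(predict, X, f):
    """Mean |prediction change| when feature f is ablated to its mean."""
    mean_f = sum(row[f] for row in X) / len(X)
    total = 0.0
    for row in X:
        ablated = list(row)
        ablated[f] = mean_f
        total += abs(predict(row) - predict(ablated))
    return total / len(X)

def predict(row):
    """Hypothetical fitted model standing in for the trained SVR."""
    return 2.0 * row[0] + 0.1 * row[1]

X = [[0.0, 5.0], [1.0, -5.0], [2.0, 5.0], [3.0, -5.0]]
print(importance(predict, X, 0), importance(predict, X, 1))
```

    Feature 0, with the larger coefficient, scores far higher than feature 1, so it would be ranked as more important by the wrapper.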

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. The feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of the feature space was reduced to 12. Classification of Chinese liquors was performed by using a back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier, and the classification rates were 98.75% and 100%, respectively.

  19. Focus drive mechanism for the IUE scientific instrument

    NASA Technical Reports Server (NTRS)

    Devine, E. J.; Dennis, T. B., Jr.

    1977-01-01

    A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small (0.0004 in.) and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low expansion quartz mirror with minimum distortion are also given.

  20. COMPACT CASCADE IMPACTOR

    DOEpatents

    Lippmann, M.

    1964-04-01

    A cascade particle impactor capable of collecting particles and distributing them according to size is described. In addition the device is capable of collecting on a pair of slides a series of different samples so that less time is required for the changing of slides. Other features of the device are its compactness and its ruggedness, making it useful under field conditions. Essentially the unit consists of a main body with a series of transverse jets discharging on a pair of parallel, spaced glass plates. The plates are capable of being moved incrementally in steps to obtain the multiple samples. (AEC)
