Sample records for randomly selected based

  1. COMPARISON OF RANDOM AND SYSTEMATIC SITE SELECTION FOR ASSESSING ATTAINMENT OF AQUATIC LIFE USES IN SEGMENTS OF THE OHIO RIVER

    EPA Science Inventory

    This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...

  2. Foundational errors in the Neutral and Nearly-Neutral theories of evolution in relation to the Synthetic Theory: is a new evolutionary paradigm necessary?

    PubMed

    Valenzuela, Carlos Y

    2013-01-01

    The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is the genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that does not contribute either to evolution or polymorphisms. Purifying selection is insufficient to account for this evolutionary feature and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone in a site make life organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens the biotic organization. Drift cannot drive evolution. In a site, the mutation rates among bases and selection coefficients determine the resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because drift effect is dependent upon N. Also, chromosome size and shape as well as protein size are far from random.

  3. Genetic improvement in mastitis resistance: comparison of selection criteria from cross-sectional and random regression sire models for somatic cell score.

    PubMed

    Odegård, J; Klemetsdal, G; Heringstad, B

    2005-04-01

    Several selection criteria for reducing incidence of mastitis were developed from a random regression sire model for test-day somatic cell score (SCS). For comparison, sire transmitting abilities were also predicted based on a cross-sectional model for lactation mean SCS. Only first-crop daughters were used in genetic evaluation of SCS, and the different selection criteria were compared based on their correlation with incidence of clinical mastitis in second-crop daughters (measured as mean daughter deviations). Selection criteria were predicted based on both complete and reduced first-crop daughter groups (261 or 65 daughters per sire, respectively). For complete daughter groups, predicted transmitting abilities at around 30 d in milk showed the best predictive ability for incidence of clinical mastitis, closely followed by average predicted transmitting abilities over the entire lactation. Both of these criteria were derived from the random regression model. These selection criteria improved accuracy of selection by approximately 2% relative to a cross-sectional model. However, for reduced daughter groups, the cross-sectional model yielded increased predictive ability compared with the selection criteria based on the random regression model. This result may be explained by the cross-sectional model being more robust, i.e., less sensitive to precision of (co)variance components estimates and effects of data structure.

  4. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  5. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
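
    The row-selection step can be pictured with a short numpy sketch: candidate rows of a random projection matrix are scored by a Fisher-style ratio (squared gap between the enrolled user's and impostors' projected means over the pooled within-class variance) and only the most discriminative rows are kept for that user. The data, dimensions and the exact form of the criterion are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative feature vectors: one enrolled user versus impostors.
        d, n_user, n_imp = 64, 40, 200
        user = rng.normal(0.8, 1.0, size=(n_user, d))
        impostors = rng.normal(0.0, 1.0, size=(n_imp, d))

        # Full random projection matrix; each row is one candidate projection.
        P = rng.normal(size=(128, d))
        proj_u, proj_i = user @ P.T, impostors @ P.T

        # Fisher-style ratio per row: squared mean gap over pooled within-class variance.
        fisher = (proj_u.mean(0) - proj_i.mean(0)) ** 2 / (proj_u.var(0) + proj_i.var(0) + 1e-12)

        # User-dependent selection: keep only the most discriminative rows.
        selected = np.argsort(fisher)[::-1][:32]
        hash_input = user @ P[selected].T   # values that would then be quantized (e.g. by a GMM)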

  6. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  7. High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes

    NASA Astrophysics Data System (ADS)

    Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew

    Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry has precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes were investigated for the isolation of n-butanol from dilute aqueous solution (model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.

  8. Does time-lapse imaging have favorable results for embryo incubation and selection compared with conventional methods in clinical in vitro fertilization? A meta-analysis and systematic review of randomized controlled trials.

    PubMed

    Chen, Minghao; Wei, Shiyou; Hu, Junyan; Yuan, Jing; Liu, Fenghua

    2017-01-01

    The present study aimed to undertake a review of available evidence assessing whether time-lapse imaging (TLI) has favorable outcomes for embryo incubation and selection compared with conventional methods in clinical in vitro fertilization (IVF). PubMed, EMBASE, the Cochrane Library and ClinicalTrials.gov were searched up to February 2017 for randomized controlled trials (RCTs) comparing TLI with conventional methods. Studies that randomized either women or oocytes were included. For studies that randomized women, the primary outcomes were live birth and ongoing pregnancy and the secondary outcomes were clinical pregnancy and miscarriage; for studies that randomized oocytes, the primary outcome was blastocyst rate and the secondary outcome was good-quality embryo on Day 2/3. Subgroup analysis was conducted based on differences in incubation and embryo selection between groups. Ten RCTs were included, four randomizing oocytes and six randomizing women. For the oocyte-based review, the pooled analysis observed no significant difference between the TLI group and the control group for blastocyst rate [relative risk (RR) 1.08, 95% CI 0.94-1.25, I2 = 0%, two studies, including 1154 embryos]. The quality of evidence was moderate for all outcomes in the oocyte-based review. For the woman-based review, only one study provided live birth rate (RR 1.23, 95% CI 1.06-1.44, I2 N/A, one study, including 842 women), and the pooled result showed no significant difference in ongoing pregnancy rate (RR 1.04, 95% CI 0.80-1.36, I2 = 59%, four studies, including 1403 women) between the two groups. The quality of the evidence was low or very low for all outcomes in the woman-based review. Currently there is insufficient evidence that TLI is superior to conventional methods for human embryo incubation and selection. In consideration of the limitations and flaws of the included studies, more well-designed RCTs are still needed to comprehensively evaluate the effectiveness of clinical TLI use.
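
    The pooled relative risks quoted above follow standard fixed-effect (inverse-variance) meta-analysis on the log-RR scale; a minimal sketch is below. The per-study event counts are invented for illustration and are not taken from the included trials.

        import numpy as np

        # Hypothetical per-study counts: (events_TLI, total_TLI, events_control, total_control)
        studies = [(120, 300, 100, 295), (80, 220, 75, 230), (60, 180, 52, 175)]

        log_rr, weights = [], []
        for a, n1, c, n2 in studies:
            log_rr.append(np.log((a / n1) / (c / n2)))
            se = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)   # standard error of log RR
            weights.append(1 / se ** 2)                     # inverse-variance weight

        log_rr, weights = np.array(log_rr), np.array(weights)
        pooled = np.sum(weights * log_rr) / np.sum(weights)
        se_pooled = 1 / np.sqrt(np.sum(weights))
        lo, hi = np.exp(pooled - 1.96 * se_pooled), np.exp(pooled + 1.96 * se_pooled)
        print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI {lo:.2f}-{hi:.2f}")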

  9. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
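
    A toy Monte-Carlo illustration of the attenuation mechanism described (not the INTERPHONE simulation code): non-differential random error is added to the reported exposure before the odds ratio is recomputed from a 2x2 table, pulling the estimate toward the null. All parameters are assumptions chosen only to make the effect visible.

        import numpy as np

        rng = np.random.default_rng(1)
        n, true_or = 200_000, 1.5

        # "True" exposure (e.g. log call-time); disease risk follows a logistic model.
        x = rng.normal(size=n)
        heavy = x > np.median(x)
        p = 1 / (1 + np.exp(-(-5 + np.log(true_or) * heavy)))
        case = rng.random(n) < p

        # Reported exposure = true exposure + non-differential random recall error.
        x_rep = x + rng.normal(scale=1.0, size=n)
        heavy_rep = x_rep > np.median(x_rep)

        def odds_ratio(exposed, case):
            a, b = np.sum(exposed & case), np.sum(exposed & ~case)
            c, d = np.sum(~exposed & case), np.sum(~exposed & ~case)
            return (a * d) / (b * c)

        print("OR from true exposure  :", round(odds_ratio(heavy, case), 2))
        print("OR with recall error   :", round(odds_ratio(heavy_rep, case), 2))   # attenuated toward 1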

  10. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
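
    The proper orthogonal decomposition step underlying the first basis-selection procedure can be sketched in a few lines: POD modes are the left singular vectors of a centered response snapshot matrix, and the basis size is chosen to capture a target fraction of the signal energy. The snapshot data below is synthetic and the 99.9% energy threshold is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic snapshot matrix: n_dof physical DOFs sampled at n_t time steps.
        n_dof, n_t = 300, 1000
        t = np.linspace(0, 10, n_t)
        modes_true = rng.normal(size=(n_dof, 3))
        q = np.vstack([np.sin(2 * t), np.sin(5 * t + 1), 0.1 * rng.normal(size=n_t)])
        X = modes_true @ q                                   # snapshots, shape (n_dof, n_t)

        # POD: left singular vectors ordered by energy (squared singular values).
        U, s, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(energy, 0.999) + 1)          # smallest basis capturing 99.9% energy

        Phi = U[:, :k]                                       # reduced modal basis
        X_reduced = Phi.T @ X                                # modal coordinates for the reduced-order model
        print("modes retained:", k)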

  11. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set--interval random uncertainty set for robust optimization. The form of interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  12. Improving ensemble decision tree performance using Adaboost and Bagging

    NASA Astrophysics Data System (ADS)

    Hasan, Md. Rajib; Siraj, Fadzilah; Sainin, Mohd Shamrie

    2015-12-01

    Ensemble classifier systems are considered among the most promising approaches for medical data classification, and the performance of a decision tree classifier can be increased by ensemble methods, which have been shown to outperform single classifiers. However, in an ensemble setting the performance depends on the selection of a suitable base classifier. This research employed two prominent ensemble methods, namely Adaboost and Bagging, with base classifiers such as Random Forest, Random Tree, J48, J48graft and Logistic Model Trees (LMT), each selected independently. The empirical study shows that performance varies when different base classifiers are selected, and overfitting was also noted in some cases. The evidence shows that ensemble decision tree classifiers using Adaboost and Bagging improve the performance on the selected medical data sets.
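
    A brief scikit-learn sketch of the kind of comparison described, using sklearn stand-ins (plain and random-split decision trees) for the Weka base learners named above; the dataset is synthetic. Note the base-learner keyword is `estimator` in scikit-learn 1.2+ and `base_estimator` in older releases.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)

        bases = {
            "decision tree": DecisionTreeClassifier(max_depth=3, random_state=0),
            "random tree":   DecisionTreeClassifier(splitter="random", max_depth=3, random_state=0),
        }

        for name, base in bases.items():
            for Ensemble in (AdaBoostClassifier, BaggingClassifier):
                # keyword is `estimator` in scikit-learn >= 1.2 (older releases: `base_estimator`)
                model = Ensemble(estimator=base, n_estimators=50, random_state=0)
                score = cross_val_score(model, X, y, cv=5).mean()
                print(f"{Ensemble.__name__:18s} + {name:13s}: {score:.3f}")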

  13. US EPA Base Study Standard Operating Procedure for Building Recruiting

    EPA Pesticide Factsheets

    Building recruiting for the BASE study is defined by a random selection of buildings within cities of population exceeding 100,000 inhabitants and located in selected climatic regions of the United States.

  14. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    PubMed

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

    Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p≫n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy preserving classification that also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of Evaporative Cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code available at http://insilico.utulsa.edu/software/privateEC. Supplementary data are available at Bioinformatics online.

  15. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  16. A web-based appointment system to reduce waiting for outpatients: a retrospective study.

    PubMed

    Cao, Wenjun; Wan, Yi; Tu, Haibo; Shang, Fujun; Liu, Danhong; Tan, Zhijun; Sun, Caihong; Ye, Qing; Xu, Yongyong

    2011-11-22

    Long waiting times for registration to see a doctor are problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients. Data from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site. A total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Unawareness of online registration, distrust of the internet, and inability to use a computer were the three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among hospital departments, days of the week, and times of the day (P<0.001). Compared to the usual queuing method, the web-based appointment system could significantly increase patients' satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.

  17. Day-roost tree selection by northern long-eared bats—What do non-roost tree comparisons and one year of data really tell us?

    USGS Publications Warehouse

    Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.

    2015-01-01

    Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.
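
    The species-selection test reported above is a standard goodness-of-fit comparison of roost counts against availability; a small sketch with invented counts is shown below.

        import numpy as np
        from scipy.stats import chisquare

        species = ["sassafras", "oak", "hickory", "maple"]
        roost_counts = np.array([18, 9, 5, 3])          # hypothetical day-roosts per species
        available = np.array([0.20, 0.45, 0.20, 0.15])  # availability from random non-roost trees

        expected = available / available.sum() * roost_counts.sum()
        stat, p = chisquare(f_obs=roost_counts, f_exp=expected)
        print(f"chi-square = {stat:.2f}, p = {p:.4f}")  # small p -> non-random species selection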

  18. Does time-lapse imaging have favorable results for embryo incubation and selection compared with conventional methods in clinical in vitro fertilization? A meta-analysis and systematic review of randomized controlled trials

    PubMed Central

    Yuan, Jing; Liu, Fenghua

    2017-01-01

    Objective The present study aimed to undertake a review of available evidence assessing whether time-lapse imaging (TLI) has favorable outcomes for embryo incubation and selection compared with conventional methods in clinical in vitro fertilization (IVF). Methods PubMed, EMBASE, the Cochrane Library and ClinicalTrials.gov were searched up to February 2017 for randomized controlled trials (RCTs) comparing TLI with conventional methods. Studies that randomized either women or oocytes were included. For studies that randomized women, the primary outcomes were live birth and ongoing pregnancy and the secondary outcomes were clinical pregnancy and miscarriage; for studies that randomized oocytes, the primary outcome was blastocyst rate and the secondary outcome was good-quality embryo on Day 2/3. Subgroup analysis was conducted based on differences in incubation and embryo selection between groups. Results Ten RCTs were included, four randomizing oocytes and six randomizing women. For the oocyte-based review, the pooled analysis observed no significant difference between the TLI group and the control group for blastocyst rate [relative risk (RR) 1.08, 95% CI 0.94–1.25, I2 = 0%, two studies, including 1154 embryos]. The quality of evidence was moderate for all outcomes in the oocyte-based review. For the woman-based review, only one study provided live birth rate (RR 1.23, 95% CI 1.06–1.44, I2 N/A, one study, including 842 women), and the pooled result showed no significant difference in ongoing pregnancy rate (RR 1.04, 95% CI 0.80–1.36, I2 = 59%, four studies, including 1403 women) between the two groups. The quality of the evidence was low or very low for all outcomes in the woman-based review. Conclusions Currently there is insufficient evidence that TLI is superior to conventional methods for human embryo incubation and selection. In consideration of the limitations and flaws of the included studies, more well-designed RCTs are still needed to comprehensively evaluate the effectiveness of clinical TLI use. PMID:28570713

  19. Adaptive consensus of scale-free multi-agent system by randomly selecting links

    NASA Astrophysics Data System (ADS)

    Mou, Jinping; Ge, Huafeng

    2016-06-01

    This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides, with a certain probability, which links among its neighbours to use according to the received data. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using an iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and several criteria for consensus are obtained. One numerical example shows the reliability of the proposed methods.
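
    A small numerical sketch in the spirit of the protocol described (not the paper's exact update law): at every step each agent averages its state with a randomly selected subset of its neighbours, and the disagreement shrinks toward zero. The graph, selection probability and weights are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 50

        # Illustrative undirected graph: a ring with a few random shortcut links.
        neighbours = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
        for _ in range(40):
            a, b = rng.integers(n, size=2)
            if a != b:
                neighbours[a].add(b)
                neighbours[b].add(a)

        x = rng.normal(size=n)                          # initial agent states
        for _ in range(200):
            x_new = x.copy()
            for i in range(n):
                # each agent randomly selects which neighbour links to use this round
                picked = [j for j in neighbours[i] if rng.random() < 0.5]
                if picked:
                    x_new[i] = 0.5 * x[i] + 0.5 * x[picked].mean()
            x = x_new

        print("max disagreement after 200 steps:", np.ptp(x))   # close to zero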

  1. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    PubMed

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
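
    The sampling logic can be sketched schematically: clusters are drawn with probability proportional to population, and a drawn individual is accepted only if his or her age/sex stratum still needs controls to frequency-match the case series. All names, counts and the `draw_resident` helper are hypothetical.

        import random

        random.seed(0)

        # Hypothetical clusters with population sizes (sampling proportional to population density).
        clusters = {"village_A": 1200, "village_B": 450, "village_C": 3000, "village_D": 800}

        # Target numbers of controls per age/sex stratum, frequency-matched to the cases.
        needed = {("M", "15-34"): 10, ("F", "15-34"): 12, ("M", "35+"): 8, ("F", "35+"): 9}

        def draw_resident(cluster):
            """Stand-in for a field procedure that picks a random resident of the cluster."""
            return {"cluster": cluster,
                    "sex": random.choice(["M", "F"]),
                    "age_group": random.choice(["15-34", "35+"])}

        controls = []
        names, weights = list(clusters), list(clusters.values())
        while any(v > 0 for v in needed.values()):
            cluster = random.choices(names, weights=weights)[0]   # proportional to population
            person = draw_resident(cluster)
            stratum = (person["sex"], person["age_group"])
            if needed.get(stratum, 0) > 0:                        # age/sex frequency matching
                controls.append(person)
                needed[stratum] -= 1

        print(len(controls), "controls selected")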

  2. Ensemble Feature Learning of Genomic Data Using Support Vector Machine

    PubMed Central

    Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.

    2016-01-01

    The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, and mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale behind this is that building ensemble SVM models using randomly drawn bootstrap samples from the training set will produce different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the ranking of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared with state-of-the-art methods. Experiments on the childhood leukaemia dataset show that an average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over a random forest-based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters in the selected data. PMID:27304923
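
    A condensed scikit-learn sketch of the ensemble idea, assuming a linear SVM and simple averaging of RFE rankings over bootstrap samples; the balanced-bootstrap refinement is omitted and the data are synthetic.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import RFE
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=120, n_features=200, n_informative=10, random_state=0)

        n_models = 10
        rank_sum = np.zeros(X.shape[1])
        for _ in range(n_models):
            idx = rng.integers(0, X.shape[0], size=X.shape[0])    # bootstrap sample (with replacement)
            rfe = RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=1, step=0.1)
            rfe.fit(X[idx], y[idx])
            rank_sum += rfe.ranking_          # rank 1 = eliminated last (most important)

        avg_rank = rank_sum / n_models        # aggregated feature ranking across SVM models
        top_genes = np.argsort(avg_rank)[:20]
        print("top 20 feature indices:", top_genes)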

  3. A management-oriented classification of pinyon-juniper woodlands of the Great Basin

    Treesearch

    Neil E. West; Robin J. Tausch; Paul T. Tueller

    1998-01-01

    A hierarchical framework for the classification of Great Basin pinyon-juniper woodlands was based on a systematic sample of 426 stands from a random selection of 66 of the 110 mountain ranges in the region. That is, mountain ranges were randomly selected, but stands were systematically located on mountain ranges. The National Hierarchical Framework of Ecological Units...

  4. Random Time Identity Based Firewall In Mobile Ad hoc Networks

    NASA Astrophysics Data System (ADS)

    Suman, Patel, R. B.; Singh, Parvinder

    2010-11-01

    A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component in MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as the firewall after a fixed time, based on critical values of certain parameters such as power backup. It effectively balances power and resource utilization across the entire MANET because the responsibility of implementing the firewall is shared equally among all the nodes. At the same time it ensures improved security for MANETs against outside attacks, as an intruder will not be able to find the entry point into the MANET due to the random selection of nodes for firewall implementation.

  5. Using environmental heterogeneity to plan for sea-level rise.

    PubMed

    Hunter, Elizabeth A; Nibbelink, Nathan P

    2017-12-01

    Environmental heterogeneity is increasingly being used to select conservation areas that will provide for future biodiversity under a variety of climate scenarios. This approach, termed conserving nature's stage (CNS), assumes environmental features respond to climate change more slowly than biological communities, but will CNS be effective if the stage were to change as rapidly as the climate? We tested the effectiveness of using CNS to select sites in salt marshes for conservation in coastal Georgia (U.S.A.), where environmental features will change rapidly as sea level rises. We calculated species diversity based on distributions of 7 bird species with a variety of niches in Georgia salt marshes. Environmental heterogeneity was assessed across six landscape gradients (e.g., elevation, salinity, and patch area). We used 2 approaches to select sites with high environmental heterogeneity: site complementarity (environmental diversity [ED]) and local environmental heterogeneity (environmental richness [ER]). Sites selected based on ER predicted present-day species diversity better than randomly selected sites (up to an 8.1% improvement), were resilient to areal loss from SLR (1.0% average areal loss by 2050 compared with 0.9% loss of randomly selected sites), and provided habitat to a threatened species (0.63 average occupancy compared with 0.6 average occupancy of randomly selected sites). Sites selected based on ED predicted species diversity no better or worse than random and were not resilient to SLR (2.9% average areal loss by 2050). Despite the discrepancy between the 2 approaches, CNS is a viable strategy for conservation site selection in salt marshes because the ER approach was successful. It has potential for application in other coastal areas where SLR will affect environmental features, but its performance may depend on the magnitude of geological changes caused by SLR. Our results indicate that conservation planners that had heretofore excluded low-lying coasts from CNS planning could include coastal ecosystems in regional conservation strategies. © 2017 Society for Conservation Biology.

  6. The Effect of Basis Selection on Static and Random Acoustic Response Prediction Using a Nonlinear Modal Simulation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2005-01-01

    An investigation of the effect of basis selection on geometric nonlinear response prediction using a reduced-order nonlinear modal simulation is presented. The accuracy is dictated by the selection of the basis used to determine the nonlinear modal stiffness. This study considers a suite of available bases including bending modes only, bending and membrane modes, coupled bending and companion modes, and uncoupled bending and companion modes. The nonlinear modal simulation presented is broadly applicable and is demonstrated for nonlinear quasi-static and random acoustic response of flat beam and plate structures with isotropic material properties. Reduced-order analysis predictions are compared with those made using a numerical simulation in physical degrees-of-freedom to quantify the error associated with the selected modal bases. Bending and membrane responses are separately presented to help differentiate the bases.

  7. Limited Effects of a 2-Year School-Based Physical Activity Intervention on Body Composition and Cardiorespiratory Fitness in 7-Year-Old Children

    ERIC Educational Resources Information Center

    Magnusson, Kristjan Thor; Hrafnkelsson, Hannes; Sigurgeirsson, Ingvar; Johannsson, Erlingur; Sveinsson, Thorarinn

    2012-01-01

    The aim of this study was to assess the effects of a 2-year cluster-randomized physical activity and dietary intervention program among 7-year-old (at baseline) elementary school participants on body composition and objectively measured cardiorespiratory fitness. Three pairs of schools were selected and matched, then randomly selected as either an…

  8. Identification of Genes Involved in Breast Cancer Metastasis by Integrating Protein-Protein Interaction Information with Expression Data.

    PubMed

    Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran

    2017-02-01

    The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures using different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has so far been limited. This article proposes a new Random Forest (RF)-based algorithm, called PPIRF, to identify important variables highly related to breast cancer metastasis. It is based on the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The improved prediction accuracy fully illustrates the reliability and high interpretability of the gene list selected by the PPIRF approach.

  9. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation.

    PubMed

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-12-23

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases were taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, average similarity indices of the morphometrically selected atlas group were significantly higher than the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy. In this study, the optimal number of selected atlases used was six, but for definitive conclusions about the optimal number of atlases and to improve the autosegmentation accuracy for clinical use, more atlases need to be included.
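
    For reference, the three overlap measures used above have simple set-based definitions; a short numpy sketch on two binary masks follows. The inclusion index is taken here as the fraction of the gold standard covered by the automatic segmentation, which is an assumption about how INI is defined.

        import numpy as np

        def overlap_indices(auto_seg, gold_seg):
            """auto_seg, gold_seg: boolean voxel masks of identical shape."""
            a, g = auto_seg.astype(bool), gold_seg.astype(bool)
            inter = np.logical_and(a, g).sum()
            union = np.logical_or(a, g).sum()
            dice = 2.0 * inter / (a.sum() + g.sum())
            jaccard = inter / union
            inclusion = inter / g.sum()          # share of the gold standard that is covered
            return dice, jaccard, inclusion

        # Toy example on a 3D volume
        rng = np.random.default_rng(0)
        gold = rng.random((40, 40, 40)) < 0.05
        auto = gold.copy()
        auto[rng.random(auto.shape) < 0.02] ^= True   # perturb the automatic segmentation

        print("DSC=%.3f  JI=%.3f  INI=%.3f" % overlap_indices(auto, gold))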

  10. Factors Associated with High Use of a Workplace Web-Based Stress Management Program in a Randomized Controlled Intervention Study

    ERIC Educational Resources Information Center

    Hasson, H.; Brown, C.; Hasson, D.

    2010-01-01

    In web-based health promotion programs, large variations in participant engagement are common. The aim was to investigate determinants of high use of a worksite self-help web-based program for stress management. Two versions of the program were offered to randomly selected departments in IT and media companies. A static version of the program…

  11. Roosting habitat use and selection by northern spotted owls during natal dispersal

    USGS Publications Warehouse

    Sovern, Stan G.; Forsman, Eric D.; Dugger, Catherine M.; Taylor, Margaret

    2015-01-01

    We studied habitat selection by northern spotted owls (Strix occidentalis caurina) during natal dispersal in Washington State, USA, at both the roost site and landscape scales. We used logistic regression to obtain parameters for an exponential resource selection function based on vegetation attributes in roost and random plots in 76 forest stands that were used for roosting. We used a similar analysis to evaluate selection of landscape habitat attributes based on 301 radio-telemetry relocations and random points within our study area. We found no evidence of within-stand selection for any of the variables examined, but 78% of roosts were in stands with at least some large (>50 cm dbh) trees. At the landscape scale, owls selected for stands with high canopy cover (>70%). Dispersing owls selected vegetation types that were more similar to habitat selected by adult owls than habitat that would result from following guidelines previously proposed to maintain dispersal habitat. Our analysis indicates that juvenile owls select stands for roosting that have greater canopy cover than is recommended in current agency guidelines.

  12. Potential of Using Mobile Phone Data to Assist in Mission Analysis and Area of Operations Planning

    DTIC Science & Technology

    2015-08-01

    …tremendously beneficial, especially since a sizeable portion of the population are nomads, changing location based on season. A proper AO [area of operations] … The selected data provided: (a) User_id: the selected user's random ID; (b) Timestamp: 24-hour format YYYY-MM-DD-HH:M0:00 (the second digit of the minutes and all of the seconds are zeroed).

  13. Reform-Based-Instructional Method and Learning Styles on Students' Achievement and Retention in Mathematics: Administrative Implications

    ERIC Educational Resources Information Center

    Modebelu, M. N.; Ogbonna, C. C.

    2014-01-01

    This study aimed at determining the effect of reform-based instructional method and learning styles on students' achievement and retention in mathematics. A sample of 119 students was randomly selected. A quasi-experimental design comprising pre-test, post-test, and a randomized control group was employed. The Collin Rose learning styles…

  14. Two Student Self-Management Techniques Applied to Data-Based Program Modification.

    ERIC Educational Resources Information Center

    Wesson, Caren

    Two student self-management techniques, student charting and student selection of instructional activities, were applied to ongoing data-based program modification. Forty-two elementary school resource room students were assigned randomly (within teacher) to one of three treatment conditions: Teacher Chart-Teacher Select Instructional Activities…

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peressutti, D; Schipaanboord, B; Kadir, T

    Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 H&N clinically delineated cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada-Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 "good" atlases, and optimal selection by an "oracle" with knowledge of the ground truth. The Dice score and the average ranking with respect to the "oracle" were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of "good" atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 cf. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) compared to the use of either a template image (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). Dice scores given relate to the left parotid, but similar result patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance. However, it was found that published approaches performed only marginally better than random selection, and use of a fixed set of representative atlases showed favourable performance. This research was funded via InnovateUK Grant 600277 as part of Eurostars Grant E!9297. DP, BS, MG, TK are employees of Mirada Medical Ltd.

  16. DNA-based random number generation in security circuitry.

    PubMed

    Gearheart, Christy M; Arazi, Benjamin; Rouchka, Eric C

    2010-06-01

    DNA-based circuit design is an area of research in which traditional silicon-based technologies are replaced by naturally occurring phenomena taken from biochemistry and molecular biology. This research focuses on further developing DNA-based methodologies to mimic digital data manipulation. While exhibiting fundamental principles, this work was done in conjunction with the vision that DNA-based circuitry, when the technology matures, will form the basis for a tamper-proof security module, revolutionizing the meaning and concept of tamper-proofing and possibly preventing it altogether based on accurate scientific observations. A paramount part of such a solution would be self-generation of random numbers. A novel prototype schema employs solid phase synthesis of oligonucleotides for random construction of DNA sequences; temporary storage and retrieval is achieved through plasmid vectors. A discussion of how to evaluate sequence randomness is included, as well as how these techniques are applied to a simulation of the random number generation circuitry. Simulation results show generated sequences successfully pass three selected NIST random number generation tests specified for security applications.
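
    One of the simplest NIST SP 800-22 checks alluded to, the frequency (monobit) test, is easy to state; the sketch below applies it to a bit stream that merely stands in for a digitized DNA-derived sequence.

        import math
        import random

        def monobit_frequency_test(bits):
            """NIST SP 800-22 frequency (monobit) test; returns the p-value."""
            n = len(bits)
            s = sum(1 if b else -1 for b in bits)       # +1 for each 1-bit, -1 for each 0-bit
            s_obs = abs(s) / math.sqrt(n)
            return math.erfc(s_obs / math.sqrt(2))

        random.seed(0)
        bits = [random.getrandbits(1) for _ in range(10_000)]   # stand-in for a DNA-derived stream
        p_value = monobit_frequency_test(bits)
        print("p =", round(p_value, 4), "-> pass" if p_value >= 0.01 else "-> fail")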

  17. Selection of examples in case-based computer-aided decision systems

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2013-01-01

    Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system’s diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system’s performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2–4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system. PMID:18854606
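
    A bare-bones sketch of random mutation hill climbing for case selection: a binary inclusion mask is mutated one bit at a time and a mutation is kept only when a performance score does not decrease. The additive `usefulness` score is a deliberately simplified stand-in for evaluating the CAD system's diagnostic performance on the candidate case base.

        import numpy as np

        rng = np.random.default_rng(0)
        n_cases, budget = 500, 150                  # database size and storage budget

        # Hypothetical per-case "usefulness" scores; a real CB-CAD system would instead
        # evaluate diagnostic performance (e.g. ROC area) of the system using the subset.
        usefulness = rng.normal(size=n_cases)

        def score(mask):
            if mask.sum() > budget:                 # enforce the storage requirement
                return -np.inf
            return usefulness[mask].sum()

        mask = np.zeros(n_cases, dtype=bool)
        mask[rng.choice(n_cases, budget, replace=False)] = True
        best = score(mask)

        for _ in range(20_000):                     # random mutation hill climbing
            flip = rng.integers(n_cases)
            mask[flip] ^= True                      # mutate one randomly chosen bit
            new = score(mask)
            if new >= best:
                best = new
            else:
                mask[flip] ^= True                  # revert the mutation

        print("cases kept:", int(mask.sum()), "score:", round(float(best), 2))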

  18. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.

  19. Applications of random forest feature selection for fine-scale genetic population assignment.

    PubMed

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.

  20. Midwives Performance in Early Detection of Growth and Development Irregularities of Children Based on Task Commitment

    ERIC Educational Resources Information Center

    Utami, Sri; Nursalam; Hargono, Rachmat; Susilaningrum, Rekawati

    2016-01-01

    The purpose of this study was to analyze the performance of midwives based on the task commitment. This was an observational analytic with cross sectional approach. Multistage random sampling was used to determine the public health center, proportional random sampling to selected participants. The samples were 222 midwives in the public health…

  1. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    NASA Astrophysics Data System (ADS)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
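
    A compact scikit-learn analogue of the modelling pipeline described: random-forest importances rank the candidate predictors, the top-ranked subset feeds a multilayer perceptron and a multiple linear regression, and test-set R2 is compared. The data are synthetic, not the Tehran measurements, and all parameters are illustrative.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        X, y = make_regression(n_samples=1500, n_features=30, n_informative=8, noise=10, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Random Forest feature selection: keep the most important predictors.
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        top = np.argsort(rf.feature_importances_)[::-1][:8]

        models = [("multiple linear regression", LinearRegression()),
                  ("MLP neural network", MLPRegressor(hidden_layer_sizes=(32, 16),
                                                      max_iter=2000, random_state=0))]
        for name, model in models:
            model.fit(X_tr[:, top], y_tr)
            print(f"{name:28s} R2 = {r2_score(y_te, model.predict(X_te[:, top])):.3f}")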

  2. Utility and Cost-Effectiveness of Motivational Messaging to Increase Survey Response in Physicians: A Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Chan, Randolph C. H.; Mak, Winnie W. S.; Pang, Ingrid H. Y.; Wong, Samuel Y. S.; Tang, Wai Kwong; Lau, Joseph T. F.; Woo, Jean; Lee, Diana T. F.; Cheung, Fanny M.

    2018-01-01

    The present study examined whether, when, and how motivational messaging can boost the response rate of postal surveys for physicians based on Higgins's regulatory focus theory, accounting for its cost-effectiveness. A three-arm, blinded, randomized controlled design was used. A total of 3,270 doctors were randomly selected from the registration…

  3. The Ecological Effects of Universal and Selective Violence Prevention Programs for Middle School Students: A Randomized Trial

    ERIC Educational Resources Information Center

    Simon, Thomas R.; Ikeda, Robin M.; Smith, Emilie Phillips; Reese, Le'Roy E.; Rabiner, David L.; Miller, Shari; Winn, Donna-Marie; Dodge, Kenneth A.; Asher, Steven R.; Horne, Arthur M.; Orpinas, Pamela; Martin, Roy; Quinn, William H.; Tolan, Patrick H.; Gorman-Smith, Deborah; Henry, David B.; Gay, Franklin N.; Schoeny, Michael; Farrell, Albert D.; Meyer, Aleta L.; Sullivan, Terri N.; Allison, Kevin W.

    2009-01-01

    This study reports the findings of a multisite randomized trial evaluating the separate and combined effects of 2 school-based approaches to reduce violence among early adolescents. A total of 37 schools at 4 sites were randomized to 4 conditions: (1) a universal intervention that involved implementing a student curriculum and teacher training…

  4. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness for avoiding overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that the combination of Clustering Using Representatives (CURE) enhances the original synthetic minority oversampling technique (SMOTE) algorithms effectively compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, the hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out of bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithm's F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
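
    The "minimum out-of-bag error as objective" idea can be sketched with scikit-learn's built-in OOB estimate; plain random search stands in here for the genetic, particle-swarm and fish-swarm optimizers proposed in the paper, and the data are synthetic.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=800, n_features=40, n_informative=10,
                                   weights=[0.8, 0.2], random_state=0)   # mildly imbalanced

        best = None
        for _ in range(20):                                   # random search over RF parameters
            params = {"n_estimators": int(rng.integers(100, 400)),
                      "max_features": int(rng.integers(2, 20)),
                      "min_samples_leaf": int(rng.integers(1, 10))}
            rf = RandomForestClassifier(oob_score=True, random_state=0, **params).fit(X, y)
            oob_error = 1.0 - rf.oob_score_                   # the objective to minimize
            if best is None or oob_error < best[0]:
                best = (oob_error, params)

        print("lowest OOB error %.3f with %s" % best)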

  5. Provider-related barriers to rapid HIV testing in U.S. urban non-profit community clinics, community-based organizations (CBOs) and hospitals.

    PubMed

    Bogart, Laura M; Howerton, Devery; Lange, James; Setodji, Claude Messan; Becker, Kirsten; Klein, David J; Asch, Steven M

    2010-06-01

    We examined provider-reported barriers to rapid HIV testing in U.S. urban non-profit community clinics, community-based organizations (CBOs), and hospitals. 12 primary metropolitan statistical areas (PMSAs; three per region) were sampled randomly, with sampling weights proportional to AIDS case reports. Across PMSAs, all 671 hospitals and a random sample of 738 clinics/CBOs were telephoned for a survey on rapid HIV test availability. Of the 671 hospitals, 172 hospitals were randomly selected for barriers questions, for which 158 laboratory and 136 department staff were eligible and interviewed in 2005. Of the 738 clinics/CBOs, 276 were randomly selected for barriers questions, 206 were reached, and 118 were eligible and interviewed in 2005-2006. In multivariate models, barriers regarding translation of administrative/quality assurance policies into practice were significantly associated with rapid HIV testing availability. For greater rapid testing diffusion, policies are needed to reduce administrative barriers and provide quality assurance training to non-laboratory staff.

  6. Selective intra-dinucleotide interactions and periodicities of bases separated by K sites: a new vision and tool for phylogeny analyses.

    PubMed

    Valenzuela, Carlos Y

    2017-02-13

    Direct tests of the random or non-random distribution of nucleotides on genomes have been devised to test the hypothesis of neutral, nearly-neutral or selective evolution. These tests are based on the direct base distribution and are independent of the functional (coding or non-coding) or structural (repeated or unique sequences) properties of the DNA. The first approach described the longitudinal distribution of bases in tandem repeats under the Bose-Einstein statistics. A huge deviation from randomness was found. A second approach was the study of the base distribution within dinucleotides whose bases were separated by 0, 1, 2… K nucleotides. Again an enormous difference from the random distribution was found, with significance levels beyond the range of standard tables and programs. These test values were periodical and included the 16 dinucleotides. For example, a high "positive" (more observed than expected dinucleotides) value, found in dinucleotides whose bases were separated by (3K + 2) sites, was preceded by two smaller "negative" (fewer observed than expected dinucleotides) values, whose bases were separated by (3K) or (3K + 1) sites. We examined mtDNAs, prokaryote genomes and some eukaryote chromosomes and found that the significant non-random interactions and periodicities were present up to 1,000 or more sites of base separation and, in human chromosome 21, up to separations of more than 10 million sites. Each nucleotide has its own significant value of its distance to neutrality; this yields 16 hierarchical significances. A three-dimensional table with the number of sites of separation between the bases and the 16 significances (the third dimension is the dinucleotide, individual or taxon involved) directly gives an evolutionary state of the analyzed genome that can be used to obtain phylogenies. An example is provided.
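
    The sketch below illustrates, on a toy random sequence and assuming SciPy, one simple way to quantify intra-dinucleotide interactions between bases separated by K sites: count the 16 dinucleotides at each separation and compare them with the expectation from single-base frequencies via a chi-square statistic. It is an illustration of the general idea only, not the authors' statistics or data.

      # Sketch: chi-square test of non-random association between bases
      # separated by k sites, against the expectation from single-base
      # frequencies. Illustrative only; the original analyses use their
      # own statistics and real genomes.
      from collections import Counter
      from itertools import product
      from scipy.stats import chi2
      import random

      random.seed(1)
      seq = "".join(random.choice("ACGT") for _ in range(100_000))  # toy sequence

      def separation_chi2(seq, k):
          """Chi-square over the 16 dinucleotides whose bases are k sites apart."""
          pairs = Counter(zip(seq, seq[k + 1:]))     # bases with k intervening sites
          n = sum(pairs.values())
          base_freq = Counter(seq)
          stat = 0.0
          for b1, b2 in product("ACGT", repeat=2):
              expected = n * (base_freq[b1] / len(seq)) * (base_freq[b2] / len(seq))
              stat += (pairs[(b1, b2)] - expected) ** 2 / expected
          return stat, chi2.sf(stat, df=9)           # approx. 9 df for a 4x4 table

      for k in range(5):
          stat, p = separation_chi2(seq, k)
          print(f"k = {k}: chi2 = {stat:.1f}, p = {p:.3f}")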

  7. Optimizing classification performance in an object-based very-high-resolution land use-land cover urban application

    NASA Astrophysics Data System (ADS)

    Georganos, Stefanos; Grippa, Tais; Vanhuysse, Sabine; Lennert, Moritz; Shimoni, Michal; Wolff, Eléonore

    2017-10-01

    This study evaluates the impact of three Feature Selection (FS) algorithms in an Object Based Image Analysis (OBIA) framework for Very-High-Resolution (VHR) Land Use-Land Cover (LULC) classification. The three selected FS algorithms, Correlation Based Selection (CFS), Mean Decrease in Accuracy (MDA) and Random Forest (RF) based Recursive Feature Elimination (RFE), were tested on Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The results demonstrate that the accuracy of the SVM and KNN classifiers is the most sensitive to FS. The RF appeared to be more robust to high dimensionality, although a significant increase in accuracy was found by using the RFE method. In terms of classification accuracy, SVM performed the best using FS, followed by RF and KNN. Finally, only a small number of features is needed to achieve the highest performance with each classifier. This study emphasizes the benefits of rigorous FS for maximizing performance, as well as for minimizing model complexity and aiding interpretation.
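
    As an illustration of one of the FS algorithms named above, the sketch below runs Random Forest based Recursive Feature Elimination (via scikit-learn's RFECV) on synthetic data standing in for the VHR object features, then scores an SVM on the retained features. The data and settings are assumptions, not those of the study.

      # Sketch: RF-based Recursive Feature Elimination followed by an SVM
      # trained on the retained features.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import RFECV
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=500, n_features=60, n_informative=10,
                                 random_state=0)

      rfe = RFECV(RandomForestClassifier(n_estimators=200, random_state=0),
                  step=5, cv=StratifiedKFold(5), scoring="accuracy")
      rfe.fit(X, y)
      print("features kept:", rfe.n_features_)

      # The reduced feature set is then fed to the classifier of interest,
      # e.g. an SVM, which the study found most sensitive to FS.
      svm_acc = cross_val_score(SVC(), rfe.transform(X), y, cv=5).mean()
      print("SVM accuracy on selected features: %.3f" % svm_acc)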

  8. The role of color and attention-to-color in mirror-symmetry perception.

    PubMed

    Gheorghiu, Elena; Kingdom, Frederick A A; Remkes, Aaron; Li, Hyung-Chul O; Rainville, Stéphane

    2016-07-11

    The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) 'segregated' - symmetric blobs were of one color, random blobs of the other color(s); (2) 'random-segregated' - as above but with the symmetric color randomly selected on each trial; (3) 'non-segregated' - symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) 'anti-symmetric' - symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.

  9. The role of color and attention-to-color in mirror-symmetry perception

    PubMed Central

    Gheorghiu, Elena; Kingdom, Frederick A. A.; Remkes, Aaron; Li, Hyung-Chul O.; Rainville, Stéphane

    2016-01-01

    The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) ‘segregated’ – symmetric blobs were of one color, random blobs of the other color(s); (2) ‘random-segregated’ – as above but with the symmetric color randomly selected on each trial; (3) ‘non-segregated’ – symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) ‘anti-symmetric’ – symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective. PMID:27404804

  10. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

    QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
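
    A minimal Monte Carlo sketch of the underlying question: how often a simple 1ks control rule (reject the run if any of N controls falls outside ±k SD) detects an increase in analytical random error. The rule parameters and simulation sizes are illustrative; the published charts additionally relate detection to measurement-procedure capability and patient risk, which are not modeled here.

      # Sketch: Monte Carlo estimate of the probability that a 1ks QC rule
      # rejects a run when the within-run SD is inflated. Parameters are
      # illustrative, not taken from the paper.
      import numpy as np

      rng = np.random.default_rng(0)

      def p_detect(sd_inflation, k=3.0, n_controls=2, n_runs=100_000):
          """Fraction of simulated runs rejected by the 1ks rule."""
          controls = rng.normal(0.0, sd_inflation, size=(n_runs, n_controls))
          return np.mean(np.any(np.abs(controls) > k, axis=1))

      for inflation in (1.0, 1.5, 2.0, 3.0):
          print(f"SD x{inflation}: P(reject) = {p_detect(inflation):.3f}")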

  11. Why the null matters: statistical tests, random walks and evolution.

    PubMed

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
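
    The excursion-based tests share one principle: compare the maximum change in the observed series with the maxima produced by random walks of the same length and step variance. The sketch below simulates that comparison on a toy series with a weak linear trend; it is a schematic of the principle under stated assumptions, not a reimplementation of the scaled-maximum, LRI or Hurst-exponent tests.

      # Sketch: compare the maximum excursion of an observed series with the
      # null distribution of maxima from unbiased random walks of the same
      # length and step variance.
      import numpy as np

      rng = np.random.default_rng(42)

      def max_excursion(series):
          return np.max(np.abs(series - series[0]))

      # Toy "observed" series: a weak linear trend plus noise (directional selection).
      steps = 50
      observed = 0.05 * np.arange(steps) + rng.normal(0, 0.2, steps)

      step_sd = np.std(np.diff(observed))
      null = [max_excursion(np.concatenate(([0.0],
                                            np.cumsum(rng.normal(0, step_sd, steps - 1)))))
              for _ in range(10_000)]

      p_greater = np.mean(np.array(null) >= max_excursion(observed))
      print(f"observed max excursion: {max_excursion(observed):.2f}")
      print(f"P(random walk >= observed): {p_greater:.3f}  (small -> directional; large -> stasis)")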

  12. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, so a procedure is developed to carry out the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
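
    A generic sketch of cross-validation-based input variable selection, the first of the two selection tasks: forward selection that adds the candidate variable giving the best cross-validated error and stops when no candidate improves it. The linear model and synthetic data are stand-ins, assuming scikit-learn, not the storm water quality models of the study.

      # Sketch: forward selection of regression inputs using cross-validated error.
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_regression(n_samples=200, n_features=12, n_informative=4,
                             noise=10.0, random_state=0)

      selected, remaining = [], list(range(X.shape[1]))
      best_score = -np.inf
      while remaining:
          scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                       cv=5, scoring="neg_root_mean_squared_error").mean()
                    for j in remaining}
          j_best, s_best = max(scores.items(), key=lambda kv: kv[1])
          if s_best <= best_score:          # stop when no variable improves CV error
              break
          selected.append(j_best)
          remaining.remove(j_best)
          best_score = s_best

      print("selected inputs:", selected, "CV RMSE:", -best_score)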

  13. Ray tracing method for simulation of laser beam interaction with random packings of powders

    NASA Astrophysics Data System (ADS)

    Kovalev, O. B.; Kovaleva, I. O.; Belyaev, V. V.

    2018-03-01

    Selective laser sintering is a rapid manufacturing technology in which a free-form solid object is created by selectively fusing successive layers of powder using a laser. This study is motivated by the currently insufficient understanding of the processes and phenomena of selective laser melting of powders, whose time scales differ by orders of magnitude. Random packings of mono- and polydisperse solid spheres are constructed with a generation algorithm based on the discrete element method. A numerical ray tracing method is proposed to simulate the interaction of laser radiation with a random bulk packing of spherical particles and to predict the optical properties of the granular layer, the extinction and absorption coefficients, depending on the optical properties of the powder material.

  14. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Selective randomized load balancing and mesh networks with changing demands

    NASA Astrophysics Data System (ADS)

    Shepherd, F. B.; Winzer, P. J.

    2006-05-01

    We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.

  16. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast a reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were concatenated by date ordering into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, five imputation methods are used to handle the missing values. Second, key variables are identified via factor analysis and unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, applied with variable selection as well as with the full set of variables, has better forecasting performance than the listed benchmark models. In addition, the experiments show that the proposed variable selection can help improve the forecasting capability of the five forecast methods used here.
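
    The three steps of the proposed model can be sketched as a small pipeline, assuming scikit-learn and synthetic data in place of the reservoir and atmospheric series: impute missing values, select variables, then fit a Random Forest forecaster.

      # Sketch of the three steps: (1) impute missing values, (2) select
      # variables, (3) fit a Random Forest forecaster. Data are synthetic.
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.feature_selection import SelectKBest, f_regression
      from sklearn.impute import SimpleImputer
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      df = pd.DataFrame(rng.normal(size=(1000, 8)), columns=[f"x{i}" for i in range(8)])
      df["water_level"] = 2 * df["x0"] - df["x3"] + rng.normal(0, 0.5, len(df))
      mask = rng.random(df.shape) < 0.05
      mask[:, -1] = False                                 # keep the target complete
      df = df.mask(mask)                                  # introduce missing predictors

      X, y = df.drop(columns="water_level"), df["water_level"]
      X_imp = SimpleImputer(strategy="median").fit_transform(X)        # step 1
      X_sel = SelectKBest(f_regression, k=4).fit_transform(X_imp, y)   # step 2
      X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)

      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)  # step 3
      print("MAE:", mean_absolute_error(y_te, rf.predict(X_te)))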

  17. Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering

    NASA Astrophysics Data System (ADS)

    Bruno, Marcelo G. S.; Dias, Stiven S.

    2014-12-01

    We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.

  18. Prediction of Baseflow Index of Catchments using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Yadav, B.; Hatfield, K.

    2017-12-01

    We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using a surrogate of catchment scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elasticnet, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on the baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources including a digital elevation model, soil, land use, and climate data, and other publicly available ancillary and geospatial data. 80% of the catchments were used to train the ML algorithms, and the remaining 20% were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation with exhaustive grid search was used to tune the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction that are based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation. The most promising algorithms, exceeding an accuracy score (r-square) of 0.7 on test data, include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
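
    A compact sketch of the reported workflow under stated assumptions (scikit-learn, synthetic features standing in for the six catchment attributes): an 80/20 split, exhaustive grid search with k-fold cross-validation, and an extremely randomized trees regressor scored by r-square on the held-out test set.

      # Sketch: 80/20 split, grid-search cross-validation, extremely
      # randomized trees regression. Data and grid are illustrative.
      from sklearn.datasets import make_regression
      from sklearn.ensemble import ExtraTreesRegressor
      from sklearn.model_selection import GridSearchCV, train_test_split

      X, y = make_regression(n_samples=800, n_features=6, noise=5.0, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

      grid = GridSearchCV(ExtraTreesRegressor(random_state=0),
                          param_grid={"n_estimators": [200, 500],
                                      "max_depth": [None, 10, 20]},
                          cv=5, scoring="r2")
      grid.fit(X_tr, y_tr)
      print("best params:", grid.best_params_)
      print("test r2: %.3f" % grid.score(X_te, y_te))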

  19. Gain in Student Understanding of the Role of Random Variation in Evolution Following Teaching Intervention Based on Luria-Delbruck Experiment†

    PubMed Central

    Robson, Rachel L.; Burns, Susan

    2011-01-01

    Undergraduate students in introductory biology classes are typically saddled with pre-existing popular beliefs that impede their ability to learn about biological evolution. One of the most common misconceptions about evolution is that the environment causes advantageous mutations, rather than the correct view that mutations occur randomly and the environment only selects for mutants with advantageous traits. In this study, a significant gain in student understanding of the role of randomness in evolution was observed after students participated in an inquiry-based pedagogical intervention based on the Luria-Delbruck experiment. Questionnaires with isomorphic questions regarding environmental selection among random mutants were administered to study participants (N = 82) in five separate sections of a sophomore-level microbiology class before and after the teaching intervention. Demographic data on each participant was also collected, in a way that preserved anonymity. Repeated measures analysis showed that post-test scores were significantly higher than pre-test scores with regard to the questions about evolution (F(1, 77) = 25.913, p < 0.001). Participants’ pre-existing beliefs about evolution had no significant effect on gain in understanding of this concept. This study indicates that conducting and discussing an experiment about phage resistance in E. coli may improve student understanding of the role of stochastic events in evolution more broadly, as post-test answers showed that students were able to apply the lesson of the Luria-Delbruck experiment to other organisms subjected to other kinds of selection. PMID:23653732

  20. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice and on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
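
    A minimal illustration of the two probability sampling methods mentioned above, drawn from a toy sampling frame; the clinic strata and sample sizes are arbitrary assumptions.

      # Minimal illustration: simple random sample vs. stratified random sample.
      import random

      random.seed(0)
      frame = [{"id": i, "clinic": random.choice(["A", "B", "C"])} for i in range(1000)]

      # Simple random sample of 50 subjects from the whole frame.
      simple = random.sample(frame, 50)

      # Stratified random sample: 20 subjects from each clinic stratum.
      stratified = []
      for clinic in ("A", "B", "C"):
          stratum = [s for s in frame if s["clinic"] == clinic]
          stratified.extend(random.sample(stratum, 20))

      print(len(simple), len(stratified))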

  1. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice and on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  2. Shaping Attention with Reward: Effects of Reward on Space- and Object-Based Selection

    PubMed Central

    Shomstein, Sarah; Johnson, Jacoba

    2014-01-01

    The contribution of rewarded actions to automatic attentional selection remains obscure. We hypothesized that some forms of automatic orienting, such as object-based selection, can be completely abandoned in lieu of reward maximizing strategy. While presenting identical visual stimuli to the observer, in a set of two experiments, we manipulate what is being rewarded (different object targets or random object locations) and the type of reward received (money or points). It was observed that reward alone guides attentional selection, entirely predicting behavior. These results suggest that guidance of selective attention, while automatic, is flexible and can be adjusted in accordance with external non-sensory reward-based factors. PMID:24121412

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
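
    The random seed-selection baseline described above can be sketched with networkx, whose betweenness_centrality accepts a k parameter that estimates centrality from k randomly chosen source vertices; the alternate seed-selection strategies of the report would replace that random choice. Graph size and k are illustrative.

      # Sketch: exact vs. sampled betweenness centrality on a toy scale-free graph.
      import networkx as nx

      G = nx.barabasi_albert_graph(2000, 3, seed=0)          # toy scale-free graph

      exact = nx.betweenness_centrality(G)                   # O(|V||E|), all sources
      approx = nx.betweenness_centrality(G, k=100, seed=0)   # O(|S||E|), 100 random seeds

      top_exact = sorted(exact, key=exact.get, reverse=True)[:10]
      top_approx = sorted(approx, key=approx.get, reverse=True)[:10]
      print("overlap in top-10:", len(set(top_exact) & set(top_approx)))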

  4. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve diagnosis of cardiac arrhythmia. Random forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification-performance point of view. In general, multiclass datasets having an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample sizes, and it is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmias, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resultant accuracy of the classifier is found to be 90.0%, which is quite high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
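
    A sketch of the resampling idea, assuming scikit-learn and a synthetic imbalanced dataset in place of the arrhythmia data: each small class is randomly oversampled to the size of the largest class before the Random Forests ensemble is trained. The correlation-based feature selection step is omitted here.

      # Sketch: random oversampling of minority classes, then a Random Forests
      # ensemble. Class sizes are illustrative.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.utils import resample

      X, y = make_classification(n_samples=452, n_classes=3, n_informative=8,
                                 weights=[0.8, 0.15, 0.05], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # Oversample every class up to the size of the largest class.
      target = max(np.bincount(y_tr))
      parts = []
      for c in np.unique(y_tr):
          Xc, yc = resample(X_tr[y_tr == c], y_tr[y_tr == c],
                            n_samples=target, replace=True, random_state=0)
          parts.append((Xc, yc))
      X_bal = np.vstack([p[0] for p in parts])
      y_bal = np.concatenate([p[1] for p in parts])

      rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
      print("test accuracy: %.3f" % rf.score(X_te, y_te))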

  5. A Randomized Controlled Trial of an Electronic Informed Consent Process

    PubMed Central

    Rothwell, Erin; Wong, Bob; Rose, Nancy C.; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A.; Botkin, Jeffrey R.

    2018-01-01

    A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. PMID:25747685

  6. Vector control of wind turbine on the basis of the fuzzy selective neural net*

    NASA Astrophysics Data System (ADS)

    Engel, E. A.; Kovalev, I. V.; Engel, N. E.

    2016-04-01

    This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are performed to clarify the applicability and advantages of the proposed vector control of the wind turbine based on the fuzzy selective neural net. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance, as compared to a classical control model with PID controllers based on the traditional maximum torque control strategy.

  7. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed methods in a real-world data set.

  8. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  9. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  10. Generation of Aptamers from A Primer-Free Randomized ssDNA Library Using Magnetic-Assisted Rapid Aptamer Selection

    NASA Astrophysics Data System (ADS)

    Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih

    2017-04-01

    Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra-sequences may cause non-specific bindings, which potentially interfere with good binding for random sequences. The Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers. No repeat selection cycle is required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive proteins (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced and binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that a primer-free aptamer has high specificity toward targets. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.

  11. Comparative levels of creative ability in black and white college students.

    PubMed

    Glover, J A

    1976-03-01

    Eighty-seven black, educational psychology students from three intact, randomly selected classes at Tennessee State University were compared to ninety-four white, educational psychology students from three intact, randomly selected classes at the University of Tennessee on Torrance's Unusual Uses and Ask and Guess activities. No differences were found on the frequency of flexibility measures of either activity. No attempt was made to examine the results on this "Level II" mental ability measure on any variable except race. There were no differences based on race.

  12. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  13. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  14. There is room for selection in a small local pig breed when using optimum contribution selection: a simulation study.

    PubMed

    Gourdine, J L; Sørensen, A C; Rydhmer, L

    2012-01-01

    Selection progress must be carefully balanced against the conservation of genetic variation in small populations of local breeds. Well-defined breeding programs with specified selection traits are rare in local pig breeds. Given the small population size, the focus is often on the management of genetic diversity. However, in local breeds, optimum contribution selection (OCS) can be applied to control the rate of inbreeding and to avoid reduced performance in traits with high market value. The aim of this study was to assess the extent to which a breeding program aiming for improved product quality in a small local breed would be feasible. We used stochastic simulations to compare 25 scenarios. The scenarios differed in size of population, selection intensity of boars, type of selection (random selection, truncation selection based on BLUP breeding values, or optimum contribution selection based on BLUP breeding values), and heritability of the selection trait. It was assumed that the local breed is used in an extensive system for a high-meat-quality market. The simulations showed that in the smallest population (300 female reproducers), inbreeding increased by 0.8% when selection was performed at random. With optimum contribution selection, genetic progress can be achieved that is almost as great as that with truncation selection based on BLUP breeding values (0.2 to 0.5 vs. 0.3 to 0.5 genetic SD, P < 0.05), but at a considerably decreased rate of inbreeding (0.7 to 1.2 vs. 2.3 to 5.7%, P < 0.01). This confirmation of the potential utilization of OCS even in small populations is important in the context of sustainable management and the use of animal genetic resources.

  15. Goal selection versus process control while learning to use a brain-computer interface

    NASA Astrophysics Data System (ADS)

    Royer, Audrey S.; Rose, Minn L.; He, Bin

    2011-06-01

    A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.

  16. Relationships between habitat quality and measured condition variables in Gulf of Mexico mangroves

    EPA Science Inventory

    Abstract Ecosystem condition assessments were conducted for 12 mangrove sites in the northern Gulf of Mexico. Nine sites were selected randomly; three were selected a priori based on best professional judgment to represent a poor, intermediate and good environmental condition. D...

  17. Inference from habitat-selection analysis depends on foraging strategies.

    PubMed

    Bastille-Rousseau, Guillaume; Fortin, Daniel; Dussault, Christian

    2010-11-01

    1. Several methods have been developed to assess habitat selection, most of which are based on a comparison between habitat attributes in used vs. unused or random locations, such as the popular resource selection functions (RSFs). Spatial evaluation of residency time has been recently proposed as a promising avenue for studying habitat selection. Residency-time analyses assume a positive relationship between residency time within habitat patches and selection. We demonstrate that RSF and residency-time analyses provide different information about the process of habitat selection. Further, we show how the consideration of switching rate between habitat patches (interpatch movements) together with residency-time analysis can reveal habitat-selection strategies. 2. Spatially explicit, individual-based modelling was used to simulate foragers displaying one of six foraging strategies in a heterogeneous environment. The strategies combined one of three patch-departure rules (fixed-quitting-harvest-rate, fixed-time and fixed-amount strategy), together with one of two interpatch-movement rules (random or biased). Habitat selection of simulated foragers was then assessed using RSF, residency-time and interpatch-movement analyses. 3. Our simulations showed that RSFs and residency times are not always equivalent. When foragers move in a non-random manner and do not increase residency time in richer patches, residency-time analysis can provide misleading assessments of habitat selection. This is because the overall time spent in the various patch types not only depends on residency times, but also on interpatch-movement decisions. 4. We suggest that RSFs provide the outcome of the entire selection process, whereas residency-time and interpatch-movement analyses can be used in combination to reveal the mechanisms behind the selection process. 5. We showed that there is a risk in using residency-time analysis alone to infer habitat selection. Residency-time analyses, however, may enlighten the mechanisms of habitat selection by revealing central components of resource-use strategies. Given that management decisions are often based on resource-selection analyses, the evaluation of resource-use strategies can be key information for the development of efficient habitat-management strategies. Combining RSF, residency-time and interpatch-movement analyses is a simple and efficient way to gain a more comprehensive understanding of habitat selection. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
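
    For concreteness, an RSF of the kind referred to above is commonly estimated as a used-versus-available logistic regression; the sketch below fits one on simulated locations with two hypothetical covariates, assuming scikit-learn. It illustrates only the estimation step, not the authors' individual-based foraging simulations.

      # Sketch: a resource selection function (RSF) fitted as a used-vs-available
      # logistic regression. Covariates ("forage", "distance to road") are hypothetical.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # "Available" (random) locations; "used" locations are drawn from them with a
      # probability that increases with forage and decreases with road proximity.
      available = np.column_stack([rng.uniform(0, 1, 2000),     # forage
                                   rng.uniform(0, 1, 2000)])    # distance to road
      p_use = 1 / (1 + np.exp(-(3 * available[:, 0] - available[:, 1] - 1)))
      used = available[rng.random(2000) < p_use]

      X = np.vstack([used, available])
      y = np.concatenate([np.ones(len(used)), np.zeros(len(available))])

      rsf = LogisticRegression().fit(X, y)
      print("RSF coefficients (forage, road):", rsf.coef_[0])   # positive forage -> selection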

  18. Changing friend selection in middle school: A social network analysis of a randomized intervention study designed to prevent adolescent problem behavior

    PubMed Central

    DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J.

    2015-01-01

    Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends five years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum—one level of the Family Check-up model—on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n=500) was randomly assigned to the intervention and the other half (n=498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within School 1, but not within Schools 2 or 3. The effects of friend selection in School 1 translated into reductions in observed deviancy training five years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance five years later. PMID:26377235

  19. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (EBCD) and the lower boundary (best case) given by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on a quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
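
    A small simulation sketch of two of the designs compared above, Efron's biased coin design (p = 2/3) and the big stick design (imbalance tolerance b = 3), recording the maximum absolute imbalance and the correct-guess (CG) probability under a convergence guessing strategy. Sample size, trial count and parameters are illustrative, not the 260 scenarios of the study.

      # Sketch: simulate Efron's BCD and the big stick design (BSD) and measure
      # maximum absolute imbalance and correct-guess (CG) probability.
      import random

      random.seed(0)

      def simulate(design, n=100, trials=5000, p=2/3, b=3):
          max_imb, correct = 0, 0
          for _ in range(trials):
              d = 0                                     # imbalance = (#A - #B)
              for _ in range(n):
                  # Guesser predicts the currently under-represented arm.
                  guess = "A" if d < 0 else "B" if d > 0 else random.choice("AB")
                  if design == "BCD":
                      p_a = 0.5 if d == 0 else (p if d < 0 else 1 - p)
                  else:                                 # BSD: force the arm at the boundary
                      p_a = 0.5 if abs(d) < b else (1.0 if d <= -b else 0.0)
                  arm = "A" if random.random() < p_a else "B"
                  correct += (arm == guess)
                  d += 1 if arm == "A" else -1
                  max_imb = max(max_imb, abs(d))
          return max_imb, correct / (n * trials)

      for name in ("BCD", "BSD"):
          imb, cg = simulate(name)
          print(f"{name}: max |imbalance| = {imb}, CG probability = {cg:.3f}")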

  20. Effectiveness of a new health care organization model in primary care for chronic cardiovascular disease patients based on a multifactorial intervention: the PROPRESE randomized controlled trial.

    PubMed

    Orozco-Beltran, Domingo; Ruescas-Escolano, Esther; Navarro-Palazón, Ana Isabel; Cordero, Alberto; Gaubert-Tortosa, María; Navarro-Perez, Jorge; Carratalá-Munuera, Concepción; Pertusa-Martínez, Salvador; Soler-Bahilo, Enrique; Brotons-Muntó, Francisco; Bort-Cubero, Jose; Nuñez-Martinez, Miguel Angel; Bertomeu-Martinez, Vicente; Gil-Guillen, Vicente Francisco

    2013-08-02

    To evaluate the effectiveness of a new multifactorial intervention to improve health care for chronic ischemic heart disease patients in primary care. The strategy has two components: a) organizational, for the patient/professional relationship, and b) training for professionals. Experimental study: a randomized clinical trial with a one-year follow-up period, set in multicenter primary care (15 health centers). For the intervention group, 15 health centers were selected from those participating in the ESCARVAL study; once a center agreed to participate, patients were randomly selected from all patients with ischemic heart disease registered in its electronic health records. For the control group, a random sample of patients with ischemic heart disease was selected from the electronic records of all 72 health centers. This study aims to evaluate the efficacy of a multifactorial intervention strategy for patients with ischemic heart disease in improving the degree of control of cardiovascular risk factors, quality of life, number of visits, and number of hospitalizations. NCT01826929.

  1. Organic Ferroelectric-Based 1T1T Random Access Memory Cell Employing a Common Dielectric Layer Overcoming the Half-Selection Problem.

    PubMed

    Zhao, Qiang; Wang, Hanlin; Ni, Zhenjie; Liu, Jie; Zhen, Yonggang; Zhang, Xiaotao; Jiang, Lang; Li, Rongjin; Dong, Huanli; Hu, Wenping

    2017-09-01

    Organic electronics based on poly(vinylidenefluoride/trifluoroethylene) (P(VDF-TrFE)) dielectric is facing great challenges in flexible circuits. As one indispensable part of integrated circuits, there is an urgent demand for low-cost and easy-fabrication nonvolatile memory devices. A breakthrough is made on a novel ferroelectric random access memory cell (1T1T FeRAM cell) consisting of one selection transistor and one ferroelectric memory transistor in order to overcome the half-selection problem. Unlike complicated manufacturing using multiple dielectrics, this system simplifies 1T1T FeRAM cell fabrication using one common dielectric. To achieve this goal, a strategy for semiconductor/insulator (S/I) interface modulation is put forward and applied to nonhysteretic selection transistors with high performances for driving or addressing purposes. As a result, a high hole mobility of 3.81 cm² V⁻¹ s⁻¹ (average) for 2,6-diphenylanthracene (DPA) and an electron mobility of 0.124 cm² V⁻¹ s⁻¹ (average) for N,N'-1H,1H-perfluorobutyl dicyanoperylenecarboxydiimide (PDI-FCN₂) are obtained in selection transistors. In this work, we demonstrate this technology's potential for organic ferroelectric-based pixelated memory module fabrication. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A randomized controlled trial of an electronic informed consent process.

    PubMed

    Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R

    2014-12-01

    A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. © The Author(s) 2014.

  3. Analysis of creative mathematic thinking ability in problem based learning model based on self-regulation learning

    NASA Astrophysics Data System (ADS)

    Munahefi, D. N.; Waluya, S. B.; Rochmad

    2018-03-01

    The purpose of this research was to identify the effectiveness of the Problem Based Learning (PBL) model based on Self-Regulated Learning (SRL) on the ability of mathematical creative thinking, and to analyze the mathematical creative thinking ability of high school students in solving mathematical problems. The population of this study was students of grade X SMA N 3 Klaten. The research method used was sequential explanatory. The quantitative stage used a simple random sampling technique in which two classes were selected at random: the experimental class was taught with the PBL model based on SRL and the control class was taught with an expository model. The qualitative stage used a non-probability sampling technique in which three students were selected from each of the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students at the low academic level achieved the fluency and flexibility aspects with the PBL-SRL approach. Students at the medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well developed. Students at the high academic level could reach the originality aspect.

  4. Studying the Effectiveness of Combination Therapy (Based on Executive Function and Sensory Integration) Child-Centered on the Symptoms of Attention Deficit/hyperactivity Disorder (ADHD)

    ERIC Educational Resources Information Center

    Salami, Fatemeh; Ashayeri, Hassan; Estaki, Mahnaz; Farzad, Valiollah; Entezar, Roya Koochak

    2017-01-01

    The aim of the present study is to examine the effectiveness of combination therapy based on executive function and sensory integration child-centered on ADHD. For this purpose, from among all first, second and third grade primary school students in Shiraz, 40 children were selected. The selected students were randomly assigned in two groups of…

  5. Selection of DNA aptamers against Human Cardiac Troponin I for colorimetric sensor based dot blot application.

    PubMed

    Dorraj, Ghamar Soltan; Rassaee, Mohammad Javad; Latifi, Ali Mohammad; Pishgoo, Bahram; Tavallaei, Mahmood

    2015-08-20

    Troponin T and I are ideal markers that are highly sensitive and specific for myocardial injury and have shown better efficacy than earlier markers. Since aptamers are ssDNA or RNA molecules that bind to a wide variety of target molecules, the purpose of this research was to select an aptamer binding Human Cardiac Troponin I from a 79 bp single-stranded DNA (ssDNA) random library by systematic evolution of ligands by exponential enrichment (SELEX), based on several selection and amplification steps. Human Cardiac Troponin I protein was coated onto the surface of streptavidin magnetic beads to extract specific aptamers from a large and diverse random ssDNA initial oligonucleotide library. As a result, several aptamers were selected and further examined for binding affinity and specificity. Finally, TnIApt 23 showed the best affinity, in the nanomolar range (2.69 nM), toward the target protein. A simple and rapid colorimetric detection assay for Human Cardiac Troponin I using the novel and specific aptamer-AuNPs conjugates based on a dot blot assay was developed. The detection limit for this protein using the aptamer-AuNPs-based assay was found to be 5 ng/ml. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. THE WESTERN LAKE SUPERIOR COMPARATIVE WATERSHED FRAMEWORK: A FIELD TEST OF GEOGRAPHICALLY-DEPENDENT VS. THRESHOLD-BASED GEOGRAPHICALLY-INDEPENDENT CLASSIFICATION

    EPA Science Inventory

    Stratified random selection of watersheds allowed us to compare geographically-independent classification schemes based on watershed storage (wetland + lake area/watershed area) and forest fragmentation with a geographically-based classification scheme within the Northern Lakes a...

  7. A large-scale cluster randomized trial to determine the effects of community-based dietary sodium reduction--the China Rural Health Initiative Sodium Reduction Study.

    PubMed

    Li, Nicole; Yan, Lijing L; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce

    2013-11-01

    Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread and an effective sodium reduction program has potential to improve cardiovascular health. This study is a large-scale, cluster-randomized trial conducted in five Northern Chinese provinces. Two counties have been selected from each province and 12 townships in each county making a total of 120 clusters. Within each township one village has been selected for participation with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is dietary sodium intake level estimated from assays of 24-hour urine. The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide. © 2013.
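
    The 1:1 randomization of villages stratified by county is a standard design step; a minimal sketch of how such an assignment could be generated is shown below (the village and county labels are hypothetical illustrations, not the trial's data).

```python
import random
from collections import defaultdict

def stratified_1to1_randomization(villages, seed=2011):
    """Assign villages to intervention/control at random, 1:1 within each county (stratum)."""
    rng = random.Random(seed)
    by_county = defaultdict(list)
    for village, county in villages:
        by_county[county].append(village)

    assignment = {}
    for county, members in by_county.items():
        rng.shuffle(members)
        half = len(members) // 2
        for v in members[:half]:
            assignment[v] = "intervention"
        for v in members[half:]:
            assignment[v] = "control"
    return assignment

# Hypothetical example: 2 counties with 12 villages each
villages = [(f"village_{c}_{i}", f"county_{c}") for c in range(2) for i in range(12)]
print(stratified_1to1_randomization(villages))
```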

  8. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    PubMed

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel, which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes were initially mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed, and 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field-locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households. This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.
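
    The selection step itself, drawing a fixed-size random subset from the list of mapped home coordinates, could also be reproduced without Excel in a few lines; the file name and column layout below are assumptions for illustration only.

```python
import csv
import random

def select_random_households(csv_path, n_select=96, seed=42):
    """Pick a random subset of mapped homes for field surveys."""
    with open(csv_path, newline="") as f:
        homes = list(csv.DictReader(f))  # assumed columns: id, lat, lon
    rng = random.Random(seed)
    return rng.sample(homes, min(n_select, len(homes)))

# Hypothetical usage with a file exported from Google Earth/ArcMap:
# for home in select_random_households("mapped_homes.csv", n_select=96):
#     print(home["id"], home["lat"], home["lon"])
```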

  9. Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing

    NASA Astrophysics Data System (ADS)

    Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel

    Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can potentially be executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show how, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
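
    Of the three strategies, Adaptive Random Testing is the least self-explanatory: each new test case is drawn from a pool of random candidates so that it lies as far as possible from the tests already executed. A generic sketch, assuming numeric test-input vectors and a simple Euclidean distance rather than the paper's UML/MARTE environment models, is:

```python
import math
import random

def adaptive_random_testing(run_test, new_candidate, n_tests=50, pool_size=10, seed=0):
    """Adaptive Random Testing: choose each next test to maximise its minimum
    distance from the already-executed tests (numeric input vectors assumed)."""
    rng = random.Random(seed)
    executed = [new_candidate(rng)]
    run_test(executed[0])
    for _ in range(n_tests - 1):
        candidates = [new_candidate(rng) for _ in range(pool_size)]
        best = max(candidates,
                   key=lambda c: min(math.dist(c, e) for e in executed))
        run_test(best)
        executed.append(best)
    return executed

# Hypothetical usage: 3-dimensional numeric test inputs in [0, 100]
tests = adaptive_random_testing(
    run_test=lambda t: None,  # placeholder for executing the system under test with input t
    new_candidate=lambda r: [r.uniform(0, 100) for _ in range(3)])
```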

  10. Affect-Aware Adaptive Tutoring Based on Human-Automation Etiquette Strategies.

    PubMed

    Yang, Euijung; Dorneich, Michael C

    2018-06-01

    We investigated adapting the interaction style of intelligent tutoring system (ITS) feedback based on human-automation etiquette strategies. Most ITSs adapt the content difficulty level, adapt the feedback timing, or provide extra content when they detect cognitive or affective decrements. Our previous work demonstrated that changing the interaction style via different feedback etiquette strategies has differential effects on students' motivation, confidence, satisfaction, and performance. The best etiquette strategy was also determined by user frustration. Based on these findings, a rule set was developed that systemically selected the proper etiquette strategy to address one of four learning factors (motivation, confidence, satisfaction, and performance) under two different levels of user frustration. We explored whether etiquette strategy selection based on this rule set (systematic) or random changes in etiquette strategy for a given level of frustration affected the four learning factors. Participants solved mathematics problems under different frustration conditions with feedback that adapted dynamic changes in etiquette strategies either systematically or randomly. The results demonstrated that feedback with etiquette strategies chosen systematically via the rule set could selectively target and improve motivation, confidence, satisfaction, and performance more than changing etiquette strategies randomly. The systematic adaptation was effective no matter the level of frustration for the participant. If computer tutors can vary the interaction style to effectively mitigate negative emotions, then ITS designers would have one more mechanism in which to design affect-aware adaptations that provide the proper responses in situations where human emotions affect the ability to learn.
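
    The contrast between rule-based (systematic) and random etiquette-strategy selection can be illustrated with a toy rule table; the strategy names and the mapping below are hypothetical placeholders, not the study's actual rule set.

```python
import random

STRATEGIES = ["direct", "polite", "encouraging", "neutral"]  # hypothetical labels

# Hypothetical rule set: (learning factor to target, high frustration?) -> strategy
RULES = {
    ("motivation", False): "encouraging", ("motivation", True): "polite",
    ("confidence", False): "encouraging", ("confidence", True): "neutral",
    ("satisfaction", False): "polite",    ("satisfaction", True): "neutral",
    ("performance", False): "direct",     ("performance", True): "polite",
}

def systematic_strategy(target_factor, high_frustration):
    """Rule-set-driven selection, as in the systematic condition."""
    return RULES[(target_factor, high_frustration)]

def random_strategy(rng=random):
    """Random selection for a given frustration level, as in the control condition."""
    return rng.choice(STRATEGIES)

print(systematic_strategy("confidence", high_frustration=True))
print(random_strategy())
```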

  11. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
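
    A compact sketch of the first strategy (run a selector on each imputed dataset, then keep variables selected in most imputations) follows; it uses the lasso as the per-dataset selector and scikit-learn's iterative imputer as a stand-in for a full multiple-imputation engine, both assumptions made for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

def select_across_imputations(X_missing, y, n_imputations=5, keep_fraction=0.5):
    """Strategy 1: run a selector (lasso) on each imputed dataset and keep
    variables selected in at least keep_fraction of the imputations."""
    counts = np.zeros(X_missing.shape[1])
    for m in range(n_imputations):
        X_imp = IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X_missing)
        counts += (LassoCV(cv=5).fit(X_imp, y).coef_ != 0)
    return np.where(counts >= keep_fraction * n_imputations)[0]

# Hypothetical data with values missing at random
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
X[rng.random(X.shape) < 0.1] = np.nan
print(select_across_imputations(X, y))
```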

  12. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then finds a codebook starting from the initial codevectors supplied by the PCA-based grouping. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
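
    A minimal sketch of the PCA-LBG idea, grouping training vectors along the first principal component, seeding the codebook with the median, centroid, or a random member of each group, and then refining with an LBG (k-means-style) loop, could look like this; the grouping rule and refinement step are simplified assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pca_lbg_codebook(train_vectors, codebook_size=16, seed_rule="median"):
    """Seed an initial codebook from groups formed along the first principal
    component, then refine it with an LBG-style (k-means) loop."""
    proj = PCA(n_components=1).fit_transform(train_vectors).ravel()
    order = np.argsort(proj)
    groups = np.array_split(order, codebook_size)  # equal-size groups along the PC axis

    initial = []
    for g in groups:
        if seed_rule == "median":         # PCA-LBG-Median
            initial.append(train_vectors[g[len(g) // 2]])
        elif seed_rule == "centroid":     # PCA-LBG-Centroid
            initial.append(train_vectors[g].mean(axis=0))
        else:                             # PCA-LBG-Random
            initial.append(train_vectors[np.random.choice(g)])

    # LBG refinement is essentially Lloyd/k-means iteration from this seed
    kmeans = KMeans(n_clusters=codebook_size, init=np.asarray(initial), n_init=1)
    return kmeans.fit(train_vectors).cluster_centers_

codebook = pca_lbg_codebook(np.random.rand(1000, 16), codebook_size=16)
```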

  13. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    Recently, MR simulators have gained popularity because they avoid the unnecessary radiation exposure associated with the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
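
    The core regression step, training a random forest to map patch-based MR features to CT intensities voxel by voxel, can be sketched as follows; the features are reduced to flattened intensity patches of a toy 2-D image, a simplification of the paper's feature extraction and selection pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(image, half=2):
    """Flatten the (2*half+1)^2 intensity patch around every interior pixel."""
    feats, coords = [], []
    for i in range(half, image.shape[0] - half):
        for j in range(half, image.shape[1] - half):
            feats.append(image[i - half:i + half + 1, j - half:j + half + 1].ravel())
            coords.append((i, j))
    return np.asarray(feats), coords

# Hypothetical aligned training pair (MR image and CT of the same subject)
rng = np.random.default_rng(1)
mr_train = rng.random((64, 64))
ct_train = 1000 * mr_train + rng.normal(scale=20, size=(64, 64))  # synthetic mapping

X_train, coords = extract_patches(mr_train)
y_train = np.array([ct_train[i, j] for i, j in coords])
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predict a pseudo CT for a new MR image, voxel by voxel
mr_new = rng.random((64, 64))
X_new, coords_new = extract_patches(mr_new)
pseudo_ct = np.zeros_like(mr_new)
for (i, j), value in zip(coords_new, forest.predict(X_new)):
    pseudo_ct[i, j] = value
```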

  14. Expert Design Advisor

    DTIC Science & Technology

    1990-10-01

    to economic, technological, spatial or logistic concerns, or involve training, man-machine interfaces, or integration into existing systems. Once the...probabilistic reasoning, mixed analysis- and simulation-oriented, mixed computation- and communication-oriented, nonpreemptive static priority...scheduling base, nonrandomized, preemptive static priority scheduling base, randomized, simulation-oriented, and static scheduling base. The selection of both

  15. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Steincamp, James; Taylor, Jaime

    2003-01-01

    A reduced-surrogate, one-point-crossover genetic algorithm with random rank-based selection was used successfully to estimate the multiple phases of a segmented optical system modeled on the seven-mirror Systematic Image-Based Optical Alignment testbed located at NASA's Marshall Space Flight Center.
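
    Two of the named ingredients, random rank-based selection and one-point crossover, are generic genetic-algorithm operators; a minimal sketch, assuming real-valued phase vectors and a user-supplied fitness function (the reduced-surrogate variant of crossover is not reproduced here), is:

```python
import random

def rank_select(population, fitness, rng):
    """Random rank-based selection: selection probability grows linearly with rank."""
    ranked = sorted(population, key=fitness)            # worst first
    weights = list(range(1, len(ranked) + 1))           # best gets the largest weight
    return rng.choices(ranked, weights=weights, k=1)[0]

def one_point_crossover(a, b, rng):
    """Exchange gene tails after a random cut point."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def ga_step(population, fitness, rng):
    """Produce the next generation with rank selection and one-point crossover."""
    return [one_point_crossover(rank_select(population, fitness, rng),
                                rank_select(population, fitness, rng), rng)
            for _ in range(len(population))]

# Hypothetical usage: 7 phase values per individual, fitness = negative misalignment
rng = random.Random(0)
pop = [[rng.uniform(-3.14, 3.14) for _ in range(7)] for _ in range(20)]
pop = ga_step(pop, fitness=lambda ind: -sum(p * p for p in ind), rng=rng)
```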

  16. Effects of the Case-Based Instruction Method on the Experience of Learning

    ERIC Educational Resources Information Center

    Amiri Farahani, Leila; Heidari, Tooba

    2014-01-01

    This semi-experimental study was conducted with twenty-seven midwifery students who were randomly allocated to either case-based instruction or lecture-based instruction groups. The selected subjects -- foetal intrapartum assessment, foetal antepartum assessment, ABO and Rh blood group system mismatch -- were presented in four ninety-minute…

  17. On the unfitness of natural selection to explain sexual reproduction, and the difficulties that remain.

    PubMed

    van Rossum, Joris

    2006-01-01

    In its essence, the explanatory potential of the theory of natural selection is based on the iterative process of random production and variation, and subsequent non-random, directive selection. It is shown that within this explanatory framework, there is no place for the explanation of sexual reproduction. Thus, in Darwinistic literature, sexual reproduction - one of nature's most salient characteristics - is often either assumed or ignored, but not explained. This fundamental and challenging gap within a complete naturalistic understanding of living beings calls for a cybernetic account of sexual reproduction, meaning an understanding of the dynamic and creative potential of living beings to continuously and autonomously produce new organisms with unique and specific constellations.

  18. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR)

    PubMed Central

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-01-01

    Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100

  19. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    PubMed

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.

  20. Unexpected substrate specificity of T4 DNA ligase revealed by in vitro selection

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1993-01-01

    We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 DNA ligase. We find that the ensemble of selected sequences ligates about 50 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, many of the selected sequences failed to produce a match at or close to the ligation junction. None of the 20 selected oligomers that we sequenced produced a match two bases upstream from the ligation junction.

  1. 13 CFR 123.410 - Which loan requests will SBA fund?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... each application (loan request) as it is received. SBA will fund loan requests which meet the selection... allocated all available program funds. Multiple applications received on the same day will be ranked by a computer based random selection system to determine their funding order. SBA will notify you in writing of...

  2. 13 CFR 123.410 - Which loan requests will SBA fund?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... each application (loan request) as it is received. SBA will fund loan requests which meet the selection... allocated all available program funds. Multiple applications received on the same day will be ranked by a computer based random selection system to determine their funding order. SBA will notify you in writing of...

  3. 13 CFR 123.410 - Which loan requests will SBA fund?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... each application (loan request) as it is received. SBA will fund loan requests which meet the selection... allocated all available program funds. Multiple applications received on the same day will be ranked by a computer based random selection system to determine their funding order. SBA will notify you in writing of...

  4. The coalescent process in models with selection and recombination.

    PubMed

    Hudson, R R; Kaplan, N L

    1988-11-01

    The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.

  5. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease and sleep disorders. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals; the LS_SVM classifier is thus applied to the features extracted and selected by the SRS and SFS steps. The experimental results show that the method achieves 99.90, 99.80 and 100 % for classification accuracy, sensitivity and specificity, respectively.
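
    The pipeline (simple random sampling of time-domain points as features, sequential feature selection, then an SVM-type classifier) can be sketched generically with scikit-learn; a standard SVC stands in for the LS_SVM used in the paper, the synthetic epochs are placeholders, and treating SRS as a random draw of time samples is an interpretation rather than the authors' exact feature definition.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical EEG epochs: 200 epochs x 4096 time samples, two classes
n_epochs, n_samples = 200, 4096
labels = rng.integers(0, 2, n_epochs)
signals = rng.normal(size=(n_epochs, n_samples)) + labels[:, None] * 0.3

# Step 1: simple random sampling (SRS) of time-domain points as raw features
srs_idx = rng.choice(n_samples, size=32, replace=False)
features = signals[:, srs_idx]

# Steps 2-3: sequential feature selection (SFS) feeding an SVM classifier
clf = make_pipeline(
    SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=5, direction="forward"),
    SVC(kernel="rbf"),
)
print(cross_val_score(clf, features, labels, cv=5).mean())
```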

  6. Changing Friend Selection in Middle School: A Social Network Analysis of a Randomized Intervention Study Designed to Prevent Adolescent Problem Behavior.

    PubMed

    DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J

    2016-04-01

    Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school-based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends 5 years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum-one level of the Family Check-up model-on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n = 500) was randomly assigned to the intervention, and the other half (n = 498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within school 1 but not within schools 2 or 3. The effects of friend selection in school 1 translated into reductions in observed deviancy training 5 years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study, the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance 5 years later.

  7. Social power and opinion formation in complex networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2013-02-01

    In this paper we investigate the effects of social power on the evolution of opinions in model networks as well as in a number of real social networks. A continuous opinion formation model is considered and the analysis is performed through numerical simulation. Social power is given to a proportion of agents selected either randomly or based on their degrees. As artificial network structures, we consider scale-free networks constructed through preferential attachment and Watts-Strogatz networks. Numerical simulations show that scale-free networks with degree-based social power on the hub nodes have an optimal case where the largest number of nodes reaches a consensus. However, giving power to a random selection of nodes did not improve consensus properties. Introducing social power in Watts-Strogatz networks did not significantly change the consensus profile.
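
    The two ways of granting social power, to randomly chosen agents versus to the highest-degree hubs, reduce to two node-selection rules; a networkx sketch on a preferential-attachment (scale-free) graph is shown below, with the power value stored as a placeholder node attribute.

```python
import random
import networkx as nx

def grant_social_power(graph, fraction=0.1, mode="degree", power=5.0, seed=0):
    """Mark a fraction of agents as powerful, chosen either by degree or at random."""
    n_select = max(1, int(fraction * graph.number_of_nodes()))
    if mode == "degree":   # hub-based assignment
        ranked = sorted(graph.degree, key=lambda kv: kv[1], reverse=True)
        chosen = [node for node, _ in ranked[:n_select]]
    else:                  # random assignment
        chosen = random.Random(seed).sample(list(graph.nodes), n_select)
    nx.set_node_attributes(graph, 1.0, "power")   # baseline power for everyone
    for node in chosen:
        graph.nodes[node]["power"] = power
    return chosen

g_hub = nx.barabasi_albert_graph(1000, m=3, seed=1)   # preferential-attachment network
g_rand = g_hub.copy()
hubs = grant_social_power(g_hub, mode="degree")
randoms = grant_social_power(g_rand, mode="random")
```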

  8. Random forest models to predict aqueous solubility.

    PubMed

    Palmer, David S; O'Boyle, Noel M; Glen, Robert C; Mitchell, John B O

    2007-01-01

    Random Forest regression (RF), Partial-Least-Squares (PLS) regression, Support Vector Machines (SVM), and Artificial Neural Networks (ANN) were used to develop QSPR models for the prediction of aqueous solubility, based on experimental data for 988 organic molecules. The Random Forest regression model predicted aqueous solubility more accurately than those created by PLS, SVM, and ANN and offered methods for automatic descriptor selection, an assessment of descriptor importance, and an in-parallel measure of predictive ability, all of which serve to recommend its use. The prediction of log molar solubility for an external test set of 330 molecules that are solid at 25 degrees C gave an r2 = 0.89 and RMSE = 0.69 log S units. For a standard data set selected from the literature, the model performed well with respect to other documented methods. Finally, the diversity of the training and test sets is compared to the chemical space occupied by molecules in the MDL drug data report, on the basis of molecular descriptors selected by the regression analysis.
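
    The "in-parallel measure of predictive ability" referred to above is the forest's built-in out-of-bag estimate, and descriptor importance falls out of the same fit; a generic scikit-learn sketch with synthetic placeholder descriptors (not the paper's data) illustrates both.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(988, 50))                      # hypothetical molecular descriptors
log_solubility = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=988)

forest = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X, log_solubility)

print("out-of-bag R^2:", round(forest.oob_score_, 3))
# Descriptor importances, usable for automatic descriptor selection
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("most important descriptors:", top)
```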

  9. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology

    PubMed Central

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C.; Hobbs, Brian P.; Berry, Donald A.; Pentz, Rebecca D.; Tam, Alda; Hong, Waun K.; Ellis, Lee M.; Abbruzzese, James; Overman, Michael J.

    2015-01-01

    Purpose The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. Methods We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. Results A total of 86 primary end points were reported in 74 randomized trials; nine trials had greater than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Twenty-eight studies (37.8%) reported a total of 65 unplanned end points, 52 (80.0%) of which were not identified as unplanned. Thirty-one (41.9%) and 19 (25.7%) of 74 trials reported a total of 52 unplanned analyses involving primary end points and 33 unplanned analyses involving nonprimary end points, respectively. Studies reported positive unplanned end points and unplanned analyses more frequently than negative outcomes in abstracts (unplanned end points odds ratio, 6.8; P = .002; unplanned analyses odds ratio, 8.4; P = .007). Conclusion Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. PMID:26304898

  10. Predictors of Start of Different Antidepressants in Patient Charts among Patients with Depression

    PubMed Central

    Kim, Hyungjin Myra; Zivin, Kara; Choe, Hae Mi; Stano, Clare M.; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia

    2016-01-01

    Background In usual psychiatric care, antidepressant treatments are selected based on physician and patient preferences rather than being randomly allocated, resulting in spurious associations between these treatments and outcome studies. Objectives To identify factors recorded in electronic medical chart progress notes predictive of antidepressant selection among patients who had received a depression diagnosis. Methods This retrospective study sample consisted of 556 randomly selected Veterans Health Administration (VHA) patients diagnosed with depression from April 1, 1999 to September 30, 2004, stratified by the antidepressant agent, geographic region, gender, and year of depression cohort entry. Predictors were obtained from administrative data, and additional variables were abstracted from electronic medical chart notes in the year prior to the start of the antidepressant in five categories: clinical symptoms and diagnoses, substance use, life stressors, behavioral/ideation measures (e.g., suicide attempts), and treatments received. Multinomial logistic regression analysis was used to assess the predictors associated with different antidepressant prescribing, and adjusted relative risk ratios (RRR) are reported. Results Of the administrative data-based variables, gender, age, illicit drug abuse or dependence, and number of psychiatric medications in prior year were significantly associated with antidepressant selection. After adjusting for administrative data-based variables, sleep problems (RRR = 2.47) or marital issues (RRR = 2.64) identified in the charts were significantly associated with prescribing mirtazapine rather than sertraline; however, no other chart-based variables showed a significant association or an association with a large magnitude. Conclusion Some chart data-based variables were predictive of antidepressant selection, but we neither found many nor found them highly predictive of antidepressant selection in patients treated for depression. PMID:25943003

  11. A review of selection-based tests of abiotic surrogates for species representation.

    PubMed

    Beier, Paul; Sutcliffe, Patricia; Hjort, Jan; Faith, Daniel P; Pressey, Robert L; Albuquerque, Fabio

    2015-06-01

    Because conservation planners typically lack data on where species occur, environmental surrogates--including geophysical settings and climate types--have been used to prioritize sites within a planning area. We reviewed 622 evaluations of the effectiveness of abiotic surrogates in representing species in 19 study areas. Sites selected using abiotic surrogates represented more species than an equal number of randomly selected sites in 43% of tests (55% for plants) and on average improved on random selection of sites by about 8% (21% for plants). Environmental diversity (ED) (42% median improvement on random selection) and biotically informed clusters showed promising results and merit additional testing. We suggest 4 ways to improve performance of abiotic surrogates. First, analysts should consider a broad spectrum of candidate variables to define surrogates, including rarely used variables related to geographic separation, distance from coast, hydrology, and within-site abiotic diversity. Second, abiotic surrogates should be defined at fine thematic resolution. Third, sites (the landscape units prioritized within a planning area) should be small enough to ensure that surrogates reflect species' environments and to produce prioritizations that match the spatial resolution of conservation decisions. Fourth, if species inventories are available for some planning units, planners should define surrogates based on the abiotic variables that most influence species turnover in the planning area. Although species inventories increase the cost of using abiotic surrogates, a modest number of inventories could provide the data needed to select variables and evaluate surrogates. Additional tests of nonclimate abiotic surrogates are needed to evaluate the utility of conserving nature's stage as a strategy for conservation planning in the face of climate change. © 2015 Society for Conservation Biology.

  12. Controllability of social networks and the strategic use of random information.

    PubMed

    Cremonini, Marco; Casamassima, Francesca

    2017-01-01

    This work is aimed at studying realistic social control strategies for social networks based on the introduction of random information into the state of selected driver agents. Deliberately exposing selected agents to random information is a technique that has already been experimented with in recommender systems and search engines, and represents one of the few options for influencing the behavior of a social context that could be accepted as ethical, could be fully disclosed to members, and does not involve the use of force or of deception. Our research is based on a model of knowledge diffusion applied to a time-varying adaptive network and considers two well-known strategies for influencing social contexts: One is the selection of a few influencers whose actions are manipulated in order to drive the whole network to a certain behavior; the other, instead, drives the network behavior by acting on the state of a large subset of ordinary, scarcely influencing users. The two approaches have been studied in terms of network and diffusion effects. The network effect is analyzed through the changes induced on network average degree and clustering coefficient, while the diffusion effect is based on two ad hoc metrics which are defined to measure the degree of knowledge diffusion and skill level, as well as the polarization of agent interests. The results, obtained through simulations on synthetic networks, show rich dynamics and strong effects on the communication structure and on the distribution of knowledge and skills. These findings support our hypothesis that the strategic use of random information could represent a realistic approach to social network controllability, and that with both strategies, in principle, the control effect could be remarkable.

  13. A LARGE-SCALE CLUSTER RANDOMIZED TRIAL TO DETERMINE THE EFFECTS OF COMMUNITY-BASED DIETARY SODIUM REDUCTION – THE CHINA RURAL HEALTH INITIATIVE SODIUM REDUCTION STUDY

    PubMed Central

    Li, Nicole; Yan, Lijing L.; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce

    2013-01-01

    Background Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread and an effective sodium reduction program has potential to improve cardiovascular health. Design This study is a large-scale, cluster-randomized trial conducted in five Northern Chinese provinces. Two counties have been selected from each province and 12 townships in each county making a total of 120 clusters. Within each township one village has been selected for participation with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is dietary sodium intake level estimated from assays of 24-hour urine. Trial status The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. Discussion The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide. PMID:24176436

  14. Transtheoretical Model-Based Dietary Interventions in Primary Care: A Review of the Evidence in Diabetes

    ERIC Educational Resources Information Center

    Salmela, Sanna; Poskiparta, Marita; Kasila, Kirsti; Vahasarja, Kati; Vanhala, Mauno

    2009-01-01

    The objective of this study was to review the evidence concerning stage-based dietary interventions in primary care among persons with diabetes or an elevated diabetes risk. Search strategies were electronic databases and manual search. Selection criteria were randomized controlled studies with stage-based dietary intervention, conducted in…

  15. Review of Random Phase Encoding in Volume Holographic Storage

    PubMed Central

    Su, Wei-Chia; Sun, Ching-Cherng

    2012-01-01

    Random phase encoding is a unique technique for volume holograms that can be applied in various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss the diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to the multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of the phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.

  16. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of the modal parameters' probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. In this respect, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model. Indeed, the D-Vine model is not uniquely defined. Two strategies are proposed and compared. The first one is based on the context of the study whereas the second one is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in the identification of the probability distribution of random modal parameters and second in the estimation of the 99% quantiles of some transfer functions.

  17. Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Luan, X.

    2017-12-01

    Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm that separates the wave field based on the scale differences between the effective signal and the noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. We then use a threshold correlation filtering process to separate the valid signal and the random noise effectively. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. The implementation process: The EMD algorithm is used to decompose the seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category is the effective wave components of larger scale; the second is the noise components of smaller scale; the third is the IMF components containing random noise. The third kind of IMF component is then processed with the Hausdorff dimension algorithm, selecting an appropriate time window size, initial step and increment to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0. On this basis, according to the dimension difference between random noise and effective signal, we extract the sample points whose fractal dimension value is less than or equal to 1.05 from each IMF component to separate the residual noise. Using the IMF components after dimension filtering together with the effective-wave IMF components from the first selection for reconstruction, we obtain the de-noised result.
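
    A rough sketch of the filtering rule described above (decompose with EMD, estimate a fractal dimension per IMF, and discard components whose dimension falls in the noise range stated in the abstract) could look like the following; Higuchi's estimator is used as a stand-in for the Hausdorff dimension, the PyEMD package is assumed to be installed, and the 1.05 threshold is taken directly from the text.

```python
import numpy as np
from PyEMD import EMD   # EMD-signal package, assumed to be installed

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension, used here as a proxy for the Hausdorff dimension."""
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            lengths.append(np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k))
        lk.append(np.mean(lengths))
    # slope of log L(k) versus log(1/k) estimates the dimension
    return np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)[0]

def emd_fd_denoise(trace, dim_threshold=1.05):
    """Discard IMFs whose estimated dimension is in the noise range (<= 1.05 per the text)."""
    imfs = EMD()(trace)
    keep = [imf for imf in imfs if higuchi_fd(imf) > dim_threshold]
    return np.sum(keep, axis=0) if keep else np.zeros_like(trace)

# Hypothetical seismic trace: a low-frequency arrival plus random noise
t = np.linspace(0, 1, 2000)
trace = np.sin(2 * np.pi * 15 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)
denoised = emd_fd_denoise(trace)
```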

  18. An Active RBSE Framework to Generate Optimal Stimulus Sequences in a BCI for Spelling

    NASA Astrophysics Data System (ADS)

    Moghadamfalahi, Mohammad; Akcakaya, Murat; Nezamfar, Hooman; Sourati, Jamshid; Erdogmus, Deniz

    2017-10-01

    A class of brain computer interfaces (BCIs) employs noninvasive recordings of electroencephalography (EEG) signals to enable users with severe speech and motor impairments to interact with their environment and social network. For example, EEG-based BCIs for typing popularly utilize event related potentials (ERPs) for inference. Presentation paradigms in current ERP-based letter-by-letter typing BCIs typically query the user with an arbitrary subset of characters. However, typing accuracy and typing speed can potentially be enhanced with more informed subset selection and flash assignment. In this manuscript, we introduce the active recursive Bayesian state estimation (active-RBSE) framework for inference and sequence optimization. Prior to presentation in each iteration, rather than showing a subset of randomly selected characters, the developed framework optimally selects a subset based on a query function. Selected queries are made adaptively, specialized for users during each intent detection. Through a simulation-based study, we assess the effect of active-RBSE on the performance of a language-model-assisted typing BCI in terms of typing speed and accuracy. To provide a baseline for comparison, we also utilize standard presentation paradigms, namely the row-and-column matrix presentation paradigm and random rapid serial visual presentation paradigms. The results show that utilization of active-RBSE can enhance the online performance of the system, both in terms of typing accuracy and speed.

  19. The 1989 Georgia Survey of Adolescent Drug and Alcohol Use. Volume I: The Narrative Report for Survey Findings.

    ERIC Educational Resources Information Center

    Adams, Ronald D.; And Others

    The 1989 Georgia Survey of Adolescent Drug and Alcohol Use was conducted in 373 schools throughout Georgia. The stratified random sample was obtained from schools that participated in the 1987 survey (in which 93% of the school systems in Georgia participated) and were selected randomly from strata based on size of community and geographic…

  20. What affects response rates in primary healthcare-based programmes? An analysis of individual and unit-related factors associated with increased odds of non-response based on HCV screening in the general population in Poland

    PubMed Central

    Parda, Natalia; Stępień, Małgorzata; Zakrzewska, Karolina; Madaliński, Kazimierz; Kołakowska, Agnieszka; Godzik, Paulina; Rosińska, Magdalena

    2016-01-01

    Objectives Response rate in public health programmes may be a limiting factor. It is important to first consider their delivery and acceptability for the target. This study aimed at determining individual and unit-related factors associated with increased odds of non-response based on hepatitis C virus screening in primary healthcare. Design Primary healthcare units (PHCUs) were extracted from the Register of Health Care Centres. Each of the PHCUs was to enrol adult patients selected on a random basis. Data on the recruitment of PHCUs and patients were analysed. Multilevel modelling was applied to investigate individual and unit-related factors associated with non-response. Multilevel logistic model was developed with fixed effects and only a random intercept for the unit. Preliminary analysis included a random effect for unit and each of the individual or PHCU covariates separately. For each of the PHCU covariates, we applied a two-level model with individual covariates, unit random effect and a single fixed effect of this unit covariate. Setting This study was conducted in primary care units in selected provinces in Poland. Participants A total of 242 PHCUs and 24 480 adults were invited. Of them, 44 PHCUs and 20 939 patients agreed to participate. Both PHCUs and patients were randomly selected. Results Data on 44 PHCUs and 24 480 patients were analysed. PHCU-level factors and recruitment strategies were important predictors of non-response. Unit random effect was significant in all models. Larger and private units reported higher non-response rates, while for those with a history of running public health programmes the odds of non-response was lower. Proactive recruitment, more working hours devoted to the project and patient resulted in higher acceptance of the project. Higher number of personnel had no such effect. Conclusions Prior to the implementation of public health programme, several factors that could hinder its execution should be addressed. PMID:27927665

  1. What affects response rates in primary healthcare-based programmes? An analysis of individual and unit-related factors associated with increased odds of non-response based on HCV screening in the general population in Poland.

    PubMed

    Parda, Natalia; Stępień, Małgorzata; Zakrzewska, Karolina; Madaliński, Kazimierz; Kołakowska, Agnieszka; Godzik, Paulina; Rosińska, Magdalena

    2016-12-07

    Response rate in public health programmes may be a limiting factor. It is important to first consider their delivery and acceptability for the target. This study aimed at determining individual and unit-related factors associated with increased odds of non-response based on hepatitis C virus screening in primary healthcare. Primary healthcare units (PHCUs) were extracted from the Register of Health Care Centres. Each of the PHCUs was to enrol adult patients selected on a random basis. Data on the recruitment of PHCUs and patients were analysed. Multilevel modelling was applied to investigate individual and unit-related factors associated with non-response. Multilevel logistic model was developed with fixed effects and only a random intercept for the unit. Preliminary analysis included a random effect for unit and each of the individual or PHCU covariates separately. For each of the PHCU covariates, we applied a two-level model with individual covariates, unit random effect and a single fixed effect of this unit covariate. This study was conducted in primary care units in selected provinces in Poland. A total of 242 PHCUs and 24 480 adults were invited. Of them, 44 PHCUs and 20 939 patients agreed to participate. Both PHCUs and patients were randomly selected. Data on 44 PHCUs and 24 480 patients were analysed. PHCU-level factors and recruitment strategies were important predictors of non-response. Unit random effect was significant in all models. Larger and private units reported higher non-response rates, while for those with a history of running public health programmes the odds of non-response was lower. Proactive recruitment, more working hours devoted to the project and patient resulted in higher acceptance of the project. Higher number of personnel had no such effect. Prior to the implementation of public health programme, several factors that could hinder its execution should be addressed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Consumption of beef from cattle administered estrogenic growth promotants does not result in premature puberty and obesity using the swine model

    USDA-ARS?s Scientific Manuscript database

    The objective was to investigate the effects of ground beef from cattle administered commercial growth promotants on puberty attainment and body composition in female swine. Twenty-four gilts were selected based on strict selection criteria to reduce piglet variation. Treatments were randomly assign...

  3. Current Status of Diversity Initiatives in Selected Multinational Corporations. Diversity in the Workforce Series Report #3.

    ERIC Educational Resources Information Center

    Wentling, Rose Mary; Palma-Rivas, Nilda

    The current status of diversity initiatives in eight U.S.-based multinational corporations was examined through a process involving semistructured interviews of diversity managers and analysis of their annual reports for fiscal 1996 and related documents. The 8 corporations were randomly selected from the 30 multinational corporations in Illinois.…

  4. Random forest feature selection approach for image segmentation

    NASA Astrophysics Data System (ADS)

    Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin

    2017-03-01

    In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model by using many features without applying any selection criteria. A more reliable model can be built by using a framework that selects the variables that are important from the point of view of the classification and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
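
    The variable-importance-based reduction at the heart of the framework can be sketched in a few scikit-learn lines; the voxel-feature matrix below is a synthetic placeholder, and thresholding at the mean importance is one simple choice among several.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))                           # hypothetical voxel features
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)    # labels driven by a few features

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Keep only features whose importance exceeds the mean importance
importances = forest.feature_importances_
selected = np.where(importances > importances.mean())[0]
print(f"kept {selected.size} of {X.shape[1]} features")
X_reduced = X[:, selected]
```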

  5. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All of the quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of the triggers and the LIF, the accuracy of the QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been widely investigated in the literature, the challenge of training set construction has not been adequately investigated for the QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-Whole Data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for model performance.
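
    The balancing problem named above (how many 0s to pair with the available 1s) is commonly handled by drawing non-occurrence cells at a chosen ratio; a minimal sketch of such a proportional random sampling step over a synthetic inventory is given below, where the 1:1 ratio is an illustrative assumption rather than the paper's setting.

```python
import numpy as np

def proportional_random_sample(labels, ratio=1.0, seed=0):
    """Return the indices of all landslide cells (1s) plus a random draw of
    non-occurrence cells (0s) equal to ratio times the number of 1s."""
    rng = np.random.default_rng(seed)
    ones = np.flatnonzero(labels == 1)
    zeros = np.flatnonzero(labels == 0)
    n_zero = min(len(zeros), int(ratio * len(ones)))
    picked_zeros = rng.choice(zeros, size=n_zero, replace=False)
    return np.sort(np.concatenate([ones, picked_zeros]))

# Hypothetical inventory: 10,000 grid cells with 2% landslide occurrences
labels = (np.random.default_rng(1).random(10_000) < 0.02).astype(int)
train_idx = proportional_random_sample(labels, ratio=1.0)
print(labels[train_idx].mean())   # close to 0.5 after balancing
```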

  6. Determinants of selective reporting: A taxonomy based on content analysis of a random selection of the literature

    PubMed Central

    van den Bogert, Cornelis A.; van Soest-Poortvliet, Mirjam C.; Fazeli Farsani, Soulmaz; Otten, René H. J.; ter Riet, Gerben; Bouter, Lex M.

    2018-01-01

    Background Selective reporting is wasteful, leads to bias in the published record and harms the credibility of science. Studies on potential determinants of selective reporting currently lack a shared taxonomy and a causal framework. Objective To develop a taxonomy of determinants of selective reporting in science. Design Inductive qualitative content analysis of a random selection of the pertinent literature including empirical research and theoretical reflections. Methods Using search terms for bias and selection combined with terms for reporting and publication, we systematically searched the PubMed, Embase, PsycINFO and Web of Science databases up to January 8, 2015. Of the 918 articles identified, we screened a 25 percent random selection. From eligible articles, we extracted phrases that mentioned putative or possible determinants of selective reporting, which we used to create meaningful categories. We stopped when no new categories emerged in the most recently analyzed articles (saturation). Results Saturation was reached after analyzing 64 articles. We identified 497 putative determinants, of which 145 (29%) were supported by empirical findings. The determinants represented 12 categories (leaving 3% unspecified): focus on preferred findings (36%), poor or overly flexible research design (22%), high-risk area and its development (8%), dependence upon sponsors (8%), prejudice (7%), lack of resources including time (3%), doubts about reporting being worth the effort (3%), limitations in reporting and editorial practices (3%), academic publication system hurdles (3%), unfavorable geographical and regulatory environment (2%), relationship and collaboration issues (2%), and potential harm (0.4%). Conclusions We designed a taxonomy of putative determinants of selective reporting consisting of 12 categories. The taxonomy may help develop theory about causes of selection bias and guide policies to prevent selective reporting. PMID:29401492

  7. Statistical aspects of evolution under natural selection, with implications for the advantage of sexual reproduction.

    PubMed

    Crouch, Daniel J M

    2017-10-27

    The prevalence of sexual reproduction remains mysterious, as it poses clear evolutionary drawbacks compared to reproducing asexually. Several possible explanations exist, with one of the most likely being that finite population size causes linkage disequilibria to randomly generate and impede the progress of natural selection, and that these are eroded by recombination via sexual reproduction. Previous investigations have either analysed this phenomenon in detail for small numbers of loci, or performed population simulations for many loci. Here we present a quantitative genetic model for fitness, based on the Price Equation, in order to examine the theoretical consequences of randomly generated linkage disequilibria when there are many loci. In addition, most previous work has been concerned with the long-term consequences of deleterious linkage disequilibria for population fitness. The expected change in mean fitness between consecutive generations, a measure of short-term evolutionary success, is shown under random environmental influences to be related to the autocovariance in mean fitness between the generations, capturing the effects of stochastic forces such as genetic drift. Interaction between genetic drift and natural selection, due to randomly generated linkage disequilibria, is demonstrated to be one possible source of mean fitness autocovariance. This suggests a possible role for sexual reproduction in reducing the negative effects of genetic drift, thereby improving the short-term efficacy of natural selection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. An optical authentication system based on imaging of excitation-selected lanthanide luminescence.

    PubMed

    Carro-Temboury, Miguel R; Arppe, Riikka; Vosch, Tom; Sørensen, Thomas Just

    2018-01-01

    Secure data encryption relies heavily on one-way functions, and copy protection relies on features that are difficult to reproduce. We present an optical authentication system based on lanthanide luminescence from physical one-way functions or physical unclonable functions (PUFs). They cannot be reproduced and thus enable unbreakable encryption. Further, PUFs will prevent counterfeiting if tags with unique PUFs are grafted onto products. We have developed an authentication system that comprises a hardware reader, image analysis, and authentication software and physical keys that we demonstrate as an anticounterfeiting system. The physical keys are PUFs made from random patterns of taggants in polymer films on glass that can be imaged following selected excitation of particular lanthanide(III) ions doped into the individual taggants. This form of excitation-selected imaging ensures that by using at least two lanthanide(III) ion dopants, the random patterns cannot be copied, because the excitation selection will fail when using any other emitter. With the developed reader and software, the random patterns are read and digitized, which allows a digital pattern to be stored. This digital pattern or digital key can be used to authenticate the physical key in anticounterfeiting or to encrypt any message. The PUF key was produced with a staggering nominal encoding capacity of 7^3600. Although the encoding capacity of the realized authentication system reduces to 6 × 10^104, it is more than sufficient to completely preclude counterfeiting of products.

  9. Hierarchy and extremes in selections from pools of randomized proteins

    PubMed Central

    Boyer, Sébastien; Biswas, Dipanwita; Kumar Soshee, Ananda; Scaramozzino, Natale; Nizak, Clément; Rivoire, Olivier

    2016-01-01

    Variation and selection are the core principles of Darwinian evolution, but quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display, and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings. First, libraries with the same sequence diversity but built around different “frameworks” typically have vastly different responses; second, the distribution of responses of the best binders in a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes the latter finding. Our results have implications for designing synthetic protein libraries, estimating the density of functional biomolecules in sequence space, characterizing diversity in natural populations, and experimentally investigating evolvability (i.e., the potential for future evolution). PMID:26969726

  10. Hierarchy and extremes in selections from pools of randomized proteins.

    PubMed

    Boyer, Sébastien; Biswas, Dipanwita; Kumar Soshee, Ananda; Scaramozzino, Natale; Nizak, Clément; Rivoire, Olivier

    2016-03-29

    Variation and selection are the core principles of Darwinian evolution, but quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display, and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings. First, libraries with the same sequence diversity but built around different "frameworks" typically have vastly different responses; second, the distribution of responses of the best binders in a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes the latter finding. Our results have implications for designing synthetic protein libraries, estimating the density of functional biomolecules in sequence space, characterizing diversity in natural populations, and experimentally investigating evolvability (i.e., the potential for future evolution).
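
    A toy illustration of the extreme-value view of the best binders; the lognormal response model and library sizes are assumptions for the sketch, not the measured libraries:

```python
# Illustrative only: extreme-value behaviour of the best responders in a random library.
# The lognormal "response" model and the library sizes are assumptions, not data
# from the cited experiments.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
n_libraries, library_size = 500, 10_000
responses = rng.lognormal(mean=0.0, sigma=1.0, size=(n_libraries, library_size))
best = responses.max(axis=1)                  # best binder per simulated library

shape, loc, scale = genextreme.fit(best)      # GEV fit to the library maxima
print(shape, loc, scale)
```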

  11. Random and non-random mating populations: Evolutionary dynamics in meiotic drive.

    PubMed

    Sarkar, Bijan

    2016-01-01

    Game-theoretic tools are utilized to analyze a one-locus continuous selection model of sex-specific meiotic drive by considering the nonequivalence of the viabilities of reciprocal heterozygotes that might be noticed at an imprinted locus. The model draws attention to the role of viability selection of different types in examining the stability of the polymorphic equilibrium. A bridge between population genetics and evolutionary game theory is built by applying the concept of the Fundamental Theorem of Natural Selection. In addition to pointing out the influences of male and female segregation ratios on selection, the configuration structure reveals some notable results, e.g., that Hardy-Weinberg frequencies hold in replicator dynamics, that evolution is faster at maximized variance in fitness, that mixed Evolutionarily Stable Strategies (ESS) exist in asymmetric games, and that evolution tends to follow not only a 1:1 sex ratio but also a 1:1 ratio of different alleles at a particular gene locus. Through construction of replicator dynamics in the group selection framework, our selection model introduces a redefined basis for game theory that incorporates non-random mating, where a mating parameter associated with population structure depends on the social structure. The model also shows that the number of polymorphic equilibria depends on the algebraic expression of the population structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. The impact of innovation intermediary on knowledge transfer

    NASA Astrophysics Data System (ADS)

    Lin, Min; Wei, Jun

    2018-07-01

    Many firms have opened up their innovation process and actively transfer knowledge with external partners in the market for technology. To reduce some of the market inefficiencies, more and more firms collaborate with innovation intermediaries. In light of the increasing importance of intermediaries in the context of open innovation, in this paper we systematically investigate the effect of an innovation intermediary on knowledge transfer and the innovation process in networked systems. We find that the existence of an innovation intermediary is conducive to knowledge diffusion and facilitates knowledge growth at the system level. Interestingly, the scale of the innovation intermediary has little effect on the growth of knowledge. We further investigate the selection of intermediary members by comparing four selection strategies: random selection, initial-knowledge-level-based selection, absorptive-capability-based selection, and innovative-ability-based selection. It is found that the selection strategy based on innovative ability outperforms all the other strategies in promoting system knowledge growth. Our study provides a theoretical understanding of the impact of innovation intermediaries on knowledge transfer and sheds light on the design and selection of innovation intermediaries in open innovation.

  13. A random rule model of surface growth

    NASA Astrophysics Data System (ADS)

    Mello, Bernardo A.

    2015-02-01

    Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. Randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
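
    A toy sketch of the general idea of sequential site scanning with a randomly chosen rule at each site; the two rules used here are illustrative stand-ins, not the etching-model rules of the paper:

```python
# Toy sketch of the idea only: scan substrate sites in order, but at each site pick
# the update rule at random. The two rules (deposit on the site vs. relax to the
# lower neighbour) are illustrative, not the cited etching model.
import numpy as np

rng = np.random.default_rng(0)
L, sweeps = 256, 2000
h = np.zeros(L, dtype=int)

for _ in range(sweeps):
    for i in range(L):                              # sequential, not random, site selection
        if rng.random() < 0.5:
            h[i] += 1                               # rule A: deposit on the scanned site
        else:
            j = min((i - 1) % L, (i + 1) % L, key=lambda k: h[k])
            h[j if h[j] < h[i] else i] += 1         # rule B: particle relaxes to a lower neighbour

print(h.mean(), h.std())                            # mean height and interface width
```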

  14. [Evidence-based obstetric conduct. Severe preeclampsia: aggressive or expectant management?].

    PubMed

    Briceño Pérez, Carlos; Briceño Sanabria, Liliana

    2007-02-01

    In severe preeclampsia, delivery has traditionally been carried out immediately, without regard to fetal condition. For some decades there has been agreement on hospitalization, but not on whether management should be expectant or aggressive. These two management approaches are reviewed here from an evidence-based medicine perspective. Fifteen non-randomized, non-controlled trials in the English literature and 4 in the Latin American literature report pregnancy prolongation of 10-14 days without increased maternal morbidity under conservative management, but they were criticized for non-random patient selection and lack of controls. Two randomized controlled trials showed improved neonatal results, with no change in maternal outcomes, under expectant management. One systematic review of these two trials concluded that there are insufficient data for any reliable recommendation and proposed that larger trials are needed. In the United States, the National High Blood Pressure Education Program Working Group considers expectant management possible only in a select group of women between 23 and 32 weeks. The American College of Obstetricians and Gynecologists recommends this management in a tertiary care setting or in consultation with an obstetrician-gynecologist with training in high-risk pregnancies. The current proposal for expectant management of severe preeclampsia remote from term is summarized.

  15. Graphene based widely-tunable and singly-polarized pulse generation with random fiber lasers

    PubMed Central

    Yao, B. C.; Rao, Y. J.; Wang, Z. N.; Wu, Y.; Zhou, J. H.; Wu, H.; Fan, M. Q.; Cao, X. L.; Zhang, W. L.; Chen, Y. F.; Li, Y. R.; Churkin, D.; Turitsyn, S.; Wong, C. W.

    2015-01-01

    Pulse generation often requires a stabilized cavity and its corresponding mode structure for initial phase-locking. Contrastingly, modeless cavity-free random lasers provide new possibilities for high quantum efficiency lasing that could potentially be widely tunable spectrally and temporally. Pulse generation in random lasers, however, has remained elusive since the discovery of modeless gain lasing. Here we report coherent pulse generation with modeless random lasers based on the unique polarization selectivity and broadband saturable absorption of monolayer graphene. Simultaneous temporal compression of cavity-free pulses is observed with such a polarization modulation, along with a broadly-tunable pulsewidth across two orders of magnitude down to 900 ps, a broadly-tunable repetition rate across three orders of magnitude up to 3 MHz, and a singly-polarized pulse train at 41 dB extinction ratio, about an order of magnitude larger than in conventional pulsed fiber lasers. Moreover, our graphene-based pulse formation also demonstrates robust pulse-to-pulse stability and wide-wavelength operation due to the cavity-less feature. Such a graphene-based architecture not only provides a tunable pulsed random laser for fiber-optic sensing, speckle-free imaging, and laser-material processing, but also a new way for non-random CW fiber lasers to generate widely tunable and singly-polarized pulses. PMID:26687730

  16. Graphene based widely-tunable and singly-polarized pulse generation with random fiber lasers.

    PubMed

    Yao, B C; Rao, Y J; Wang, Z N; Wu, Y; Zhou, J H; Wu, H; Fan, M Q; Cao, X L; Zhang, W L; Chen, Y F; Li, Y R; Churkin, D; Turitsyn, S; Wong, C W

    2015-12-21

    Pulse generation often requires a stabilized cavity and its corresponding mode structure for initial phase-locking. Contrastingly, modeless cavity-free random lasers provide new possibilities for high quantum efficiency lasing that could potentially be widely tunable spectrally and temporally. Pulse generation in random lasers, however, has remained elusive since the discovery of modeless gain lasing. Here we report coherent pulse generation with modeless random lasers based on the unique polarization selectivity and broadband saturable absorption of monolayer graphene. Simultaneous temporal compression of cavity-free pulses is observed with such a polarization modulation, along with a broadly-tunable pulsewidth across two orders of magnitude down to 900 ps, a broadly-tunable repetition rate across three orders of magnitude up to 3 MHz, and a singly-polarized pulse train at 41 dB extinction ratio, about an order of magnitude larger than in conventional pulsed fiber lasers. Moreover, our graphene-based pulse formation also demonstrates robust pulse-to-pulse stability and wide-wavelength operation due to the cavity-less feature. Such a graphene-based architecture not only provides a tunable pulsed random laser for fiber-optic sensing, speckle-free imaging, and laser-material processing, but also a new way for non-random CW fiber lasers to generate widely tunable and singly-polarized pulses.

  17. Evidence-based outcomes for mesh-based surgery for pelvic organ prolapse.

    PubMed

    Mettu, Jayadev R; Colaco, Marc; Badlani, Gopal H

    2014-07-01

    In light of all the recent controversy regarding the use of synthetic mesh for pelvic organ prolapse, we did a retrospective review of the evidence-based outcomes and complications for its use. A total of 18 of the most recent studies in the last 5 years were selected. Studies selected were prospective randomized or quasi-randomized controlled trials that included surgical operations for pelvic organ prolapse. Additionally, Cochrane reviews and meta-analyses of outcomes and complications were also analyzed. In terms of outcomes, the definition of successful surgery is currently being debated. Synthetic mesh provides superior anatomical and subjective cure rates compared with native tissue repair. Success rates varied greatly depending on the nature of prolapse and surgical approach. Furthermore, recurrence rates for mesh-based surgery are significantly lower than those for native tissue repair. The main complication unique to mesh is exposure, reported in a mean of 11.4% of patients, with 6.8% of patients requiring partial surgical excision of mesh. Mesh significantly improves anatomical outcomes with sacrocolpopexy and vaginal repair. Mesh does create this unique complication, which can be reduced with training and proper patient selection. Further development of better materials is vital, rather than reverting to tissue-based repair. Ultimately, the decision to use mesh should be based upon a patient's personal goals and preferences after an informed conversation with her physician.

  18. Identifying taxonomic and functional surrogates for spring biodiversity conservation.

    PubMed

    Jyväsjärvi, Jussi; Virtanen, Risto; Ilmonen, Jari; Paasivirta, Lauri; Muotka, Timo

    2018-02-27

    Surrogate approaches are widely used to estimate overall taxonomic diversity for conservation planning. Surrogate taxa are frequently selected based on rarity or charisma, whereas selection through statistical modeling has been applied rarely. We used boosted-regression-tree models (BRT) fitted to biological data from 165 springs to identify bryophyte and invertebrate surrogates for taxonomic and functional diversity of boreal springs. We focused on these 2 groups because they are well known and abundant in most boreal springs. The best indicators of taxonomic versus functional diversity differed. The bryophyte Bryum weigelii and the chironomid larva Paratrichocladius skirwithensis best indicated taxonomic diversity, whereas the isopod Asellus aquaticus and the chironomid Macropelopia spp. were the best surrogates of functional diversity. In a scoring algorithm for priority-site selection, taxonomic surrogates performed only slightly better than random selection for all spring-dwelling taxa, but they were very effective in representing spring specialists, providing a distinct improvement over random solutions. However, the surrogates for taxonomic diversity represented functional diversity poorly and vice versa. When combined with cross-taxon complementarity analyses, surrogate selection based on statistical modeling provides a promising approach for identifying groundwater-dependent ecosystems of special conservation value, a key requirement of the EU Water Framework Directive. © 2018 Society for Conservation Biology.

  19. Pseudo-random number generator for the Sigma 5 computer

    NASA Technical Reports Server (NTRS)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
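
    A minimal sketch of such a prime-modulus/primitive-root (Lehmer) generator; the constants are the classic Park-Miller pair for a 31-bit word, not necessarily the Sigma 5 values:

```python
# Sketch of a Lehmer / linear-congruential generator built from a prime modulus and a
# primitive root. The constants are the well-known Park-Miller pair (m = 2**31 - 1,
# a = 16807), used here only as an example; the Sigma 5 constants are not given above.
def lehmer(seed: int, m: int = 2**31 - 1, a: int = 16807):
    """Yield pseudo-random integers in [1, m-1]; divide by m for uniforms in (0, 1)."""
    x = seed % m
    if x == 0:
        x = 1                   # the state must be nonzero modulo m
    while True:
        x = (a * x) % m
        yield x

gen = lehmer(12345)
uniforms = [next(gen) / (2**31 - 1) for _ in range(5)]
print(uniforms)
```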

  20. Power of an Adaptive Trial Design for Endovascular Stroke Studies: Simulations Using IMS (Interventional Management of Stroke) III Data.

    PubMed

    Lansberg, Maarten G; Bhat, Ninad S; Yeatts, Sharon D; Palesch, Yuko Y; Broderick, Joseph P; Albers, Gregory W; Lai, Tze L; Lavori, Philip W

    2016-12-01

    Adaptive trial designs that allow enrichment of the study population through subgroup selection can increase the chance of a positive trial when there is a differential treatment effect among patient subgroups. The goal of this study is to illustrate the potential benefit of adaptive subgroup selection in endovascular stroke studies. We simulated the performance of a trial design with adaptive subgroup selection and compared it with that of a traditional design. Outcome data were based on 90-day modified Rankin Scale scores, observed in IMS III (Interventional Management of Stroke III), among patients with a vessel occlusion on baseline computed tomographic angiography (n=382). Patients were categorized based on 2 methods: (1) according to location of the arterial occlusive lesion and onset-to-randomization time and (2) according to onset-to-randomization time alone. The power to demonstrate a treatment benefit was based on 10 000 trial simulations for each design. The treatment effect was relatively homogeneous across categories when patients were categorized based on arterial occlusive lesion and time. Consequently, the adaptive design had similar power (47%) compared with the fixed trial design (45%). There was a differential treatment effect when patients were categorized based on time alone, resulting in greater power with the adaptive design (82%) than with the fixed design (57%). These simulations, based on real-world patient data, indicate that adaptive subgroup selection has merit in endovascular stroke trials as it substantially increases power when the treatment effect differs among subgroups in a predicted pattern. © 2016 American Heart Association, Inc.

  1. The Effectiveness of Educational Interventions to Enhance the Adoption of Fee-Based Arsenic Testing in Bangladesh: A Cluster Randomized Controlled Trial

    PubMed Central

    George, Christine Marie; Inauen, Jennifer; Rahman, Sheikh Masudur; Zheng, Yan

    2013-01-01

    Arsenic (As) testing could help 22 million people, using drinking water sources that exceed the Bangladesh As standard, to identify safe sources. A cluster randomized controlled trial was conducted to evaluate the effectiveness of household education and local media in increasing demand for fee-based As testing. Randomly selected households (N = 452) were divided into three interventions implemented by community workers: 1) fee-based As testing with household education (HE); 2) fee-based As testing with household education and a local media campaign (HELM); and 3) fee-based As testing alone (Control). The fee for the As test was US$ 0.28, higher than the cost of the test (US$ 0.16). Of households with untested wells, 93% in both intervention groups HE and HELM purchased an As test, compared with only 53% in the control group. In conclusion, fee-based As testing with household education is effective in increasing demand for As testing in rural Bangladesh. PMID:23716409

  2. The effectiveness of educational interventions to enhance the adoption of fee-based arsenic testing in Bangladesh: a cluster randomized controlled trial.

    PubMed

    George, Christine Marie; Inauen, Jennifer; Rahman, Sheikh Masudur; Zheng, Yan

    2013-07-01

    Arsenic (As) testing could help 22 million people, using drinking water sources that exceed the Bangladesh As standard, to identify safe sources. A cluster randomized controlled trial was conducted to evaluate the effectiveness of household education and local media in increasing demand for fee-based As testing. Randomly selected households (N = 452) were divided into three interventions implemented by community workers: 1) fee-based As testing with household education (HE); 2) fee-based As testing with household education and a local media campaign (HELM); and 3) fee-based As testing alone (Control). The fee for the As test was US$ 0.28, higher than the cost of the test (US$ 0.16). Of households with untested wells, 93% in both intervention groups HE and HELM purchased an As test, compared with only 53% in the control group. In conclusion, fee-based As testing with household education is effective in increasing demand for As testing in rural Bangladesh.

  3. The Effect of Different Modes of English Captioning on EFL Learners' General Listening Comprehension: Full Text vs. Keyword Captions

    ERIC Educational Resources Information Center

    Behroozizad, Sorayya; Majidi, Sudabeh

    2015-01-01

    This study investigated the effect of different modes of English captioning on EFL learners' general listening comprehension. To this end, forty-five intermediate-level learners were selected based on their scores on a standardized English proficiency test (PET) to carry out the study. Then, the selected participants were randomly assigned into…

  4. Valuing Equal Protection in Aviation Security Screening.

    PubMed

    Nguyen, Kenneth D; Rosoff, Heather; John, Richard S

    2017-12-01

    The growing number of anti-terrorism policies has elevated public concerns about discrimination. Within the context of airport security screening, the current study examines how American travelers value the principle of equal protection by quantifying the "equity premium" that they are willing to sacrifice to avoid screening procedures that result in differential treatments. In addition, we applied the notion of procedural justice to explore the effect of alternative selective screening procedures on the value of equal protection. Two-hundred and twenty-two respondents were randomly assigned to one of three selective screening procedures: (1) randomly, (2) using behavioral indicators, or (3) based on demographic characteristics. They were asked to choose between airlines using either an equal or a discriminatory screening procedure. While the former requires all passengers to be screened in the same manner, the latter mandates all passengers undergo a quick primary screening and, in addition, some passengers are selected for a secondary screening based on a predetermined selection criterion. Equity premiums were quantified in terms of monetary cost, wait time, convenience, and safety compromise. Results show that equity premiums varied greatly across respondents, with many indicating little willingness to sacrifice to avoid inequitable screening, and a smaller minority willing to sacrifice anything to avoid the discriminatory screening. The selective screening manipulation was effective in that equity premiums were greater under selection by demographic characteristics compared to the other two procedures. © 2017 Society for Risk Analysis.

  5. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  6. Combined rule extraction and feature elimination in supervised classification.

    PubMed

    Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E

    2012-09-01

    There are a vast number of biology related research problems involving a combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus it will be beneficial to have a good algorithm to simultaneously extract rules and select features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
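
    A rough approximation of the rule-extraction-plus-1-norm-selection idea, encoding samples by the forest leaves (rules) they fall into and letting an L1-penalized logistic regression keep a few of them; this is an illustrative stand-in, not the CRF algorithm itself:

```python
# Sketch only: leaf-based rule encoding followed by 1-norm selection. The data,
# forest size, and penalty strength are assumptions, not the paper's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=1000, n_features=50, n_informative=8, random_state=0)

forest = RandomForestClassifier(n_estimators=20, max_depth=3, random_state=0).fit(X, y)
leaves = forest.apply(X)                              # (n_samples, n_trees) leaf indices
rules = OneHotEncoder(handle_unknown="ignore").fit_transform(leaves)

l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(rules, y)
print("rules kept:", np.count_nonzero(l1.coef_))      # nonzero coefficients = retained rules
```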

  7. Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test

    PubMed Central

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models randomly, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives the following single-model selection guidance: no more than five suitable single models should be selected from the many alternative single models for a given forecasting target, which increases accuracy and stability. PMID:24892061

  8. Selecting single model in combination forecasting based on cointegration test and encompassing test.

    PubMed

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models randomly, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives the following single-model selection guidance: no more than five suitable single models should be selected from the many alternative single models for a given forecasting target, which increases accuracy and stability.
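
    A hedged sketch of screening candidate single models with a cointegration test; the series are simulated, the 5% cut-off is an assumption, and the encompassing-test step from the paper is omitted:

```python
# Illustrative screening: keep a candidate model only if its forecast series is
# cointegrated with the observed series. The data and threshold are made up.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
actual = np.cumsum(rng.normal(size=300))                     # nonstationary target series
forecasts = {
    "model_A": actual + rng.normal(scale=0.5, size=300),     # tracks the target
    "model_B": np.cumsum(rng.normal(size=300)),              # unrelated random walk
}

for name, f in forecasts.items():
    _, pvalue, _ = coint(actual, f)
    print(name, "keep" if pvalue < 0.05 else "drop", round(pvalue, 3))
```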

  9. wayGoo recommender system: personalized recommendations for events scheduling, based on static and real-time information

    NASA Astrophysics Data System (ADS)

    Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.

    2016-05-01

    wayGoo is a fully functional application whose main functionalities include content geolocation, event scheduling, and indoor navigation. However, significant information about events does not reach users' attention, either because of the volume of this information or because some of it comes from real-time data sources. The purpose of this work is to facilitate event management operations by prioritizing the presented events based on users' interests, using both static and real-time data. Through the wayGoo interface, users select conceptual topics that interest them. These topics constitute a browsing-behavior vector which is used to learn users' interests implicitly, without being intrusive. The system then estimates user preferences and returns an event list sorted from the most preferred to the least. User preferences are modeled via a Naïve Bayesian Network which consists of: a) the 'decision' random variable, corresponding to users' decision on attending an event; b) the 'distance' random variable, modeled by a linear regression that estimates the probability that the distance between a user and each event destination is not discouraging; c) the 'seat availability' random variable, modeled by a linear regression that estimates the probability that the seat availability is encouraging; and d) the 'relevance' random variable, modeled by clustering-based collaborative filtering, which determines the relevance of each event to users' interests. Finally, experimental results show that the proposed system contributes substantially to assisting users in browsing and selecting events to attend.
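
    A very small sketch of the naive-Bayes style combination described above; the factor probabilities, prior, and event names are illustrative assumptions, not the wayGoo implementation:

```python
# Sketch: rank events by multiplying per-factor probabilities under a conditional
# independence (naive Bayes) assumption. All numbers below are made up.
from dataclasses import dataclass

@dataclass
class Event:
    p_distance_ok: float      # P(distance is not discouraging | attend)
    p_seats_ok: float         # P(seat availability is encouraging | attend)
    p_relevant: float         # P(event is relevant to the user's interests | attend)

def score(e: Event, prior_attend: float = 0.3) -> float:
    """Unnormalized naive-Bayes score used only for ranking."""
    return prior_attend * e.p_distance_ok * e.p_seats_ok * e.p_relevant

events = {"concert": Event(0.9, 0.7, 0.8), "lecture": Event(0.4, 0.9, 0.3)}
ranking = sorted(events, key=lambda name: score(events[name]), reverse=True)
print(ranking)                # events from most to least preferred
```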

  10. Accelerating IMRT optimization by voxel sampling

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.

    2007-12-01

    This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
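
    A toy sketch of the voxel-sampling idea, estimating each descent step's gradient from a random voxel subset; the quadratic objective, matrix sizes, and step size are illustrative assumptions, not the paper's planning model:

```python
# Sketch: stochastic descent on a least-squares "dose" objective, where each step
# uses only a random sample of voxels. All dimensions and constants are made up.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 5000, 200
D = rng.random((n_voxels, n_beamlets)) * 0.01        # toy dose-influence matrix
target = np.ones(n_voxels)                           # prescribed dose per voxel
x = np.zeros(n_beamlets)                             # beamlet intensities

for step in range(500):
    sample = rng.choice(n_voxels, size=250, replace=False)   # sampled voxels only
    residual = D[sample] @ x - target[sample]
    grad = D[sample].T @ residual / sample.size
    x = np.maximum(x - 5.0 * grad, 0.0)              # projected step (nonnegative intensities)

print(np.mean((D @ x - target) ** 2))                # full objective, for checking only
```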

  11. Single-mode SOA-based 1kHz-linewidth dual-wavelength random fiber laser.

    PubMed

    Xu, Yanping; Zhang, Liang; Chen, Liang; Bao, Xiaoyi

    2017-07-10

    Narrow-linewidth multi-wavelength fiber lasers are of significant interest for fiber-optic sensors, spectroscopy, optical communications, and microwave generation. A novel narrow-linewidth dual-wavelength random fiber laser with single-mode operation, based on semiconductor optical amplifier (SOA) gain, is achieved in this work for the first time, to the best of our knowledge. A simplified theoretical model is established to characterize this kind of random fiber laser. The inhomogeneous gain in the SOA significantly mitigates the mode competition and alleviates the laser instability that are frequently encountered in multi-wavelength fiber lasers with Erbium-doped fiber gain. The enhanced random distributed feedback from a 5 km non-uniform fiber provides coherent feedback, acting as a mode-selection element to ensure single-mode operation with a narrow linewidth of ~1 kHz. The laser noise is also comprehensively investigated, showing that the proposed random fiber laser suppresses both intensity and frequency noise.

  12. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence.2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability.3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed.4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects.5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection.6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  13. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
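
    A small simulation in the spirit of points 5(a)-(b) above, comparing a pooled fit with per-individual fits when individuals differ in selection strength and sample size; all parameter values are illustrative assumptions:

```python
# Sketch: simulate animals with individual selection coefficients and unbalanced
# sample sizes, then compare a pooled logistic fit with the mean of individual fits.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
true_slopes = rng.normal(1.0, 0.8, size=20)        # individual selection coefficients
sample_sizes = rng.integers(30, 400, size=20)      # unbalanced telemetry samples per animal

X_parts, y_parts, individual_slopes = [], [], []
for slope, n in zip(true_slopes, sample_sizes):
    x = rng.normal(size=(n, 1))                    # one habitat covariate
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-slope * x[:, 0])))
    X_parts.append(x)
    y_parts.append(y)
    individual_slopes.append(LogisticRegression().fit(x, y).coef_[0, 0])

pooled = LogisticRegression().fit(np.vstack(X_parts), np.concatenate(y_parts)).coef_[0, 0]
print("pooled slope:", round(pooled, 2),
      "| mean of individual slopes:", round(float(np.mean(individual_slopes)), 2))
```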

  14. Population-Based Study of Trachoma in Guatemala.

    PubMed

    Silva, Juan Carlos; Diaz, Marco Antonio; Maul, Eugenio; Munoz, Beatriz E; West, Sheila K

    2015-01-01

    A prevalence survey for active trachoma in children aged under 10 years and trichiasis in women aged 40 years and older was carried out in four districts in the Sololá region in Guatemala, which is suspected of still having a trachoma problem. Population-based surveys were undertaken in three districts, within 15 randomly selected communities in each district. In addition, in a fourth district that borders the third district chosen, we surveyed the small northern sub-district by randomly selecting three communities. In each community, 100 children aged under 10 years were randomly selected, along with all females over 40 years. Five survey teams were trained and standardized. Trachoma was graded using the World Health Organization simplified grading scheme, and ocular swabs were taken in cases of clinical follicular or inflammatory trachoma. Prevalence estimates were calculated at district and sub-district level. Trachoma rates at district level varied from 0-5.1%. There were only two sub-districts where active trachoma approached 10% (Nahualá Costa, 8.1%, and Santa Catarina Costa, 7.3%). Trichiasis rates in females aged 40 years and older varied from 0-3%. Trachoma was likely a problem in the past but is disappearing in the Sololá region in Guatemala. Health leadership may consider further mapping of villages around the areas with an especially high rate of trachoma and infection, and instituting trichiasis surgery and active trachoma intervention where needed.

  15. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    NASA Astrophysics Data System (ADS)

    Durner, Maximilian; Márton, Zoltán.; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  16. The Impact of a School-Based Hygiene, Water Quality and Sanitation Intervention on Soil-Transmitted Helminth Reinfection: A Cluster-Randomized Trial

    PubMed Central

    Freeman, Matthew C.; Clasen, Thomas; Brooker, Simon J.; Akoko, Daniel O.; Rheingans, Richard

    2013-01-01

    We conducted a cluster-randomized trial to assess the impact of a school-based water treatment, hygiene, and sanitation program on reducing infection with soil-transmitted helminths (STHs) after school-based deworming. We assessed infection with STHs at baseline and then at two follow-up rounds 8 and 10 months after deworming. Forty government primary schools in Nyanza Province, Kenya were randomly selected and assigned to intervention or control arms. The intervention reduced reinfection prevalence (odds ratio [OR] 0.56, 95% confidence interval [CI] 0.31–1.00) and egg count (rate ratio [RR] 0.34, CI 0.15–0.75) of Ascaris lumbricoides. We found no evidence of significant intervention effects on the overall prevalence and intensity of Trichuris trichiura, hookworm, or Schistosoma mansoni reinfection. Provision of school-based sanitation, water quality, and hygiene improvements may reduce reinfection of STHs after school-based deworming, but the magnitude of the effects may be sex- and helminth species-specific. PMID:24019429

  17. Response rate differences between web and alternative data collection methods for public health research: a systematic review of the literature.

    PubMed

    Blumenberg, Cauane; Barros, Aluísio J D

    2018-07-01

    To systematically review the literature and compare response rates (RRs) of web surveys with alternative data collection methods in the context of epidemiologic and public health studies. We reviewed the literature using the PubMed, LILACS, SciELO, WebSM, and Google Scholar databases. We selected epidemiologic and public health studies that considered the general population and used two parallel data collection methods, one of them web-based. RR differences were analyzed using the two-sample test of proportions and pooled using random effects. We investigated agreement using Bland-Altman analysis and correlation using Pearson's coefficient. We selected 19 studies (nine randomized trials). The RR of web-based data collection was 12.9 percentage points (p.p.) lower (95% CI = -19.0, -6.8) than that of the alternative methods, and 15.7 p.p. lower (95% CI = -24.2, -7.3) considering only randomized trials. Monetary incentives did not reduce the RR differences. A strong positive correlation (r = 0.83) between the RRs was observed. Web-based data collection presents lower RRs compared with alternative methods. However, it is not recommended to interpret this as meta-analytical evidence, given the high heterogeneity of the studies.
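
    A sketch of the random-effects (DerSimonian-Laird) pooling of response-rate differences described above; the study-level numbers are made up for illustration:

```python
# Illustrative DerSimonian-Laird pooling of per-study RR differences (web minus
# alternative). The differences and variances below are hypothetical.
import numpy as np

diffs = np.array([-0.20, -0.05, -0.15, -0.10, -0.02])        # per-study RR differences
variances = np.array([0.002, 0.004, 0.003, 0.001, 0.005])    # per-study sampling variances

w_fixed = 1 / variances
q = np.sum(w_fixed * (diffs - np.average(diffs, weights=w_fixed)) ** 2)
tau2 = max(0.0, (q - (len(diffs) - 1)) /
           (w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()))   # between-study variance

w_random = 1 / (variances + tau2)
pooled = np.average(diffs, weights=w_random)
se = np.sqrt(1 / w_random.sum())
print(f"pooled difference {pooled:.3f} "
      f"(95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```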

  18. A randomized evaluation of a computer-based physician's workstation: design considerations and baseline results.

    PubMed Central

    Rotman, B. L.; Sullivan, A. N.; McDonald, T.; DeSmedt, P.; Goodnature, D.; Higgins, M.; Suermondt, H. J.; Young, C. Y.; Owens, D. K.

    1995-01-01

    We are performing a randomized, controlled trial of a Physician's Workstation (PWS), an ambulatory care information system, developed for use in the General Medical Clinic (GMC) of the Palo Alto VA. Goals for the project include selecting appropriate outcome variables and developing a statistically powerful experimental design with a limited number of subjects. As PWS provides real-time drug-ordering advice, we retrospectively examined drug costs and drug-drug interactions in order to select outcome variables sensitive to our short-term intervention as well as to estimate the statistical efficiency of alternative design possibilities. Drug cost data revealed that the mean daily cost per physician per patient was 99.3 cents +/- 13.4 cents, with a range from $0.77 to $1.37. The rate of major interactions per prescription for each physician was 2.9% +/- 1%, with a range from 1.5% to 4.8%. Based on these baseline analyses, we selected a two-period parallel design for the evaluation, which maximized statistical power while minimizing sources of bias. PMID:8563376

  19. A new mosaic method for three-dimensional surface

    NASA Astrophysics Data System (ADS)

    Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun

    2011-08-01

    Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the problem of mosaicking local unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds sequentially through random sampling with an additional shape constraint, point-cloud data normalization, absolute orientation, data denormalization, inlier counting, etc. After N random sample trials the largest consensus set is selected, and the model is finally re-estimated using all the points in the selected subset. The minimal subset is composed of three non-collinear points which form a triangle; the shape of the triangle is considered in random sample selection to make the selection reasonable. A new coordinate system transformation algorithm presented in this paper is used to avoid singularity: the full rotation between the two coordinate systems is solved by two successive rotations expressed as Euler angle vectors, each with an explicit physical meaning. Both simulation and real data are used to demonstrate the correctness and validity of this mosaic method. The method has good noise immunity due to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization to the absolute orientation. It is applicable to high-precision measurement of 3-D surfaces and to 3-D terrain mosaicking.
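
    A condensed sketch of the RANSAC-style loop described above; the SVD (Kabsch) solution stands in for the paper's Euler-angle absolute orientation, and the thresholds and toy data are illustrative assumptions:

```python
# Sketch: sample three non-collinear correspondences (with a triangle-shape check),
# solve a rigid transform, count inliers, and refit on the largest consensus set.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_align(src, dst, trials=500, inlier_tol=0.05, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(src), 3, replace=False)
        a, b, c = src[idx]
        if 0.5 * np.linalg.norm(np.cross(b - a, c - a)) < 1e-3:
            continue                              # reject near-collinear (degenerate) triangles
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(src[best_inliers], dst[best_inliers])   # refit on consensus set

# toy usage: a known rigid motion plus noise and a few gross mismatches
rng = np.random.default_rng(1)
src = rng.random((200, 3))
R_true, t_true = rigid_transform(rng.random((4, 3)), rng.random((4, 3)))  # some valid rotation
dst = src @ R_true.T + t_true + rng.normal(scale=0.005, size=src.shape)
dst[:20] += rng.random((20, 3))                   # simulated mismatched points
R_est, t_est = ransac_align(src, dst)
print(np.allclose(R_est, R_true, atol=0.05))
```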

  20. Experimental rugged fitness landscape in protein sequence space.

    PubMed

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-12-20

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×10^4-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.

  1. Experimental Rugged Fitness Landscape in Protein Sequence Space

    PubMed Central

    Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya

    2006-01-01

    The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12–130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×10^4-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18–24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region. PMID:17183728
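
    A toy NK-landscape sketch of the k-interaction idea and an adaptive walk with random single-site substitutions; n, k, the binary alphabet, and the walk length are illustrative and far smaller than the 139-residue experiment:

```python
# Sketch: each site's fitness contribution depends on that site plus k randomly chosen
# others; an adaptive walk accepts random single-site changes that raise fitness.
import numpy as np

rng = np.random.default_rng(0)
n, k = 30, 4
neighbors = np.array([rng.choice([j for j in range(n) if j != i], k, replace=False)
                      for i in range(n)])
tables = {}                                  # lazily drawn random contribution tables

def fitness(seq):
    total = 0.0
    for i in range(n):
        key = (i, tuple(seq[[i, *neighbors[i]]]))
        if key not in tables:
            tables[key] = rng.random()       # random contribution per local configuration
        total += tables[key]
    return total / n

seq = rng.integers(0, 2, n)                  # binary "sequence"
f = fitness(seq)
for _ in range(2000):                        # adaptive walk with random single-site substitutions
    trial = seq.copy()
    trial[rng.integers(n)] ^= 1
    ft = fitness(trial)
    if ft > f:
        seq, f = trial, ft
print(round(f, 3))                           # fitness of the local peak reached
```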

  2. The Evaluation of Teachers' Job Performance Based on Total Quality Management (TQM)

    ERIC Educational Resources Information Center

    Shahmohammadi, Nayereh

    2017-01-01

    This study aimed to evaluate teachers' job performance based on the total quality management (TQM) model. This was a descriptive survey study. The target population consisted of all primary school teachers in Karaj (N = 2917). Using the Cochran formula and simple random sampling, 340 participants were selected as the sample. A total quality management…
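
    The Cochran calculation implied above can be reproduced under the usual default assumptions (95% confidence, p = 0.5, 5% margin of error), which are assumed here rather than stated in the abstract:

```python
# Cochran sample size with finite-population correction; z, p, and e are the
# conventional defaults, assumed rather than taken from the study.
def cochran_sample_size(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    n0 = (z**2 * p * (1 - p)) / e**2              # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)          # finite-population correction
    return round(n)

print(cochran_sample_size(2917))                  # ~340, matching the reported sample
```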

  3. Parent Reactions to a School-Based Body Mass Index Screening Program

    ERIC Educational Resources Information Center

    Johnson, Suzanne Bennett; Pilkington, Lorri L.; Lamp, Camilla; He, Jianghua; Deeb, Larry C.

    2009-01-01

    Background: This study assessed parent reactions to school-based body mass index (BMI) screening. Methods: After a K-8 BMI screening program, parents were sent a letter detailing their child's BMI results. Approximately 50 parents were randomly selected for interview from each of 4 child weight-classification groups (overweight, at risk of…

  4. Location Based Services for Outdoor Ecological Learning System: Design and Implementation

    ERIC Educational Resources Information Center

    Hsiao, Hsien-Sheng; Lin, Chih-Cheng; Feng, Ruei-Ting; Li, Kun Jing

    2010-01-01

    This paper aimed to demonstrate how location-based services were implemented in ubiquitous outdoor ecological learning system. In an elementary school in northern Taiwan, two fifth grade classes on an ecology project were randomly selected: The experimental group could access the ecological learning system on hand-held devices while the control…

  5. Effects of Activity Based Blended Learning Strategy on Prospective of Teachers' Achievement and Motivation

    ERIC Educational Resources Information Center

    Abdelraheem, Ahmed Yousif; Ahmed, Abdelrahman Mohammed

    2015-01-01

    The study investigates the effect of Activity based Blended Learning strategy and Conventional Blended Learning strategy on students' achievement and motivation. Two groups namely, experimental and control group from Sultan Qaboos University were selected randomly for the study. To assess students' achievement in the different groups, pre- and…

  6. An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks

    PubMed Central

    Salim, Shelly; Moh, Sangman

    2016-01-01

    A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead. PMID:27376290

  7. An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks.

    PubMed

    Salim, Shelly; Moh, Sangman

    2016-06-30

    A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead.

  8. Genetic parameters for body condition score, body weight, milk yield, and fertility estimated using random regression models.

    PubMed

    Berry, D P; Buckley, F; Dillon, P; Evans, R D; Rath, M; Veerkamp, R F

    2003-11-01

    Genetic (co)variances between body condition score (BCS), body weight (BW), milk yield, and fertility were estimated using a random regression animal model extended to multivariate analysis. The data analyzed included 81,313 BCS observations, 91,937 BW observations, and 100,458 milk test-day yields from 8725 multiparous Holstein-Friesian cows. A cubic random regression was sufficient to model the changing genetic variances for BCS, BW, and milk across different days in milk. The genetic correlations between BCS and fertility changed little over the lactation; genetic correlations between BCS and interval to first service and between BCS and pregnancy rate to first service varied from -0.47 to -0.31, and from 0.15 to 0.38, respectively. This suggests that maximum genetic gain in fertility from indirect selection on BCS should be based on measurements taken in midlactation when the genetic variance for BCS is largest. Selection for increased BW resulted in shorter intervals to first service, but more services and poorer pregnancy rates; genetic correlations between BW and pregnancy rate to first service varied from -0.52 to -0.45. Genetic selection for higher lactation milk yield alone through selection on increased milk yield in early lactation is likely to have a more deleterious effect on genetic merit for fertility than selection on higher milk yield in late lactation.

  9. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.

  10. Improved site-specific recombinase-based method to produce selectable marker- and vector-backbone-free transgenic cells

    NASA Astrophysics Data System (ADS)

    Yu, Yuan; Tong, Qi; Li, Zhongxia; Tian, Jinhai; Wang, Yizhi; Su, Feng; Wang, Yongsheng; Liu, Jun; Zhang, Yong

    2014-02-01

    PhiC31 integrase-mediated gene delivery has been extensively used in gene therapy and animal transgenesis. However, random integration events are observed in phiC31-mediated integration in different types of mammalian cells; as a result, the efficiencies of pseudo attP site integration and evaluation of site-specific integration are compromised. To improve this system, we used an attB-TK fusion gene as a negative selection marker, thereby eliminating random integration during phiC31-mediated transfection. We also excised the selection system and plasmid bacterial backbone by using two other site-specific recombinases, Cre and Dre. Thus, we generated clean transgenic bovine fetal fibroblast cells free of selectable marker and plasmid bacterial backbone. When these clean cells were used as donor nuclei for somatic cell nuclear transfer (SCNT), the resulting SCNT embryos showed developmental competence similar to that of embryos derived from non-transgenic cells. Therefore, the present gene delivery system should facilitate the development of gene therapy and agricultural biotechnology.

  11. Promoting state health department evidence-based cancer and chronic disease prevention: a multi-phase dissemination study with a cluster randomized trial component

    PubMed Central

    2013-01-01

    Background Cancer and other chronic diseases reduce quality and length of life and productivity, and represent a significant financial burden to society. Evidence-based public health approaches to prevent cancer and other chronic diseases have been identified in recent decades and have the potential for high impact. Yet, barriers to implement prevention approaches persist as a result of multiple factors including lack of organizational support, limited resources, competing emerging priorities and crises, and limited skill among the public health workforce. The purpose of this study is to learn how best to promote the adoption of evidence based public health practice related to chronic disease prevention. Methods/design This paper describes the methods for a multi-phase dissemination study with a cluster randomized trial component that will evaluate the dissemination of public health knowledge about evidence-based prevention of cancer and other chronic diseases. Phase one involves development of measures of practitioner views on and organizational supports for evidence-based public health and data collection using a national online survey involving state health department chronic disease practitioners. In phase two, a cluster randomized trial design will be conducted to test receptivity and usefulness of dissemination strategies directed toward state health department chronic disease practitioners to enhance capacity and organizational support for evidence-based chronic disease prevention. Twelve state health department chronic disease units will be randomly selected and assigned to intervention or control. State health department staff and the university-based study team will jointly identify, refine, and select dissemination strategies within intervention units. Intervention (dissemination) strategies may include multi-day in-person training workshops, electronic information exchange modalities, and remote technical assistance. Evaluation methods include pre-post surveys, structured qualitative phone interviews, and abstraction of state-level chronic disease prevention program plans and progress reports. Trial registration clinicaltrials.gov: NCT01978054. PMID:24330729

  12. Competitive seeds-selection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jiuhua; Liu, Qipeng; Wang, Lin; Wang, Xiaofan

    2017-02-01

    This paper investigates a competitive diffusion model where two competitors simultaneously select a set of nodes (seeds) in the network to influence. We focus on the problem of how to select these seeds such that, when the diffusion process terminates, a competitor can obtain more support than its opponent. Instead of studying this problem in the game-theoretic framework as in the existing work, in this paper we design several heuristic seed-selection strategies inspired by commonly used centrality measures: Betweenness Centrality (BC), Closeness Centrality (CC), Degree Centrality (DC), Eigenvector Centrality (EC), and K-shell Centrality (KS). We mainly compare three centrality-based strategies, which perform better in competing with the random selection strategy, through simulations on both real and artificial networks. Even though network structure varies across different networks, we find certain common trends appearing in all of these networks. Roughly speaking, BC-based strategy and DC-based strategy are better than CC-based strategy. Moreover, if a competitor adopts CC-based strategy, then BC-based strategy is a better strategy than DC-based strategy for his opponent, and the superiority of BC-based strategy decreases as the heterogeneity of the network decreases.
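
    The centrality-based seed-selection heuristics compared above can be sketched in a few lines; the example below assumes the networkx library and a toy scale-free graph, and simply ranks nodes by each measure (the competitive diffusion process itself is not reproduced here).

```python
import networkx as nx

def top_m_seeds(graph, m, centrality="degree"):
    """Rank nodes by a chosen centrality measure and return the top-m as seeds."""
    measures = {
        "degree": nx.degree_centrality,
        "betweenness": nx.betweenness_centrality,
        "closeness": nx.closeness_centrality,
        "eigenvector": nx.eigenvector_centrality_numpy,
    }
    scores = measures[centrality](graph)
    return sorted(scores, key=scores.get, reverse=True)[:m]

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(200, 3, seed=42)   # toy scale-free network
    for name in ("degree", "betweenness", "closeness"):
        print(name, top_m_seeds(g, 5, name))
```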

  13. Narrow line width dual wavelength semiconductor optical amplifier based random fiber laser

    NASA Astrophysics Data System (ADS)

    Shawki, Heba A.; Kotb, Hussein E.; Khalil, Diaa

    2018-02-01

    A novel narrow-linewidth, single-longitudinal-mode (SLM), dual-wavelength random fiber laser with 20 nm separation between the 1530 and 1550 nm wavelengths is presented. The laser is based on Rayleigh backscattering in a 2 km standard single-mode fiber acting as distributed mirrors, and on a semiconductor optical amplifier (SOA) as the optical gain medium. Two optical bandpass filters provide the selectivity for the two wavelengths, and two Faraday rotator mirrors stabilize the two lasing wavelengths against random fiber birefringence. The optical signal-to-noise ratio (OSNR) was measured to be 38 dB. The laser linewidth was measured to be 13.3 and 14 kHz at 1530 and 1550 nm, respectively, at an SOA pump current of 370 mA.

  14. Exact intervals and tests for median when one sample value possibly an outlier

    NASA Technical Reports Server (NTRS)

    Keller, G. J.; Walsh, J. E.

    1973-01-01

    Available are independent observations (continuous data) that are believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejection of an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them. Either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations, and also, can select the wrong one of the two above situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations remain valid under either of the two situations.
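
    The order-statistic construction mentioned above can be illustrated with the standard sign-test interval for the median: the interval between the i-th smallest and i-th largest observations covers the median with exact, distribution-free confidence determined by a Binomial(n, 1/2) tail. The sketch below is a generic illustration of that construction, not the authors' specific outlier-robust procedure.

```python
from math import comb

def median_ci(sample, confidence=0.95):
    """Distribution-free CI for the median using two symmetric order statistics.

    The interval (x_(i), x_(n+1-i)) has exact coverage 1 - 2 * P(Bin(n, 1/2) <= i-1),
    so we pick the largest i (tightest interval) still meeting the target coverage."""
    x = sorted(sample)
    n = len(x)
    best = None
    for i in range(1, n // 2 + 1):
        tail = sum(comb(n, j) for j in range(i)) / 2 ** n   # P(Bin(n, 1/2) <= i-1)
        coverage = 1 - 2 * tail
        if coverage >= confidence:
            best = (i, coverage)
    if best is None:
        raise ValueError("sample too small for the requested confidence")
    i, coverage = best
    return x[i - 1], x[n - i], coverage

if __name__ == "__main__":
    data = [4.1, 5.0, 4.7, 5.3, 4.9, 5.6, 4.4, 5.1, 4.8, 5.2, 9.9]  # last value a possible outlier
    lo, hi, cov = median_ci(data, 0.95)
    print(f"95% CI for median: ({lo}, {hi}) with exact coverage {cov:.3f}")
```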

  15. Distribution of Orientation Selectivity in Recurrent Networks of Spiking Neurons with Different Random Topologies

    PubMed Central

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704

  16. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned for each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for the validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any age can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, selection for body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.

  17. Blood Selenium Concentration and Blood Cystatin C Concentration in a Randomly Selected Population of Healthy Children Environmentally Exposed to Lead and Cadmium.

    PubMed

    Gać, Paweł; Pawlas, Natalia; Wylężek, Paweł; Poręba, Rafał; Poręba, Małgorzata; Pawlas, Krystyna

    2017-01-01

    This study aimed at evaluation of a relationship between blood selenium concentration (Se-B) and blood cystatin C concentration (CST) in a randomly selected population of healthy children, environmentally exposed to lead and cadmium. The studies were conducted on 172 randomly selected children (7.98 ± 0.97 years). Among participants, subgroups were distinguished manifesting marginally low blood selenium concentration (Se-B: 40-59 μg/l), suboptimal blood selenium concentration (Se-B: 60-79 μg/l) or optimal blood selenium concentration (Se-B ≥ 80 μg/l). At the subsequent stage, analogous subgroups of participants were selected separately in groups of children with BMI below the median value (BMI <16.48 kg/m²) and in children with BMI ≥ the median value (BMI ≥16.48 kg/m²). In all participants, values of Se-B and CST were estimated. In the entire group of examined children no significant differences in mean CST values were detected between groups distinguished on the basis of normative Se-B values. Among children with BMI below 16.48 kg/m², children with marginally low Se-B manifested significantly higher mean CST values, as compared to children with optimum Se-B (0.95 ± 0.07 vs. 0.82 ± 0.15 mg/l, p < 0.05). In summary, in a randomly selected population of healthy children no relationships could be detected between blood selenium concentration and blood cystatin C concentration. On the other hand, in children with low body mass index, a negative non-linear relationship was present between blood selenium concentration and blood cystatin C concentration.

  18. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    PubMed

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the set of parameters used to select potential miRNA candidates for experimental verification.
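
    A minimal sketch of the screening step, assuming Python: given a pre-computed mean and standard deviation of MFE values for randomized sequences of a similar composition, the observed MFE is converted to a P-value under a normal model. The lookup table and the rounding-based lookup below are illustrative placeholders standing in for the authors' pre-computed grid and interpolation.

```python
from math import erf, sqrt

# Hypothetical pre-computed (mean, sd) of MFE for randomized sequences,
# keyed by (GC fraction rounded to 0.1, length rounded to the nearest 10 nt).
PRECOMPUTED = {
    (0.5, 80): (-18.0, 4.5),
    (0.6, 80): (-22.0, 4.8),
}

def mfe_p_value(mfe, gc_fraction, length):
    """P(MFE_random <= observed MFE) under a normal model of randomized-sequence MFEs."""
    key = (round(gc_fraction, 1), int(round(length / 10.0)) * 10)
    mean, sd = PRECOMPUTED[key]
    z = (mfe - mean) / sd
    return 0.5 * (1 + erf(z / sqrt(2)))   # normal CDF

if __name__ == "__main__":
    # A candidate with an unusually low MFE gets a small P-value -> keep it for verification.
    print(f"P = {mfe_p_value(mfe=-35.0, gc_fraction=0.52, length=78):.5f}")
```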

  19. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    PubMed

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem of traditional K-means clustering, in which initial clustering centers are selected randomly, we proposed a new K-means segmentation algorithm based on robustly selecting 'peaks' standing for White Matter, Gray Matter and Cerebrospinal Fluid in the multi-peak gray histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means clustering centers and can segment the MRI brain image into the three tissue classes more effectively, accurately and stably. Extensive experiments have shown that the proposed algorithm overcomes several shortcomings of the traditional K-means clustering method, such as low efficiency, poor accuracy, weak robustness and long computation time. The histogram 'peak' selection idea of the proposed segmentation method is broadly applicable.
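
    A minimal sketch of the initialization idea, assuming numpy and scikit-learn; the peak-detection rule here is a simple separation heuristic rather than the authors' exact robust-peak criterion.

```python
import numpy as np
from sklearn.cluster import KMeans

def histogram_peak_centers(image, k=3, min_separation=20):
    """Pick k gray values at well-separated maxima of the gray histogram."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    order = np.argsort(hist)[::-1]          # gray levels from most to least frequent
    centers = []
    for g in order:
        if all(abs(int(g) - c) >= min_separation for c in centers):
            centers.append(int(g))
        if len(centers) == k:
            break
    return np.array(sorted(centers), dtype=float).reshape(-1, 1)

def segment(image, k=3):
    """K-means on gray values, seeded with histogram peaks instead of random centers."""
    init = histogram_peak_centers(image, k)
    km = KMeans(n_clusters=k, init=init, n_init=1).fit(image.reshape(-1, 1).astype(float))
    return km.labels_.reshape(image.shape)

if __name__ == "__main__":
    # Toy "image" with three intensity populations standing in for CSF, GM and WM.
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(m, 8, 2000) for m in (40, 120, 200)]).clip(0, 255)
    img = img.reshape(60, 100)
    labels = segment(img)
    print("cluster sizes:", np.bincount(labels.ravel()))
```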

  20. Factors Associated With Time to Site Activation, Randomization, and Enrollment Performance in a Stroke Prevention Trial.

    PubMed

    Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George

    2017-09-01

    Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In the CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time between site selection and authorization to randomize, the time between authorization to randomize and the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, specialty of the principal investigator, and site type. For 147 sites, the median time between site selection to authorization to randomize was 9.9 months (interquartile range, 7.7, 12.4), and factors associated with early site activation were not identified. The median time between authorization to randomize and a randomization was 4.6 months (interquartile range, 2.6, 10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and other factors examined were not significantly associated with time-to-randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with the non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected to the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217. © 2017 American Heart Association, Inc.

  1. Efficacy Trial of a Selective Prevention Program Targeting Both Eating Disorder Symptoms and Unhealthy Weight Gain among Female College Students

    ERIC Educational Resources Information Center

    Stice, Eric; Rohde, Paul; Shaw, Heather; Marti, C. Nathan

    2012-01-01

    Objective: Evaluate a selective prevention program targeting both eating disorder symptoms and unhealthy weight gain in young women. Method: Female college students at high-risk for these outcomes by virtue of body image concerns (N = 398; M age = 18.4 years, SD = 0.6) were randomized to the Healthy Weight group-based 4-hr prevention program,…

  2. Recombinant Peptides as Biomarkers for Metastatic Breast Cancer Response

    DTIC Science & Technology

    2007-10-01

    could be specific to breast cancer tumor models has just been concluded. In vivo biopanning was conducted with a T7 phage-based random peptide library...peptides selected from phage-displayed libraries. 15. SUBJECT TERMS Breast cancer, phage display, molecular imaging, personalized medicine 16...recombinant peptides from phage-displayed peptide libraries can be selected that bind to receptors activated in response to therapy. These peptides in turn

  3. Kansas Adult Observational Safety Belt Usage Rates

    DOT National Transportation Integrated Search

    2011-07-01

    Methodology of Adult Survey - based on the federal guidelines in the Uniform Criteria manual. The Kansas survey is performed at 548 sites on 6 different road types in 20 randomly selected counties which encompass 85% of the population of Kansas. The ...

  4. Assessing risk-adjustment approaches under non-random selection.

    PubMed

    Luft, Harold S; Dudley, R Adams

    2004-01-01

    Various approaches have been proposed to adjust for differences in enrollee risk in health plans. Because risk-selection strategies may have different effects on enrollment, we simulated three types of selection: dumping, skimming, and stinting. Concurrent diagnosis-based risk adjustment, and a hybrid using concurrent adjustment for about 8% of the cases and prospective adjustment for the rest, perform markedly better than prospective or demographic adjustments, both in terms of R² and the extent to which plans experience unwarranted gains or losses. The simulation approach offers a valuable tool for analysts in assessing various risk-adjustment strategies under different selection situations.

  5. Improving Secondary School Students' Achievement and Retention in Biology through Video-Based Multimedia Instruction

    ERIC Educational Resources Information Center

    Gambari, Amosa Isiaka; Yaki, Akawo Angwal; Gana, Eli S.; Ughovwa, Queen Eguono

    2014-01-01

    The study examined the effects of video-based multimedia instruction on secondary school students' achievement and retention in biology. In Nigeria, 120 students (60 boys and 60 girls) were randomly selected from four secondary schools assigned either into one of three experimental groups: Animation + Narration; Animation + On-screen Text;…

  6. Problem-Based Learning in an Eleventh Grade Chemistry Class: "Factors Affecting Cell Potential"

    ERIC Educational Resources Information Center

    Tarhan, Leman; Acar, Burcin

    2007-01-01

    The purpose of this research study was to examine the effectiveness of problem-based learning (PBL) on eleventh grade students' understanding of "The effects of temperature, concentration and pressure on cell potential" and also their social skills. Stratified randomly selected control and experimental groups with 20 students each were used in…

  7. The Effect of Cluster-Based Instruction on Mathematic Achievement in Inclusive Schools

    ERIC Educational Resources Information Center

    Gunarhadi, Sunardi; Anwar, Mohammad; Andayani, Tri Rejeki; Shaari, Abdull Sukor

    2016-01-01

    The research aimed to investigate the effect of Cluster-Based Instruction (CBI) on the academic achievement of Mathematics in inclusive schools. The sample was 68 students in two intact classes, including those with learning disabilities, selected using a cluster random technique among 17 inclusive schools in the regency of Surakarta. The two…

  8. Investigating the Relationship between Effective Communication of Spouse and Father-Child Relationship (Test Pattern Causes to Education Parents)

    ERIC Educational Resources Information Center

    Ataeifar, Robabeh; Amiri, Sholeh; Ali Nadi, Mohammad

    2016-01-01

    This research tested a model of the father-child relationship with effective spousal communication as a mediator, investigating attachment style, personality traits, communication skills, and spouses' sexual satisfaction. On this basis, 260 participants (fathers and children) were selected through proportional random sampling. Participants were…

  9. The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory

    ERIC Educational Resources Information Center

    Anil, Duygu

    2008-01-01

    In this study, the predictive power of experts' judgments of item characteristics, intended for conditions in which try-out practices cannot be applied, was examined against item characteristics computed under classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…

  10. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    PubMed

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
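
    The wrapper-style selection loop described above can be sketched generically; the example below assumes numpy, encodes wavelengths as a thresholded continuous DE vector, and scores subsets with a least-squares calibration error plus a sparsity penalty. These encoding and scoring choices are illustrative assumptions, not necessarily the authors' exact scheme.

```python
import numpy as np
from numpy.linalg import lstsq

def fitness(mask, X, y, penalty=0.02):
    """Figure of merit: calibration squared error plus a sparsity penalty
    so that uninformative wavelengths are not kept for free (smaller is better)."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    coef, *_ = lstsq(X[:, cols], y, rcond=None)
    mse = float(np.mean((X[:, cols] @ coef - y) ** 2))
    return mse + penalty * cols.size

def de_select(X, y, pop=30, gens=100, F=0.5, CR=0.9, seed=0):
    """Classic DE/rand/1/bin over continuous vectors; wavelengths with value > 0.5 are selected."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.random((pop, d))
    scores = np.array([fitness(P[i] > 0.5, X, y) for i in range(pop)])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = P[a] + F * (P[b] - P[c])
            cross = rng.random(d) < CR
            trial = np.where(cross, mutant, P[i])
            s = fitness(trial > 0.5, X, y)
            if s <= scores[i]:
                P[i], scores[i] = trial, s
    best = P[np.argmin(scores)] > 0.5
    return np.flatnonzero(best)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((80, 40))                     # 80 toy spectra x 40 wavelengths
    y = 2 * X[:, 5] - 1.5 * X[:, 17] + 0.05 * rng.standard_normal(80)
    print("selected wavelength indices:", de_select(X, y))
```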

  11. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  12. CW-SSIM kernel based random forest for image classification

    NASA Astrophysics Data System (ADS)

    Fan, Guangzhe; Wang, Zhou; Wang, Jiheng

    2010-07-01

    Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and the inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compared our proposed approach with the direct random forest method without a kernel and with the popular kernel-learning method, the support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach outperforms traditional methods, without requiring a feature selection procedure.

  13. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    PubMed

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various game scores reported on WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples in online surveys is warranted.

  14. PyGlobal: A toolkit for automated compilation of DFT-based descriptors.

    PubMed

    Nath, Shilpa R; Kurup, Sudheer S; Joshi, Kaustubh A

    2016-06-15

    Density Functional Theory (DFT)-based global reactivity descriptor calculations have emerged as powerful tools for studying the reactivity, selectivity, and stability of chemical and biological systems. A Python-based module, PyGlobal, has been developed for systematically parsing a typical Gaussian output file and extracting the relevant energies of the HOMO and LUMO. The corresponding global reactivity descriptors are then calculated and the data are saved into a spreadsheet compatible with applications such as Microsoft Excel and LibreOffice. The efficiency of the module was assessed by measuring the processing time for randomly selected Gaussian output files for 1000 molecules. © 2016 Wiley Periodicals, Inc.
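
    The descriptors such a tool compiles follow from standard conceptual-DFT formulas based on the frontier orbital energies; the short sketch below is a generic illustration (not PyGlobal's own code) computing the usual set from HOMO/LUMO energies in eV.

```python
def global_reactivity_descriptors(e_homo, e_lumo):
    """Standard conceptual-DFT descriptors from frontier orbital energies (eV),
    using Koopmans-type approximations IP = -E_HOMO and EA = -E_LUMO."""
    ip, ea = -e_homo, -e_lumo
    mu = -(ip + ea) / 2          # chemical potential (= -electronegativity)
    eta = (ip - ea) / 2          # chemical hardness
    softness = 1 / (2 * eta)     # global softness (one common convention)
    omega = mu ** 2 / (2 * eta)  # electrophilicity index
    return {"IP": ip, "EA": ea, "chemical_potential": mu,
            "hardness": eta, "softness": softness, "electrophilicity": omega}

if __name__ == "__main__":
    # Illustrative frontier energies (eV) for a hypothetical molecule.
    for name, value in global_reactivity_descriptors(-6.5, -1.2).items():
        print(f"{name:20s} {value:8.3f}")
```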

  15. Optical encrypted holographic memory using triple random phase-encoded multiplexing in photorefractive LiNbO3:Fe crystal

    NASA Astrophysics Data System (ADS)

    Tang, Li-Chuan; Hu, Guang W.; Russell, Kendra L.; Chang, Chen S.; Chang, Chi Ching

    2000-10-01

    We propose a new holographic memory scheme based on random phase-encoded multiplexing in a photorefractive LiNbO3:Fe crystal. Experimental results show that rotating a diffuser placed as a random phase modulator in the path of the reference beam provides a simple yet effective method of increasing the holographic storage capabilities of the crystal. Combining this rotational multiplexing with angular multiplexing offers further advantages. Storage capabilities can be optimized by using a post-image random phase plate in the path of the object beam. The technique is applied to a triple phase-encoded optical security system that takes advantage of the high angular selectivity of the angular-rotational multiplexing components.

  16. Results from six generations of selection for intramuscular fat in Duroc swine using real-time ultrasound. I. Direct and correlated phenotypic responses to selection.

    PubMed

    Schwab, C R; Baas, T J; Stalder, K J; Nettleton, D

    2009-09-01

    A study was conducted to evaluate the efficacy of selection for intramuscular fat (IMF) in a population of purebred Duroc swine using real-time ultrasound. Forty gilts were purchased from US breeders and randomly mated for 2 generations to boars available in regional boar studs, resulting in a base population of 56 litters. Littermate pairs of gilts from this population were randomly assigned to a select line (SL) or control line (CL) and mated to the same sire to establish genetic ties between lines. At an average BW of 114 kg, a minimum of 4 longitudinal ultrasound images were collected 7 cm off-midline across the 10th to 13th ribs of all pigs for the prediction of IMF (UIMF). At least 1 barrow or gilt was slaughtered from each litter, and carcass data were collected. A sample of the LM from the 10th to 11th rib interface was analyzed for carcass IMF (CIMF). Breeding values for IMF were estimated by fitting a 2-trait (UIMF and CIMF) animal model in MATVEC. In the SL, selection in each subsequent generation was based on EBV for IMF with the top 10 boars and top 75 gilts used to produce the next generation. One boar from each sire family and 50 to 60 gilts representing all sire families were randomly selected to maintain the CL. Through 6 generations of selection, an 88% improvement in IMF has been realized (4.53% in SL vs. 2.41% in CL). Results of this study revealed no significant correlated responses in measures of growth performance. However, 6 generations of selection for IMF have yielded correlated effects of decreased loin muscle area and increased backfat. Additionally, the SL obtained more desirable objective measures of tenderness and sensory evaluations of flavor and off-flavor. Meat quality characteristics of pH, water holding capacity, and percent cooking loss were not significantly affected by selection for IMF. Selection for IMF using real-time ultrasound is effective but may be associated with genetic ramifications for carcass composition traits. Intramuscular fat may be used in purebred Duroc swine breeding programs as an indicator trait for sensory traits that influence consumer acceptance; however, rapid improvement should not be expected when simultaneous improvement in other trait categories is also pursued.

  17. Constructing high complexity synthetic libraries of long ORFs using in vitro selection

    NASA Technical Reports Server (NTRS)

    Cho, G.; Keefe, A. D.; Liu, R.; Wilson, D. S.; Szostak, J. W.

    2000-01-01

    We present a method that can significantly increase the complexity of protein libraries used for in vitro or in vivo protein selection experiments. Protein libraries are often encoded by chemically synthesized DNA, in which part of the open reading frame is randomized. There are, however, major obstacles associated with the chemical synthesis of long open reading frames, especially those containing random segments. Insertions and deletions that occur during chemical synthesis cause frameshifts, and stop codons in the random region will cause premature termination. These problems can together greatly reduce the number of full-length synthetic genes in the library. We describe a strategy in which smaller segments of the synthetic open reading frame are selected in vitro using mRNA display for the absence of frameshifts and stop codons. These smaller segments are then ligated together to form combinatorial libraries of long uninterrupted open reading frames. This process can increase the number of full-length open reading frames in libraries by up to two orders of magnitude, resulting in protein libraries with complexities of greater than 10¹³. We have used this methodology to generate three types of displayed protein library: a completely random sequence library, a library of concatemerized oligopeptide cassettes with a propensity for forming amphipathic alpha-helical or beta-strand structures, and a library based on one of the most common enzymatic scaffolds, the alpha/beta (TIM) barrel. Copyright 2000 Academic Press.

  18. Prevalence and pattern of self-medication in Karachi: A community survey

    PubMed Central

    Afridi, M. Iqbal; Rasool, Ghulam; Tabassum, Rabia; Shaheen, Marriam; Siddiqullah; Shujauddin, M.

    2015-01-01

    Objective: To study the prevalence and pattern of self-medication among adult males and females in Karachi, Pakistan. Methods: This cross-sectional community-based survey was carried out in five randomly selected towns of Karachi (Defence, Gulshan-e-Iqbal, North Nazimabad, Malir, Orangi town) over a period of 3 months (October, November & December 2012). A sample of 500 adults (250 males & 250 females), selected by systematic random selection from different towns of Karachi, was inducted in this study. The city was divided into 5 zones and one town from each zone was selected by systematic randomization. The first available male and female from each randomly selected house were included in the study. After consent and confidentiality assurance they were interviewed using a semi-structured proforma designed for this purpose. Results were analyzed and tabulated through SPSS v14.0. Results: The prevalence of self-medication in males and females in Karachi was found to be 84.8% (males 88.4% and females 81.2%). The most frequent symptoms for which self-medication was used were headache (32.7%) and fever (23.3%), and the medicines used were painkillers (28.8%) and fever-reducing medicines (19.8%). The most common reason (33.3%) was previous experience with a similar symptom. Conclusion: Self-medication is highly prevalent (84.8%) in Karachi. It was frequently used for headache followed by fever. Predominantly painkillers, fever reducers and cough syrups were used in the form of tablets and syrups. The main source of medicines was friends for males and relatives for females. PMID:26649022

  19. Knowledge diffusion of dynamical network in terms of interaction frequency.

    PubMed

    Liu, Jian-Guo; Zhou, Qing; Guo, Qiang; Yang, Zhen-Hua; Xie, Fei; Han, Jing-Ti

    2017-09-07

    In this paper, we present a knowledge diffusion (SKD) model for dynamic networks that takes into account the interaction frequency, which is commonly used to measure social closeness. A set of agents, which are initially interconnected to form a random network, either exchange knowledge with their neighbors or move toward a new location through an edge-rewiring procedure. The knowledge exchange between agents is governed by a transfer rule: with probability p, the target node preferentially selects one neighbor node to exchange knowledge with according to their interaction frequency rather than the knowledge distance; otherwise, with probability 1 - p, the target node preferentially builds a new link with one of its second-order neighbors or selects one node in the system at random. The simulation results show that, compared with the null model defined by the random selection mechanism and the traditional knowledge diffusion (TKD) model driven by knowledge distance, knowledge spreads faster under the SKD model driven by interaction frequency. In particular, the network structure under SKD evolves to be assortative, which is a fundamental feature of social networks. This work is helpful for deeply understanding the coevolution of knowledge diffusion and network structure.
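
    One update step of the stated rule can be sketched as follows, assuming networkx; the knowledge-exchange and rewiring details below are simplified illustrations of the described mechanism, not the authors' exact model.

```python
import random
import statistics
from collections import defaultdict

import networkx as nx

def skd_step(g, knowledge, freq, p=0.7, rng=random):
    """One update: a random target node either exchanges knowledge with a neighbor
    chosen by interaction frequency (prob. p) or rewires to a second-order neighbor."""
    target = rng.choice(list(g.nodes))
    neighbors = list(g.neighbors(target))
    if neighbors and rng.random() < p:
        # pick the partner proportionally to past interaction frequency
        weights = [freq[(target, n)] for n in neighbors]
        partner = rng.choices(neighbors, weights=weights, k=1)[0]
        avg = (knowledge[target] + knowledge[partner]) / 2   # simple knowledge exchange
        knowledge[target] = knowledge[partner] = avg
        freq[(target, partner)] += 1
        freq[(partner, target)] += 1
    else:
        second_order = {w for n in neighbors for w in g.neighbors(n)} - set(neighbors) - {target}
        new = rng.choice(sorted(second_order)) if second_order else rng.choice(list(g.nodes))
        if new != target and not g.has_edge(target, new):
            if neighbors:
                g.remove_edge(target, rng.choice(neighbors))  # edge-rewiring move
            g.add_edge(target, new)
            freq[(target, new)] = freq[(new, target)] = 1

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(50, 0.1, seed=3)
    knowledge = {n: random.random() for n in g.nodes}
    freq = defaultdict(lambda: 1)
    for _ in range(2000):
        skd_step(g, knowledge, freq)
    # knowledge values converge as diffusion proceeds, so the spread shrinks
    print("knowledge spread (std):", round(statistics.pstdev(knowledge.values()), 4))
```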

  20. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 47 (Telecommunication), Volume 1, 2010 edition. FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures. § 1.1604 Post-selection hearings. (a) Following the random...

  1. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 47 (Telecommunication), Volume 1, 2011 edition. FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures. § 1.1604 Post-selection hearings. (a) Following the random...

  2. Our Town

    ERIC Educational Resources Information Center

    Robinson, Amanda

    2015-01-01

    This article outlines an issue-based lesson for a physical science course in which students investigate potential alternative energy sources for Alternatown, a fictitious city. Students are randomly selected to serve as town council members or as representatives of different alternative energy source options put before the council. The…

  3. Wave propagation modeling in composites reinforced by randomly oriented fibers

    NASA Astrophysics Data System (ADS)

    Kudela, Pawel; Radzienski, Maciej; Ostachowicz, Wieslaw

    2018-02-01

    A new method for prediction of elastic constants in randomly oriented fiber composites is proposed. It is based on mechanics of composites, the rule of mixtures and total mass balance tailored to the spectral element mesh composed of 3D brick elements. Selected elastic properties predicted by the proposed method are compared with values obtained by another theoretical method. The proposed method is applied for simulation of Lamb waves in glass-epoxy composite plate reinforced by randomly oriented fibers. Full wavefield measurements conducted by the scanning laser Doppler vibrometer are in good agreement with simulations performed by using the time domain spectral element method.
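
    The rule-of-mixtures ingredient of the procedure is easy to illustrate; the sketch below uses generic textbook formulas for a 2D randomly oriented short-fiber composite with illustrative constants, not the paper's glass-epoxy data or its full spectral-element formulation.

```python
def rule_of_mixtures_random(E_f, E_m, V_f):
    """In-plane modulus estimate for a composite with 2D randomly oriented fibers.

    E_L: longitudinal (Voigt) rule of mixtures, E_T: inverse (Reuss) rule of mixtures,
    combined with the common 3/8-5/8 averaging for random in-plane orientation."""
    V_m = 1.0 - V_f
    E_L = V_f * E_f + V_m * E_m              # parallel (Voigt) bound
    E_T = 1.0 / (V_f / E_f + V_m / E_m)      # series (Reuss) bound
    return 0.375 * E_L + 0.625 * E_T

if __name__ == "__main__":
    # Illustrative values: E-glass fiber ~72 GPa, epoxy matrix ~3.5 GPa, 30% fiber volume.
    print(f"E_random ≈ {rule_of_mixtures_random(72.0, 3.5, 0.30):.1f} GPa")
```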

  4. Surgical treatment of hepatocellular carcinoma: evidence-based outcomes.

    PubMed

    Yamazaki, Shintaro; Takayama, Tadatoshi

    2008-02-07

    Surgeons may be severely criticized from the perspective of evidence-based medicine because the majority of surgical publications appear not to be convincing. In the top nine surgical journals in 1996, half of the 175 publications refer to pilot studies lacking a control group, 18% to animal experiments, and only 5% to randomized controlled trials (RCT). There are five levels of clinical evidence: level 1 (randomized controlled trial), level 2 (prospective concurrent cohort study), level 3 (retrospective historical cohort study), level 4 (pre-post study), and level 5 (case report). Recently, a Japanese evidence-based guideline for the surgical treatment of hepatocellular carcinoma (HCC) was made by a committee (Chairman, Professor Makuuchi and five members). We searched the literature using the Medline Dialog System with four keywords: HCC, surgery, English papers, in the last 20 years. A total of 915 publications were identified systematically reviewed. At the first selection (in which surgery-dominant papers were selected), 478 papers survived. In the second selection (clearly concluded papers), 181 papers survived. In the final selection (clinically significant papers), 100 papers survived. The evidence level of the 100 surviving papers is shown here: level-1 papers (13%), level-2 papers (11%), level-3 papers (52%), and level-4 papers (24%); therefore, there were 24% prospective papers and 76% retrospective papers. Here, we present a part of the guideline on the five main surgical issues: indication to operation, operative procedure, peri-operative care, prognostic factor, and post-operative adjuvant therapy.

  5. Surgical treatment of hepatocellular carcinoma: Evidence-based outcomes

    PubMed Central

    Yamazaki, Shintaro; Takayama, Tadatoshi

    2008-01-01

    Surgeons may be severely criticized from the perspective of evidence-based medicine because the majority of surgical publications appear not to be convincing. In the top nine surgical journals in 1996, half of the 175 publications refer to pilot studies lacking a control group, 18% to animal experiments, and only 5% to randomized controlled trials (RCT). There are five levels of clinical evidence: level 1 (randomized controlled trial), level 2 (prospective concurrent cohort study), level 3 (retrospective historical cohort study), level 4 (pre-post study), and level 5 (case report). Recently, a Japanese evidence-based guideline for the surgical treatment of hepatocellular carcinoma (HCC) was made by a committee (Chairman, Professor Makuuchi and five members). We searched the literature using the Medline Dialog System with four keywords: HCC, surgery, English papers, in the last 20 years. A total of 915 publications were identified systematically reviewed. At the first selection (in which surgery-dominant papers were selected), 478 papers survived. In the second selection (clearly concluded papers), 181 papers survived. In the final selection (clinically significant papers), 100 papers survived. The evidence level of the 100 surviving papers is shown here: level-1 papers (13%), level-2 papers (11%), level-3 papers (52%), and level-4 papers (24%); therefore, there were 24% prospective papers and 76% retrospective papers. Here, we present a part of the guideline on the five main surgical issues: indication to operation, operative procedure, peri-operative care, prognostic factor, and post-operative adjuvant therapy. PMID:18205256

  6. Coevolutionary dynamics in large, but finite populations

    NASA Astrophysics Data System (ADS)

    Traulsen, Arne; Claussen, Jens Christian; Hauert, Christoph

    2006-07-01

    Coevolving and competing species or game-theoretic strategies exhibit rich and complex dynamics for which a general theoretical framework based on finite populations is still lacking. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection with two strategies in finite populations based on microscopic processes [A. Traulsen, J. C. Claussen, and C. Hauert, Phys. Rev. Lett. 95, 238701 (2005)]. Here we generalize this approach in a twofold way: First, we extend the framework to an arbitrary number of strategies and second, we allow for mutations in the evolutionary process. The deterministic limit of infinite population size of the frequency-dependent Moran process yields the adjusted replicator-mutator equation, which describes the combined effect of selection and mutation. For finite populations, we provide an extension taking random drift into account. In the limit of neutral selection, i.e., whenever the process is determined by random drift and mutations, the stationary strategy distribution is derived. This distribution forms the background for the coevolutionary process. In particular, a critical mutation rate u_c is obtained separating two scenarios: above u_c the population predominantly consists of a mixture of strategies whereas below u_c the population tends to be in homogeneous states. For one of the fundamental problems in evolutionary biology, the evolution of cooperation under Darwinian selection, we demonstrate that the analytical framework provides excellent approximations to individual based simulations even for rather small population sizes. This approach complements simulation results and provides a deeper, systematic understanding of coevolutionary dynamics.
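
    The microscopic process underlying this analysis can be simulated directly; the sketch below is a minimal two-strategy frequency-dependent Moran process with mutation, where the payoff matrix, population size and mutation rate are illustrative choices rather than the paper's parameters.

```python
import random

def moran_step(counts, payoff, N, mutation=0.01, rng=random):
    """One birth-death step of a frequency-dependent Moran process with mutation.
    counts = [#A, #B]; payoff = 2x2 matrix; fitness = average payoff against the population."""
    i = counts[0]                                      # number of A players
    # average payoffs, excluding self-interaction
    f_a = (payoff[0][0] * (i - 1) + payoff[0][1] * (N - i)) / (N - 1) if i > 0 else 0.0
    f_b = (payoff[1][0] * i + payoff[1][1] * (N - i - 1)) / (N - 1) if i < N else 0.0
    total = f_a * i + f_b * (N - i)
    # birth: choose a parent proportional to fitness, then maybe mutate the offspring
    offspring = 0 if rng.random() < f_a * i / total else 1
    if rng.random() < mutation:
        offspring = 1 - offspring
    # death: remove a uniformly chosen individual
    dead = 0 if rng.random() < i / N else 1
    counts[offspring] += 1
    counts[dead] -= 1

if __name__ == "__main__":
    N = 100
    counts = [50, 50]
    payoff = [[3, 0], [5, 1]]        # a prisoner's-dilemma-like game (strategy A = cooperate)
    for _ in range(50000):
        moran_step(counts, payoff, N)
    print("final composition (A, B):", counts)
```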

  7. Randomness Testing of the Advanced Encryption Standard Finalist Candidates

    DTIC Science & Technology

    2000-03-28

    [Fragment of a table listing the statistical tests applied to the AES finalist algorithms: Random Excursions Variant, Rank, Serial, Spectral DFT, Lempel-Ziv Compression, Aperiodic Templates, Linear Complexity, ...] ...256 bits) for each of the algorithms, for a total of 80 different data sets. These data sets were selected based on the belief that they would be...useful in evaluating the randomness of cryptographic algorithms. Table 2 lists the eight data types. For a description of the data types, see Appendix

  8. Optimal design of aperiodic, vertical silicon nanowire structures for photovoltaics.

    PubMed

    Lin, Chenxi; Povinelli, Michelle L

    2011-09-12

    We design a partially aperiodic, vertically aligned silicon nanowire array that maximizes photovoltaic absorption. The optimal structure is obtained using a random walk algorithm with a transfer-matrix-method-based electromagnetic forward solver. The optimal aperiodic structure exhibits a 2.35-fold enhancement in ultimate efficiency compared to its periodic counterpart. The spectral behavior mimics that of a periodic array with a larger lattice constant. For our system, we find that randomly selected aperiodic structures invariably outperform the periodic array.
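
    The random-walk optimization loop can be sketched generically: perturb one structural parameter at a time, re-evaluate the objective, and keep only improving moves. The objective function below is a toy stand-in for the transfer-matrix electromagnetic solver, so the numbers are purely illustrative.

```python
import random

def toy_objective(positions):
    """Toy stand-in for the forward solver: scores a wire arrangement by how close
    the spacing variance is to a target amount of aperiodicity (illustrative only)."""
    spacing = [b - a for a, b in zip(positions, positions[1:])]
    mean = sum(spacing) / len(spacing)
    var = sum((s - mean) ** 2 for s in spacing) / len(spacing)
    return -abs(var - 0.02)

def random_walk_optimize(n_wires=10, steps=5000, step_size=0.05, seed=0):
    """Greedy random walk: perturb one wire position, keep the move only if the score improves."""
    rng = random.Random(seed)
    positions = [float(i) for i in range(n_wires)]     # start from a periodic array
    best = toy_objective(positions)
    for _ in range(steps):
        candidate = list(positions)
        i = rng.randrange(1, n_wires - 1)              # keep the end wires fixed
        candidate[i] += rng.uniform(-step_size, step_size)
        if candidate[i - 1] < candidate[i] < candidate[i + 1]:   # wires must stay ordered
            score = toy_objective(candidate)
            if score > best:
                positions, best = candidate, score
    return positions, best

if __name__ == "__main__":
    pos, score = random_walk_optimize()
    print("optimized positions:", [round(p, 3) for p in pos])
    print("objective:", round(score, 5))
```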

  9. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    PubMed

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a feature subset selection problem, that is, a combinatorial optimization problem. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to optimization problems in a wide variety of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the classification accuracy of the proposed hybrid evolutionary algorithm is superior to that of the basic PSO, EPSO and ABC algorithms when using feature subsets with a reduced number of features.

  10. [Study on correction of data bias caused by different missing mechanisms in survey of medical expenditure among students enrolling in Urban Resident Basic Medical Insurance].

    PubMed

    Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong

    2015-05-01

    A study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that deals with both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by students (or their parents) selected from child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the complete data; then a sample selection model was used to correct for NMAR within the multiple imputation; finally, a multi-factor analysis model was established. Based on 1 000 resampling runs, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. The influencing factors on annual medical expenditure among students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication, and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively address non-response bias and selection bias in the dependent variable of survey data.

  11. Bridging the gap between formal and experience-based knowledge for context-aware laparoscopy.

    PubMed

    Katić, Darko; Schuck, Jürgen; Wekerle, Anna-Laura; Kenngott, Hannes; Müller-Stich, Beat Peter; Dillmann, Rüdiger; Speidel, Stefanie

    2016-06-01

    Computer assistance is increasingly common in surgery. However, the amount of information is bound to overload processing abilities of surgeons. We propose methods to recognize the current phase of a surgery for context-aware information filtering. The purpose is to select the most suitable subset of information for surgical situations which require special assistance. We combine formal knowledge, represented by an ontology, and experience-based knowledge, represented by training samples, to recognize phases. For this purpose, we have developed two different methods. Firstly, we use formal knowledge about possible phase transitions to create a composition of random forests. Secondly, we propose a method based on cultural optimization to infer formal rules from experience to recognize phases. The proposed methods are compared with a purely formal knowledge-based approach using rules and a purely experience-based one using regular random forests. The comparative evaluation on laparoscopic pancreas resections and adrenalectomies employs a consistent set of quality criteria on clean and noisy input. The rule-based approaches proved best with noise-free data. The random forest-based ones were more robust in the presence of noise. Formal and experience-based knowledge can be successfully combined for robust phase recognition.

  12. The health of Canada's Aboriginal children: results from the First Nations and Inuit Regional Health Survey.

    PubMed

    MacMillan, Harriet L; Jamieson, Ellen; Walsh, Christine; Boyle, Michael; Crawford, Allison; MacMillan, Angus

    2010-04-01

    Reports on child health in Canada often refer to the disproportionate burden of poor health experienced by Aboriginal children and youth, yet little national data are available. This paper describes the health of First Nations and Inuit children and youth based on the First Nations and Inuit Regional Health Survey (FNIRHS). The FNIRHS combines data from 9 regional surveys conducted in 1996-1997 in Aboriginal reserve communities in all provinces. The target population consisted of all on-reserve communities. All households or a random sample of households or adults (depending on province) were selected based on their population representation. One child was randomly selected from each participating household, except in Ontario and Nova Scotia, where children were randomly selected based upon their population representation. Alberta did not include the section on children's health in their regional survey. Approximately 84% of adults, who were proxy respondents for their child, rated their children's health as very good or excellent. The most frequently reported conditions were ear problems (15%), followed by allergies (13%) and asthma (12%). Broken bones or fractures were the most frequently reported injuries (13%). Respondents reported that 17% of children had behavioural or emotional problems. Overall, 76% of children were reported to get along with the family "very well" or "quite well." While most respondents rated their child's health as very good or excellent, injuries, emotional and behavioural problems, respiratory conditions and ear problems were reported among many Aboriginal children. Issues such as substance abuse, exposure to violence and academic performance were not addressed in the 10 core survey questions. Clearly there is a need for more in-depth information about both the physical and emotional health of Aboriginal children and youth.

  13. Are outcomes reported in surgical randomized trials patient-important? A systematic review and meta-analysis

    PubMed Central

    Adie, Sam; Harris, Ian A.; Naylor, Justine M.; Mittal, Rajat

    2017-01-01

    Background The dangers of using surrogate outcomes are well documented. They may have little or no association with their patient-important correlates, leading to the approval and use of interventions that lack efficacy. We sought to assess whether primary outcomes in surgical randomized controlled trials (RCTs) are more likely to be patient-important outcomes than surrogate or laboratory-based outcomes. Methods We reviewed RCTs assessing an operative intervention published in 2008 and 2009 and indexed in MEDLINE, EMBASE or the Cochrane Central Register of Controlled Trials. After a pilot of the selection criteria, 1 reviewer selected trials and another reviewer checked the selection. We extracted information on outcome characteristics (patient-important, surrogate, or laboratory-based outcome) and whether they were primary or secondary outcomes. We calculated odds ratios (OR) and pooled them in a random-effects meta-analysis to obtain an overall estimate of the association between patient importance and primary outcome specification. Results In 350 included RCTs, a total of 8258 outcomes were reported (median 18 per trial). The mean proportion (per trial) of patient-important outcomes was 60%, and 66% of trials specified a patient-important primary outcome. The most commonly reported patient-important primary outcomes were morbid events (41%), intervention outcomes (11%), function (11%) and pain (9%). Surrogate and laboratory-based primary outcomes were reported in 33% and 8% of trials, respectively. Patient-important outcomes were not associated with primary outcome status (OR 0.82, 95% confidence interval 0.63–1.1, I² = 21%). Conclusion A substantial proportion of surgical RCTs specify primary outcomes that are not patient-important. Authors, journals and trial funders should insist that patient-important outcomes are the focus of study. PMID:28234219
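
    For readers unfamiliar with the pooling step, the sketch below shows a standard DerSimonian-Laird random-effects pooling of per-trial log odds ratios, which is the general form of analysis described; the input arrays are placeholders, not the review's extracted data.

```python
# Sketch of DerSimonian-Laird random-effects pooling of log odds ratios.
import numpy as np

def pool_random_effects(log_or, var):
    """Return the pooled OR and its 95% CI from per-trial log ORs and
    their variances."""
    log_or, var = np.asarray(log_or, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect weights
    mu_fe = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - mu_fe) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_or) - 1)) / c)   # between-trial variance
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    mu = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)
```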

  14. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    PubMed

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness) as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.

  15. Nest construction by a ground-nesting bird represents a potential trade-off between egg crypticity and thermoregulation.

    PubMed

    Mayer, Paul M; Smith, Levica M; Ford, Robert G; Watterson, Dustin C; McCutchen, Marshall D; Ryan, Mark R

    2009-04-01

    Predation selects against conspicuous colors in bird eggs and nests, while thermoregulatory constraints select for nest-building behavior that regulates incubation temperatures. We present results that suggest a trade-off between nest crypticity and thermoregulation of eggs based on selection of nest materials by piping plovers (Charadrius melodus), a ground-nesting bird that constructs simple, pebble-lined nests highly vulnerable to predators and exposed to temperature extremes. Piping plovers selected pebbles that were whiter and appeared closer in color to eggs than randomly available pebbles, suggesting a crypsis function. However, nests that were more contrasting in color to surrounding substrates were at greater risk of predation, suggesting an alternate strategy driving selection of white rocks. Near-infrared reflectance of nest pebbles was higher than randomly available pebbles, indicating a direct physical mechanism for heat control through pebble selection. Artificial nests constructed of randomly available pebbles heated more quickly and conferred heat to model eggs, causing eggs to heat more rapidly than in nests constructed from piping plover nest pebbles. Thermal models and field data indicated that temperatures inside nests may remain up to 2-6 degrees C cooler than surrounding substrates. Thermal models indicated that nests heat especially rapidly if not incubated, suggesting that nest construction behavior may serve to keep eggs cooler during the unattended laying period. Thus, pebble selection suggests a potential trade-off between maximizing heat reflectance to improve egg microclimate and minimizing conspicuous contrast of nests with the surrounding substrate to conceal eggs from predators. Nest construction behavior that employs light-colored, thermally reflective materials may represent an evolutionary response by birds and other egg-laying organisms to egg predation and heat stress.

  16. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
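
    The following is a minimal sketch of the general inverse-probability resampling idea behind such corrections (not the paper's exact stochastic oversampling or parametric bagging algorithms, and not the sambia implementation): records from the stratified sample are redrawn with weights proportional to the inverse of their inclusion probabilities before a random forest is fitted. All names and parameters are illustrative.

```python
# Sketch of inverse-probability resampling before random forest training.
# `incl_prob` holds each record's known inclusion probability from the
# two-phase design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_ip_resampled_forest(X, y, incl_prob, n_draw=None, seed=0):
    rng = np.random.default_rng(seed)
    w = 1.0 / np.asarray(incl_prob, float)     # inverse-probability weights
    p = w / w.sum()
    n_draw = n_draw or len(y)
    idx = rng.choice(len(y), size=n_draw, replace=True, p=p)
    forest = RandomForestClassifier(n_estimators=500, random_state=seed)
    return forest.fit(np.asarray(X)[idx], np.asarray(y)[idx])
```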

  17. Writing on wet paper

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav; Lisonek, Petr; Soukal, David

    2005-03-01

    In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography "writing on wet paper" and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.

  18. Process to Selectively Distinguish Viable from Non-Viable Bacterial Cells

    NASA Technical Reports Server (NTRS)

    LaDuc, Myron T.; Bernardini, Jame N.; Stam, Christina N.

    2010-01-01

    The combination of ethidium monoazide (EMA) and post-fragmentation, randomly primed DNA amplification technologies will enhance the analytical capability to discern viable from non-viable bacterial cells in spacecraft-related samples. Intercalating agents have been widely used since the inception of molecular biology to stain and visualize nucleic acids. Only recently have intercalating agents such as EMA been exploited to selectively distinguish viable from dead bacterial cells. Intercalating dyes can only penetrate the membranes of dead cells. Once through the membrane and actually inside the cell, they intercalate DNA and, upon photolysis with visible light, produce stable DNA monoadducts. Once the DNA is crosslinked, it becomes insoluble and unable to be fragmented for post-fragmentation, randomly primed DNA library formation. The DNA of viable organisms remains unaffected by the intercalating agents, allowing for amplification via post-fragmentation, randomly primed technologies. This results in the ability to carry out downstream nucleic acid-based analyses on viable microbes to the exclusion of all non-viable cells.

  19. Improved Neural Networks with Random Weights for Short-Term Load Forecasting

    PubMed Central

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting. PMID:26629825

  20. Improved Neural Networks with Random Weights for Short-Term Load Forecasting.

    PubMed

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting.
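
    As a rough sketch of the "random weights" idea underlying the NNRW family (omitting the paper's mutual-information input weighting and the kernel variant), the network below fixes random input-layer weights and learns only the output layer by ridge-regularized least squares; all sizes and parameters are illustrative.

```python
# Sketch of a single-hidden-layer network with fixed random weights.
import numpy as np

class RandomWeightNet:
    def __init__(self, n_hidden=100, ridge=1e-3, seed=0):
        self.n_hidden, self.ridge, self.seed = n_hidden, ridge, seed

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)       # random nonlinear features

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        A = H.T @ H + self.ridge * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)   # only layer that is learned
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```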

  1. Automatic learning-based beam angle selection for thoracic IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ at risk sparing and were superior to plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrated the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload, while sustaining plan quality.
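
    A minimal sketch of the learning step described above might look as follows: a random forest regressor maps anatomical features of candidate beams to a beam score, and the highest-scoring angles are retained. The feature matrices, the scoring target and the fixed number of beams are hypothetical placeholders; the paper's optimization over interbeam dependencies is not reproduced.

```python
# Sketch: random forest regression from anatomical beam features to a
# beam score, then selection of the top-scoring candidate angles.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_beam_angles(X_train, s_train, X_candidates, angles, n_beams=7):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_train, s_train)                  # learn feature -> score map
    scores = model.predict(X_candidates)
    best = np.argsort(scores)[::-1][:n_beams]    # keep highest-scoring angles
    return [angles[i] for i in best], scores[best]
```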

  2. Randomized clinical trial of Appendicitis Inflammatory Response score-based management of patients with suspected appendicitis.

    PubMed

    Andersson, M; Kolodziej, B; Andersson, R E

    2017-10-01

    The role of imaging in the diagnosis of appendicitis is controversial. This prospective interventional study and nested randomized trial analysed the impact of implementing a risk stratification algorithm based on the Appendicitis Inflammatory Response (AIR) score, and compared routine imaging with selective imaging after clinical reassessment. Patients presenting with suspicion of appendicitis between September 2009 and January 2012 from age 10 years were included at 21 emergency surgical centres and from age 5 years at three university paediatric centres. Registration of clinical characteristics, treatments and outcomes started during the baseline period. The AIR score-based algorithm was implemented during the intervention period. Intermediate-risk patients were randomized to routine imaging or selective imaging after clinical reassessment. The baseline period included 1152 patients, and the intervention period 2639, of whom 1068 intermediate-risk patients were randomized. In low-risk patients, use of the AIR score-based algorithm resulted in less imaging (19·2 versus 34·5 per cent; P < 0·001), fewer admissions (29·5 versus 42·8 per cent; P < 0·001), and fewer negative explorations (1·6 versus 3·2 per cent; P = 0·030) and operations for non-perforated appendicitis (6·8 versus 9·7 per cent; P = 0·034). Intermediate-risk patients randomized to the imaging and observation groups had the same proportion of negative appendicectomies (6·4 versus 6·7 per cent respectively; P = 0·884), number of admissions, number of perforations and length of hospital stay, but routine imaging was associated with an increased proportion of patients treated for appendicitis (53·4 versus 46·3 per cent; P = 0·020). AIR score-based risk classification can safely reduce the use of diagnostic imaging and hospital admissions in patients with suspicion of appendicitis. Registration number: NCT00971438 ( http://www.clinicaltrials.gov). © 2017 BJS Society Ltd Published by John Wiley & Sons Ltd.

  3. Mindfulness-based stress reduction for treating chronic headache: A systematic review and meta-analysis.

    PubMed

    Anheyer, Dennis; Leach, Matthew J; Klose, Petra; Dobos, Gustav; Cramer, Holger

    2018-01-01

    Background Mindfulness-based stress reduction/cognitive therapy are frequently used for pain-related conditions, but their effects on headache remain uncertain. This review aimed to assess the efficacy and safety of mindfulness-based stress reduction/cognitive therapy in reducing the symptoms of chronic headache. Data sources and study selection MEDLINE/PubMed, Scopus, CENTRAL, and PsychINFO were searched to 16 June 2017. Randomized controlled trials comparing mindfulness-based stress reduction/cognitive therapy with usual care or active comparators for migraine and/or tension-type headache, which assessed headache frequency, duration or intensity as a primary outcome, were eligible for inclusion. Risk of bias was assessed using the Cochrane Tool. Results Five randomized controlled trials (two on tension-type headache; one on migraine; two with mixed samples) with a total of 185 participants were included. Compared to usual care, mindfulness-based stress reduction/cognitive therapy did not improve headache frequency (three randomized controlled trials; standardized mean difference = 0.00; 95% confidence interval = -0.33,0.32) or headache duration (three randomized controlled trials; standardized mean difference = -0.08; 95% confidence interval = -1.03,0.87). Similarly, no significant difference between groups was found for pain intensity (five randomized controlled trials; standardized mean difference = -0.78; 95% confidence interval = -1.72,0.16). Conclusions Due to the low number, small scale and often high or unclear risk of bias of included randomized controlled trials, the results are imprecise; this may be consistent with either an important or negligible effect. Therefore, more rigorous trials with larger sample sizes are needed.

  4. Investigating the Washback Effects of Task-Based Instruction on the Iranian EFL Learners' Vocabulary Learning

    ERIC Educational Resources Information Center

    Hamzeh, Alireza

    2016-01-01

    The current research was an attempt to explore the washback impact of task-based instruction (TBI) on EFL Iranian learners' vocabulary development. To this end, conducting an Oxford Placement Test (OPT), 30 out of 72 EFL Iranian learners studying in an English language institute, were randomly selected. Then, they were assigned to experimental (N…

  5. Child's Weight Status and Parent's Response to a School-Based Body Mass Index Screening and Parent Notification Program

    ERIC Educational Resources Information Center

    Lee, Jiwoo; Kubik, Martha Y.

    2015-01-01

    This study examined the response of parents of elementary school-aged children to a school-based body mass index (BMI) screening and parent notification program conducted in one Minnesota school district in 2010-2011 and whether parent's response was moderated by child's weight status. Randomly selected parents (N = 122) of second- and…

  6. Effects of Computer Based Learning on Students' Attitudes and Achievements towards Analytical Chemistry

    ERIC Educational Resources Information Center

    Akcay, Hüsamettin; Durmaz, Asli; Tüysüz, Cengiz; Feyzioglu, Burak

    2006-01-01

    The aim of this study was to compare the effects of computer-based learning and traditional method on students' attitudes and achievement towards analytical chemistry. Students from Chemistry Education Department at Dokuz Eylul University (D.E.U) were selected randomly and divided into three groups; two experimental (Eg-1 and Eg-2) and a control…

  7. Examining the Use of Web-Based Tests for Testing Academic Vocabulary in EAP Instruction

    ERIC Educational Resources Information Center

    Dashtestani, Reza

    2015-01-01

    Interest in Web-based and computer-assisted language testing is growing in the field of English for academic purposes (EAP). In this study, four groups of undergraduate EAP students (n = 120), each group consisted of 30 students, were randomly selected from four different disciplines, i.e. biology, political sciences, psychology, and law. The four…

  8. Measuring New Environmental Paradigm Based on Students' Knowledge about Ecosystem and Locus of Control

    ERIC Educational Resources Information Center

    Putrawan, I. Made

    2015-01-01

    This research is aimed at obtaining information related to instrument development of Students' New Environmental Paradigm (NEP) based on their knowledge about ecosystem and Locus of Control (LOC). A survey method has been carried out by selecting senior high school students randomly with n = 362 (first stage 2013) and n = 722 (2014). Data analysed…

  9. The Effectiveness of the Harm Reduction Group Therapy Based on Bandura's Self-Efficacy Theory on Risky Behaviors of Drug-Dependent Sex Worker Women.

    PubMed

    Rabani-Bavojdan, Marjan; Rabani-Bavojdan, Mozhgan; Rajabizadeh, Ghodratollah; Kaviani, Nahid; Bahramnejad, Ali; Ghaffari, Zohreh; Shafiei-Bafti, Mehdi

    2017-07-01

    The aim of this study was to investigate the effectiveness of harm reduction group therapy based on Bandura's self-efficacy theory on the risky behaviors of sex workers in Kerman, Iran. A quasi-experimental two-group design (random selection with pre-test and post-test) was used, and a risky-behaviors questionnaire was used to collect the data. The sample was selected among sex workers referring to drop-in centers in Kerman. Subjects were randomly allocated to experimental and control groups; the sample consisted of 56 subjects. The intervention was delivered over 12 sessions, and the post-test was performed one month and two weeks after the completion of the sessions. The results were analyzed statistically. With harm reduction based on Bandura's self-efficacy theory, the risky behaviors of the experimental group, including injection behavior, sexual behavior, violence, and damage to the skin, were significantly reduced from pre-test to post-test (P < 0.010). Harm reduction group therapy based on Bandura's self-efficacy theory can reduce the risky behaviors of sex workers.

  10. On the feasibility of automatically selecting similar patients in highly individualized radiotherapy dose reconstruction for historic data of pediatric cancer survivors.

    PubMed

    Virgolin, Marco; van Dijk, Irma W E M; Wiersma, Jan; Ronckers, Cécile M; Witteveen, Cees; Bel, Arjan; Alderliesten, Tanja; Bosman, Peter A N

    2018-04-01

    The aim of this study is to establish the first step toward a novel and highly individualized three-dimensional (3D) dose distribution reconstruction method, based on CT scans and organ delineations of recently treated patients. Specifically, the feasibility of automatically selecting the CT scan of a recently treated childhood cancer patient who is similar to a given historically treated child who suffered from Wilms' tumor is assessed. A cohort of 37 recently treated children between 2 and 6 yr old is considered. Five potential notions of ground-truth similarity are proposed, each focusing on different anatomical aspects. These notions are automatically computed from CT scans of the abdomen and 3D organ delineations (liver, spleen, spinal cord, external body contour). The first is based on deformable image registration, the second on the Dice similarity coefficient, the third on the Hausdorff distance, the fourth on pairwise organ distances, and the last is computed by means of the overlap volume histogram. The relationship between typically available features of historically treated patients and the proposed ground-truth notions of similarity is studied by adopting state-of-the-art machine learning techniques, including random forest. Also, the feasibility of automatically selecting the most similar patient is assessed by comparing ground-truth rankings of similarity with predicted rankings. Similarities (mainly) based on the external abdomen shape and on the pairwise organ distances are highly correlated (Pearson r_p ≥ 0.70) and are successfully modeled with random forests based on historically recorded features (pseudo-R² ≥ 0.69). In contrast, similarities based on the shape of internal organs cannot be modeled. For the similarities that random forest can reliably model, an estimation of feature relevance indicates that abdominal diameters and weight are the most important. Experiments on automatically selecting similar patients lead to coarse, yet quite robust results: the most similar patient is retrieved only 22% of the time; however, the error in worst-case scenarios is limited, with the fourth most similar patient being retrieved. Results demonstrate that automatically selecting similar patients is feasible when focusing on the shape of the external abdomen and on the position of internal organs. Moreover, whereas the common practice in phantom-based dose reconstruction is to select a representative phantom using age, height, and weight as discriminant factors for any treatment scenario, our analysis on abdominal tumor treatment for children shows that the most relevant features are weight and the anterior-posterior and left-right abdominal diameters. © 2018 American Association of Physicists in Medicine.

  11. Determining Consumer Preference for Furniture Product Characteristics

    ERIC Educational Resources Information Center

    Turner, Carolyn S.; Edwards, Kay P.

    1974-01-01

    The paper describes instruments for determining preferences of consumers for selected product characteristics associated with furniture choices--specifically style, color, color scheme, texture, and materials--and the procedures for administration of those instruments. Results are based on a random sampling of public housing residents. (Author/MW)

  12. Text Detection and Translation from Natural Scenes

    DTIC Science & Technology

    2001-06-01

    There are no explicit tags around Chinese words. A module for Chinese word segmentation is included in the system. This segmentor uses a word-frequency list to make segmentation decisions. We tested the EBMT-based method using 50 randomly selected signs from our database, assuming perfect sign

  13. Evidentiary Pluralism as a Strategy for Research and Evidence-Based Practice in Rehabilitation Psychology

    PubMed Central

    Tucker, Jalie A.; Reed, Geoffrey M.

    2008-01-01

    This paper examines the utility of evidentiary pluralism, a research strategy that selects methods in service of content questions, in the context of rehabilitation psychology. Hierarchical views that favor randomized controlled clinical trials (RCTs) over other evidence are discussed, and RCTs are considered as they intersect with issues in the field. RCTs are vital for establishing treatment efficacy, but whether they are uniformly the best evidence to inform practice is critically evaluated. We argue that because treatment is only one of several variables that influence functioning, disability, and participation over time, an expanded set of conceptual and data analytic approaches should be selected in an informed way to support an expanded research agenda that investigates therapeutic and extra-therapeutic influences on rehabilitation processes and outcomes. The benefits of evidentiary pluralism are considered, including helping close the gap between the narrower clinical rehabilitation model and a public health disability model. KEY WORDS: evidence-based practice, evidentiary pluralism, rehabilitation psychology, randomized controlled trials PMID:19649150

  14. Vast Portfolio Selection with Gross-exposure Constraints*

    PubMed Central

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that, with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices perform similarly to the theoretically optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results of Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. Applications to portfolio selection, tracking, and improvement are also addressed. The utility of the new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
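
    A minimal numerical sketch of a gross-exposure-constrained minimum-variance portfolio (not the paper's estimator or data) can be set up as a constrained optimization: minimize the portfolio variance subject to full investment and a cap c on the gross exposure, the sum of absolute weights.

```python
# Sketch of a minimum-variance portfolio under a gross-exposure cap c;
# cov is an estimated covariance matrix (placeholder input).
import numpy as np
from scipy.optimize import minimize

def gross_exposure_portfolio(cov, c=1.6):
    """Minimize w' cov w subject to sum(w) = 1 and sum(|w|) <= c.
    c = 1 gives the no-short-sale portfolio; larger c allows short positions."""
    n = cov.shape[0]
    w0 = np.full(n, 1.0 / n)
    cons = [{"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},
            {"type": "ineq", "fun": lambda w: c - np.sum(np.abs(w))}]
    res = minimize(lambda w: w @ cov @ w, w0, method="SLSQP", constraints=cons)
    return res.x
```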

  15. Randomization in clinical trials in orthodontics: its significance in research design and methods to achieve it.

    PubMed

    Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2011-12-01

    Randomization is a key step in reducing selection bias during the treatment allocation phase of randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation, allocation concealment, and the actual methodology for implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology and presents the main randomization schemes applicable to clinical trials in orthodontics.
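
    As an illustration of one restricted scheme mentioned above, the sketch below generates a permuted-block randomization list; the arm labels, block size and seed are illustrative, and in practice the list would be prepared independently of recruitment and concealed from investigators.

```python
# Sketch of permuted-block randomization list generation.
import random

def permuted_block_list(n_subjects, arms=("A", "B"), block_size=4, seed=2024):
    """Allocation list built from randomly permuted blocks so that group
    sizes stay balanced throughout recruitment."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    block = list(arms) * (block_size // len(arms))
    allocation = []
    while len(allocation) < n_subjects:
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]
```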

  16. Which Approach Is More Effective in the Selection of Plants with Antimicrobial Activity?

    PubMed Central

    Silva, Ana Carolina Oliveira; Santana, Elidiane Fonseca; Saraiva, Antonio Marcos; Coutinho, Felipe Neves; Castro, Ricardo Henrique Acre; Pisciottano, Maria Nelly Caetano; Amorim, Elba Lúcia Cavalcanti; Albuquerque, Ulysses Paulino

    2013-01-01

    The development of the present study was based on selections using random, direct ethnopharmacological, and indirect ethnopharmacological approaches, aiming to evaluate which method is the best for bioprospecting new antimicrobial plant drugs. A crude extract of 53 species of herbaceous plants collected in the semiarid region of Northeast Brazil was tested against 11 microorganisms. Well-agar diffusion and minimum inhibitory concentration (MIC) techniques were used. Ten extracts from direct, six from random, and three from indirect ethnopharmacological selections exhibited activities that ranged from weak to very active against the organisms tested. The strain most susceptible to the evaluated extracts was Staphylococcus aureus. The MIC analysis revealed the best result for the direct ethnopharmacological approach, considering that some species yielded extracts classified as active or moderately active (MICs between 250 and 1000 µg/mL). Furthermore, one species from this approach inhibited the growth of the three Candida strains. Thus, it was concluded that the direct ethnopharmacological approach is the most effective when selecting species for bioprospecting new plant drugs with antimicrobial activities. PMID:23878595

  17. Identification of chondrocyte-binding peptides by phage display.

    PubMed

    Cheung, Crystal S F; Lui, Julian C; Baron, Jeffrey

    2013-07-01

    As an initial step toward targeting cartilage tissue for potential therapeutic applications, we sought cartilage-binding peptides using phage display, a powerful technology for selection of peptides that bind to molecules of interest. A library of phage displaying random 12-amino acid peptides was iteratively incubated with cultured chondrocytes to select phage that bind cartilage. The resulting phage clones demonstrated increased affinity to chondrocytes by ELISA, when compared to a wild-type, insertless phage. Furthermore, the selected phage showed little preferential binding to other cell types, including primary skin fibroblast, myocyte and hepatocyte cultures, suggesting a tissue-specific interaction. Immunohistochemical staining revealed that the selected phage bound chondrocytes themselves and the surrounding extracellular matrix. FITC-tagged peptides were synthesized based on the sequence of cartilage-binding phage clones. These peptides, but not a random peptide, bound cultured chondrocytes, and extracelluar matrix. In conclusion, using phage display, we identified peptide sequences that specifically target chondrocytes. We anticipate that such peptides may be coupled to therapeutic molecules to provide targeted treatment for cartilage disorders. Copyright © 2013 Orthopaedic Research Society.

  18. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods, and in particular random forests, are promising approaches for prediction based on high-dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
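
    The following is a minimal sketch of the shadow-feature idea on which Boruta is built (a single screening round, not the full iterative Boruta or Vita algorithms): permuted copies of all predictors are appended, a forest is grown, and real features are kept only if their importance exceeds the best shadow importance. Parameters are illustrative.

```python
# One round of Boruta-style shadow-feature screening.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_screen(X, y, n_estimators=500, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    # Shadow copies: every column permuted independently, destroying any
    # association with the outcome.
    X_shadow = np.column_stack([rng.permutation(col) for col in X.T])
    forest = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    forest.fit(np.hstack([X, X_shadow]), y)
    imp = forest.feature_importances_
    real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
    return np.where(real > shadow.max())[0]      # indices of retained features
```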

  19. Population genetics of polymorphism and divergence for diploid selection models with arbitrary dominance.

    PubMed

    Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D

    2004-09-01

    We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.

  20. GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi

    2012-04-01

    Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake-triggered landslide susceptibility mapping for a section of the Jianjiang River watershed using Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative, though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. Group 3, with 5000 randomly selected points in the landslide polygons and 5000 randomly selected points along stable slopes, gave the best results, with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples and the negative training samples of 3147 randomly selected points in regions of stable slope (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake-triggered landslide susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
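
    A minimal sketch of the core modeling step (leaving out all GIS preprocessing, the alternative kernels and the validation against landslide inventories) could train an RBF-kernel SVM on positive and negative sample points described by the six controlling parameters and return a susceptibility probability per grid cell; the array names are hypothetical.

```python
# Sketch of the SVM step: RBF-kernel classifier on landslide (1) versus
# stable-slope (0) points, each described by the controlling parameters.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def landslide_susceptibility(X_pos, X_neg, X_grid):
    X = np.vstack([X_pos, X_neg])
    y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X, y)
    return model.predict_proba(X_grid)[:, 1]     # susceptibility per grid cell
```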

  1. Optimized probability sampling of study sites to improve generalizability in a multisite intervention trial.

    PubMed

    Kraschnewski, Jennifer L; Keyserling, Thomas C; Bangdiwala, Shrikant I; Gizlice, Ziya; Garcia, Beverly A; Johnston, Larry F; Gustafson, Alison; Petrovic, Lindsay; Glasgow, Russell E; Samuel-Hodge, Carmen D

    2010-01-01

    Studies of type 2 translation, the adaption of evidence-based interventions to real-world settings, should include representative study sites and staff to improve external validity. Sites for such studies are, however, often selected by convenience sampling, which limits generalizability. We used an optimized probability sampling protocol to select an unbiased, representative sample of study sites to prepare for a randomized trial of a weight loss intervention. We invited North Carolina health departments within 200 miles of the research center to participate (N = 81). Of the 43 health departments that were eligible, 30 were interested in participating. To select a representative and feasible sample of 6 health departments that met inclusion criteria, we generated all combinations of 6 from the 30 health departments that were eligible and interested. From the subset of combinations that met inclusion criteria, we selected 1 at random. Of 593,775 possible combinations of 6 counties, 15,177 (3%) met inclusion criteria. Sites in the selected subset were similar to all eligible sites in terms of health department characteristics and county demographics. Optimized probability sampling improved generalizability by ensuring an unbiased and representative sample of study sites.
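
    The protocol's selection step can be sketched directly: enumerate every 6-site combination of the interested sites, keep the combinations that satisfy the inclusion criteria, and draw one uniformly at random. With 30 sites this is C(30, 6) = 593,775 combinations, small enough to enumerate; the `meets_criteria` check below is a placeholder for the study's feasibility rules.

```python
# Sketch of optimized probability sampling of a site subset.
import random
from itertools import combinations

def sample_site_subset(sites, k=6, meets_criteria=lambda subset: True, seed=1):
    eligible = [c for c in combinations(sites, k) if meets_criteria(c)]
    rng = random.Random(seed)
    return rng.choice(eligible), len(eligible)   # chosen subset, candidate count
```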

  2. True randomness from an incoherent source

    NASA Astrophysics Data System (ADS)

    Qi, Bing

    2017-11-01

    Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
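
    A minimal sketch of the Toeplitz-hashing step (independent of the optical front end, with illustrative block sizes) multiplies the raw bit vector by a binary Toeplitz matrix, defined by a short random seed, over GF(2) to produce the shorter, nearly uniform output.

```python
# Sketch of Toeplitz hashing over GF(2).
import numpy as np

def toeplitz_extract(raw_bits, seed_bits, n_out):
    """Compress raw_bits to n_out nearly uniform bits using a binary
    Toeplitz matrix defined by n_out + len(raw_bits) - 1 seed bits."""
    raw = np.asarray(raw_bits, dtype=np.int64)
    seed = np.asarray(seed_bits, dtype=np.int64)
    n_in = raw.size
    assert seed.size == n_out + n_in - 1
    # T[i, j] = seed[i - j + n_in - 1], constant along each diagonal.
    idx = np.arange(n_out)[:, None] - np.arange(n_in)[None, :] + n_in - 1
    return (seed[idx] @ raw) % 2
```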

  3. Vortex-Core Reversal Dynamics: Towards Vortex Random Access Memory

    NASA Astrophysics Data System (ADS)

    Kim, Sang-Koog

    2011-03-01

    An energy-efficient, ultrahigh-density, ultrafast, and nonvolatile solid-state universal memory is a long-held dream in the field of information-storage technology. The magnetic random access memory (MRAM) along with a spin-transfer-torque switching mechanism is a strong candidate-means of realizing that dream, given its nonvolatility, infinite endurance, and fast random access. Magnetic vortices in patterned soft magnetic dots promise ground-breaking applications in information-storage devices, owing to the very stable twofold ground states of either their upward or downward core magnetization orientation and plausible core switching by in-plane alternating magnetic fields or spin-polarized currents. However, two technologically most important but very challenging issues --- low-power recording and reliable selection of each memory cell with already existing cross-point architectures --- have not yet been resolved for the basic operations in information storage, that is, writing (recording) and readout. Here, we experimentally demonstrate a magnetic vortex random access memory (VRAM) in the basic cross-point architecture. This unique VRAM offers reliable cell selection and low-power-consumption control of switching of out-of-plane core magnetizations using specially designed rotating magnetic fields generated by two orthogonal and unipolar Gaussian-pulse currents along with optimized pulse width and time delay. Our achievement of a new device based on a new material, that is, a medium composed of patterned vortex-state disks, together with the new physics on ultrafast vortex-core switching dynamics, can stimulate further fruitful research on MRAMs that are based on vortex-state dot arrays.

  4. Results from six generations of selection for intramuscular fat in Duroc swine using real-time ultrasound. II. Genetic parameters and trends.

    PubMed

    Schwab, C R; Baas, T J; Stalder, K J

    2010-01-01

    Design of breeding programs requires knowledge of variance components that exist for traits included in specific breeding goals and the genetic relationships that exist among traits of economic importance. A study was conducted to evaluate direct and correlated genetic responses to selection for intramuscular fat (IMF) and to estimate genetic parameters for economically important traits in Duroc swine. Forty gilts were purchased from US breeders and randomly mated for 2 generations to boars available in regional boar studs to develop a base population of 56 litters. Littermate pairs of gilts from this population were randomly assigned to a select line (SL) or control line (CL) and mated to the same boar to establish genetic ties between lines. In the SL, the top 10 boars and 75 gilts were selected based on IMF EBV obtained from a bivariate animal model that included IMF evaluated on the carcass and IMF predicted via ultrasound. One boar from each sire family and 50 to 60 gilts representing all sire families were randomly selected to maintain the CL. Carcass and ultrasound IMF were both moderately heritable (0.31 and 0.38, respectively). Moderate to high genetic relationships were estimated among carcass backfat and meat quality measures of IMF, Instron tenderness, and objective loin muscle color. Based on estimates obtained in this study, more desirable genetic merit for pH is associated with greater genetic value for loin color, tenderness, and sensory characteristics. Intramuscular fat measures obtained on the carcass and predicted using ultrasound technology were highly correlated (r(g) = 0.86 from a 12-trait analysis; r(g) = 0.90 from a 5-trait analysis). Estimated genetic relationships among IMF measures and other traits evaluated were generally consistent. Intramuscular fat measures were also genetically associated with Instron tenderness and flavor score in a desirable direction. Direct genetic response in IMF measures observed in the SL corresponded to a significant decrease in EBV for carcass loin muscle area (-0.90 cm(2) per generation) and an increase in carcass backfat EBV (0.98 mm per generation). Selection for IMF has led to more desirable EBV for objective tenderness and has had an adverse effect on additive genetic merit for objective loin color.

  5. The statistics of Pearce element diagrams and the Chayes closure problem

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1988-05-01

    Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near-zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.

  6. On the information content of hydrological signatures and their relationship to catchment attributes

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Clark, Martyn P.; Prieto, Cristina; Newman, Andrew J.; Mizukami, Naoki; Nearing, Grey; Le Vine, Nataliya

    2017-04-01

    Hydrological signatures, which are indices characterizing hydrologic behavior, are increasingly used for the evaluation, calibration and selection of hydrological models. Their key advantage is to provide more direct insights into specific hydrological processes than aggregated metrics (e.g., the Nash-Sutcliffe efficiency). A plethora of signatures now exists, which enable characterizing a variety of hydrograph features, but also makes the selection of signatures for new studies challenging. Here we propose that the selection of signatures should be based on their information content, which we estimated using several approaches, all leading to similar conclusions. To explore the relationship between hydrological signatures and the landscape, we extended a previously published data set of hydrometeorological time series for 671 catchments in the contiguous United States, by characterizing the climatic conditions, topography, soil, vegetation and stream network of each catchment. This new catchment attributes data set will soon be in open access, and we are looking forward to introducing it to the community. We used this data set in a data-learning algorithm (random forests) to explore whether hydrological signatures could be inferred from catchment attributes alone. We find that some signatures can be predicted remarkably well by random forests and, interestingly, the same signatures are well captured when simulating discharge using a conceptual hydrological model. We discuss what this result reveals about our understanding of hydrological processes shaping hydrological signatures. We also identify which catchment attributes exert the strongest control on catchment behavior, in particular during extreme hydrological events. Overall, climatic attributes have the most significant influence, and strongly condition how well hydrological signatures can be predicted by random forests and simulated by the hydrological model. In contrast, soil characteristics at the catchment scale are not found to be significant predictors by random forests, which raises questions on how to best use soil data for hydrological modeling, for instance for parameter estimation. We finally demonstrate that signatures with high spatial variability are poorly captured by random forests and model simulations, which makes their regionalization delicate. We conclude with a ranking of signatures based on their information content, and propose that the signatures with high information content are best suited for model calibration, model selection and understanding hydrologic similarity.

  7. [Plaque segmentation of intracoronary optical coherence tomography images based on K-means and improved random walk algorithm].

    PubMed

    Wang, Guanglei; Wang, Pengyu; Han, Yechen; Liu, Xiuling; Li, Yan; Lu, Qian

    2017-06-01

    In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology at home and abroad. The segmentation of plaque regions in coronary OCT images has great significance for vulnerable plaque recognition and research. In this paper, a new algorithm based on K-means clustering and an improved random walk is proposed, and semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pool is achieved. The weight function of the random walk is improved: the distance between pixel edges in the image and the seed points is added to the definition of the weight function, which increases the weak edge weights and prevents over-segmentation. Based on the above methods, OCT images of 9 patients with coronary atherosclerosis were selected for plaque segmentation. Comparison of the doctors' manual segmentation results with this method showed that the method has good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.
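
    A minimal sketch of the kind of modified edge weight described above (an intensity term combined with a seed-distance term) is given below; the functional form and the parameters beta and gamma are assumptions for illustration, not the authors' exact definition.

```python
# Sketch of a random-walk edge weight that combines the usual intensity
# difference with the distance to the nearest seed point, in the spirit of the
# improvement described above. The additive form and the parameters are
# illustrative assumptions only.
import numpy as np

def edge_weight(g_i, g_j, pos_j, seed_positions, beta=90.0, gamma=0.05):
    """Weight of the graph edge between neighbouring pixels i and j."""
    intensity_term = np.exp(-beta * (float(g_i) - float(g_j)) ** 2)
    # Distance from pixel j to the closest seed point.
    d_seed = min(np.hypot(pos_j[0] - s[0], pos_j[1] - s[1]) for s in seed_positions)
    seed_term = np.exp(-gamma * d_seed)
    # The seed term boosts weights near the seeds, so weak edges close to a
    # seed are less likely to block the walker and cause over-segmentation.
    return intensity_term + seed_term

# Example: two neighbouring pixels with similar intensity, one seed at (10, 10).
print(edge_weight(0.42, 0.45, pos_j=(12, 11), seed_positions=[(10, 10)]))
```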

  8. Effects of multiple spreaders in community networks

    NASA Astrophysics Data System (ADS)

    Hu, Zhao-Long; Ren, Zhuo-Ming; Yang, Guang-Yong; Liu, Jian-Guo

    2014-12-01

    Human contact networks exhibit community structure. Understanding how such community structure affects epidemic spreading could provide insights for preventing the spreading of epidemics between communities. In this paper, we explore the spreading of multiple spreaders in community networks. A network based on the clustering preferential mechanism is evolved, whose communities are detected by the Girvan-Newman (GN) algorithm. We investigate the spreading effectiveness by selecting the nodes as spreaders in the following ways: nodes with the largest degree in each community (community hubs), the same number of nodes with the largest degree from the global network (global large-degree), and one randomly selected node within each community (community random). The experimental results on the SIR model show that the spreading effectiveness based on the global large-degree and community hubs methods is the same in the early stage of the infection, while the community random method is the worst. However, when the infection rate exceeds the critical value, the global large-degree method exhibits the worst spreading effectiveness. Furthermore, the discrepancy of effectiveness for the three methods decreases as the infection rate increases. Therefore, we should immunize the hubs in each community rather than the hubs in the global network to prevent the outbreak of epidemics.

  9. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences

    PubMed Central

    2014-01-01

    Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
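
    The on-the-fly P-value computation described above can be sketched as follows; the mean and standard deviation below are placeholders standing in for the values that would be looked up or interpolated from the pre-computed MFE sets for the candidate's sequence composition.

```python
# Sketch: using a pre-computed (or interpolated) normal MFE distribution for
# a given sequence composition to obtain a P-value for a candidate pre-miRNA.
# The mu/sd values are hypothetical placeholders.
from scipy.stats import norm

def mfe_p_value(mfe_candidate, mu_random, sd_random):
    """P(random sequence of same composition has an MFE <= the candidate's MFE)."""
    z = (mfe_candidate - mu_random) / sd_random
    return norm.cdf(z)  # lower tail: more negative MFE = more stable structure

# Hypothetical example: candidate hairpin MFE of -38 kcal/mol, randomized
# sequences of the same length/composition with mean -24 and sd 4.5.
p = mfe_p_value(-38.0, mu_random=-24.0, sd_random=4.5)
print(f"MFE-based P-value: {p:.2e}")
```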

  10. How does epistasis influence the response to selection?

    PubMed Central

    Barton, N H

    2017-01-01

    Much of quantitative genetics is based on the ‘infinitesimal model', under which selection has a negligible effect on the genetic variance. This is typically justified by assuming a very large number of loci with additive effects. However, it applies even when genes interact, provided that the number of loci is large enough that selection on each of them is weak relative to random drift. In the long term, directional selection will change allele frequencies, but even then, the effects of epistasis on the ultimate change in trait mean due to selection may be modest. Stabilising selection can maintain many traits close to their optima, even when the underlying alleles are weakly selected. However, the number of traits that can be optimised is apparently limited to ~4Ne by the ‘drift load', and this is hard to reconcile with the apparent complexity of many organisms. Just as for the mutation load, this limit can be evaded by a particular form of negative epistasis. A more robust limit is set by the variance in reproductive success. This suggests that selection accumulates information most efficiently in the infinitesimal regime, when selection on individual alleles is weak, and comparable with random drift. A review of evidence on selection strength suggests that although most variance in fitness may be because of alleles with large Nes, substantial amounts of adaptation may be because of alleles in the infinitesimal regime, in which epistasis has modest effects. PMID:27901509

  11. How does epistasis influence the response to selection?

    PubMed

    Barton, N H

    2017-01-01

    Much of quantitative genetics is based on the 'infinitesimal model', under which selection has a negligible effect on the genetic variance. This is typically justified by assuming a very large number of loci with additive effects. However, it applies even when genes interact, provided that the number of loci is large enough that selection on each of them is weak relative to random drift. In the long term, directional selection will change allele frequencies, but even then, the effects of epistasis on the ultimate change in trait mean due to selection may be modest. Stabilising selection can maintain many traits close to their optima, even when the underlying alleles are weakly selected. However, the number of traits that can be optimised is apparently limited to ~4Ne by the 'drift load', and this is hard to reconcile with the apparent complexity of many organisms. Just as for the mutation load, this limit can be evaded by a particular form of negative epistasis. A more robust limit is set by the variance in reproductive success. This suggests that selection accumulates information most efficiently in the infinitesimal regime, when selection on individual alleles is weak, and comparable with random drift. A review of evidence on selection strength suggests that although most variance in fitness may be because of alleles with large Nes, substantial amounts of adaptation may be because of alleles in the infinitesimal regime, in which epistasis has modest effects.

  12. Does Self-Selection Affect Samples’ Representativeness in Online Surveys? An Investigation in Online Video Game Research

    PubMed Central

    van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-01-01

    Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various game scores, reported on WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of samples from online surveys is warranted. PMID:25001007

  13. The estimation of selection coefficients in Afrikaners: Huntington disease, porphyria variegata, and lipoid proteinosis.

    PubMed Central

    Stine, O C; Smith, K D

    1990-01-01

    The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive. PMID:2137963

  14. The estimation of selection coefficients in Afrikaners: Huntington disease, porphyria variegata, and lipoid proteinosis.

    PubMed

    Stine, O C; Smith, K D

    1990-03-01

    The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive.

  15. Multi-Sensory Intervention Observational Research

    ERIC Educational Resources Information Center

    Thompson, Carla J.

    2011-01-01

    An observational research study based on sensory integration theory was conducted to examine the observed impact of student selected multi-sensory experiences within a multi-sensory intervention center relative to the sustained focus levels of students with special needs. A stratified random sample of 50 students with severe developmental…

  16. Florida Residents' Preferred Approach to Sexuality Education

    ERIC Educational Resources Information Center

    Howard-Barr, Elissa M.; Moore, Michele Johnson

    2007-01-01

    Although there is widespread support for sexuality education, whether to use an abstinence-only or comprehensive approach is hotly debated. This study assessed Florida residents preferred approach to school-based sexuality education. The 641 respondents were selected by random digit dialing, using methods to ensure ethnic and geographic…

  17. Summer Staff Salaries Studied.

    ERIC Educational Resources Information Center

    Henderson, Karla; And Others

    1988-01-01

    Reports 1987 camp staff salaries, based on survey of 500 randomly selected camps. Analyzes average weekly and seasonal salaries according to staff position and number of camps with position. Staff salaries are consistent nationally with private independent camps paying higher salaries for some positions than agency or church camps. (CS)

  18. A comparative study of restricted randomization procedures for multiarm trials with equal or unequal treatment allocation ratios.

    PubMed

    Ryeznik, Yevgen; Sverdlov, Oleksandr

    2018-06-04

    Randomization designs for multiarm clinical trials are increasingly used in practice, especially in phase II dose-ranging studies. Many new methods have been proposed in the literature; however, there is a lack of systematic, head-to-head comparison of the competing designs. In this paper, we systematically investigate statistical properties of various restricted randomization procedures for multiarm trials with fixed and possibly unequal allocation ratios. The design operating characteristics include measures of allocation balance, randomness of treatment assignments, variations in the allocation ratio, and statistical characteristics such as type I error rate and power. The results from the current paper should help clinical investigators select an appropriate randomization procedure for their clinical trial. We also provide a web-based R shiny application that can be used to reproduce all results in this paper and run simulations under additional user-defined experimental scenarios. Copyright © 2018 John Wiley & Sons, Ltd.
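
    For readers unfamiliar with the class of procedures being compared, the sketch below shows one simple member of it, permuted block randomization with an unequal allocation ratio; the arm names and the 2:2:1 ratio are illustrative and not taken from the paper.

```python
# Sketch: permuted block randomization for a multiarm trial with a fixed,
# possibly unequal allocation ratio. Arm names and the 2:2:1 ratio are
# illustrative placeholders.
import random

def permuted_block_sequence(n_subjects, ratio, seed=2018):
    """Generate treatment assignments in blocks that respect `ratio`."""
    rng = random.Random(seed)
    block = [arm for arm, k in ratio.items() for _ in range(k)]
    assignments = []
    while len(assignments) < n_subjects:
        rng.shuffle(block)           # permute each block independently
        assignments.extend(block)
    return assignments[:n_subjects]

ratio = {"low dose": 2, "high dose": 2, "placebo": 1}  # 2:2:1 allocation
seq = permuted_block_sequence(25, ratio)
print(seq[:10])
print({arm: seq.count(arm) for arm in ratio})  # balance after 25 subjects
```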

  19. Evaluation of Three Adolescent Sexual Health Programs in Ha Noi and Khanh Hoa Province, Vietnam

    PubMed Central

    Pham, Van; Nguyen, Hoang; Tho, Le Huu; Minh, Truong Tan; Lerdboon, Porntip; Riel, Rosemary; Green, Mackenzie S.; Kaljee, Linda M.

    2012-01-01

    With an increase in sexual activity among young adults in Vietnam and associated risks, there is a need for evidence-based sexual health interventions. This evaluation of three sexual health programs based on the Protection Motivation Theory (PMT) was conducted in 12 communes in Ha Noi, Nha Trang City, and Ninh Hoa District. Inclusion criteria included unmarried youth 15–20 years residing in selected communes. Communes were randomly allocated to an intervention, and participants were randomly selected within each commune. The intervention programs included Vietnamese Focus on Kids (VFOK), the gender-based program Exploring the World of Adolescents (EWA), and EWA plus parental and health provider education (EWA+). Programs were delivered over a ten-week period in the communities by locally trained facilitators. The gender-based EWA program with parental involvement (EWA+) compared to VFOK showed significantly greater increase in knowledge. EWA+ in comparison to VFOK also showed significant decrease at immediate postintervention for intention to have sex. Sustained changes are observed in all three interventions for self-efficacy condom use, self-efficacy abstinence, response efficacy for condoms, extrinsic rewards, and perceived vulnerability for HIV. These findings suggest that theory-based community programs contribute to sustained changes in knowledge and attitudes regarding sexual risk among Vietnamese adolescents. PMID:22666565

  20. Evaluation of three adolescent sexual health programs in ha noi and khanh hoa province, Vietnam.

    PubMed

    Pham, Van; Nguyen, Hoang; Tho, Le Huu; Minh, Truong Tan; Lerdboon, Porntip; Riel, Rosemary; Green, Mackenzie S; Kaljee, Linda M

    2012-01-01

    With an increase in sexual activity among young adults in Vietnam and associated risks, there is a need for evidence-based sexual health interventions. This evaluation of three sexual health programs based on the Protection Motivation Theory (PMT) was conducted in 12 communes in Ha Noi, Nha Trang City, and Ninh Hoa District. Inclusion criteria included unmarried youth 15-20 years residing in selected communes. Communes were randomly allocated to an intervention, and participants were randomly selected within each commune. The intervention programs included Vietnamese Focus on Kids (VFOK), the gender-based program Exploring the World of Adolescents (EWA), and EWA plus parental and health provider education (EWA+). Programs were delivered over a ten-week period in the communities by locally trained facilitators. The gender-based EWA program with parental involvement (EWA+) compared to VFOK showed significantly greater increase in knowledge. EWA+ in comparison to VFOK also showed significant decrease at immediate postintervention for intention to have sex. Sustained changes are observed in all three interventions for self-efficacy condom use, self-efficacy abstinence, response efficacy for condoms, extrinsic rewards, and perceived vulnerability for HIV. These findings suggest that theory-based community programs contribute to sustained changes in knowledge and attitudes regarding sexual risk among Vietnamese adolescents.

  1. A randomized controlled trial investigating the use of a predictive nomogram for the selection of the FSH starting dose in IVF/ICSI cycles.

    PubMed

    Allegra, Adolfo; Marino, Angelo; Volpes, Aldo; Coffaro, Francesco; Scaglione, Piero; Gullo, Salvatore; La Marca, Antonio

    2017-04-01

    The number of oocytes retrieved is a relevant intermediate outcome in women undergoing IVF/intracytoplasmic sperm injection (ICSI). This trial compared the efficiency of the selection of the FSH starting dose according to a nomogram based on multiple biomarkers (age, day 3 FSH, anti-Müllerian hormone) versus an age-based strategy. The primary outcome measure was the proportion of women with an optimal number of retrieved oocytes defined as 8-14. At their first IVF/ICSI cycle, 191 patients underwent a long gonadotrophin-releasing hormone agonist protocol and were randomized to receive a starting dose of recombinant (human) FSH, based on their age (150 IU if ≤35 years, 225 IU if >35 years) or based on the nomogram. Optimal response was observed in 58/92 patients (63%) in the nomogram group and in 42/99 (42%) in the control group (+21%, 95% CI = 0.07 to 0.35, P = 0.0037). No significant differences were found in the clinical pregnancy rate or the number of embryos cryopreserved per patient. The study showed that the FSH starting dose selected according to ovarian reserve is associated with an increase in the proportion of patients with an optimal response: large trials are recommended to investigate any possible effect on the live-birth rate. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  2. Prioritizing Conservation of Ungulate Calving Resources in Multiple-Use Landscapes

    PubMed Central

    Dzialak, Matthew R.; Harju, Seth M.; Osborn, Robert G.; Wondzell, John J.; Hayden-Wing, Larry D.; Winstead, Jeffrey B.; Webb, Stephen L.

    2011-01-01

    Background Conserving animal populations in places where human activity is increasing is an ongoing challenge in many parts of the world. We investigated how human activity interacted with maternal status and individual variation in behavior to affect reliability of spatially-explicit models intended to guide conservation of critical ungulate calving resources. We studied Rocky Mountain elk (Cervus elaphus) that occupy a region where 2900 natural gas wells have been drilled. Methodology/Principal Findings We present novel applications of generalized additive modeling to predict maternal status based on movement, and of random-effects resource selection models to provide population and individual-based inference on the effects of maternal status and human activity. We used a 2×2 factorial design (treatment vs. control) that included elk that were either parturient or non-parturient and in areas either with or without industrial development. Generalized additive models predicted maternal status (parturiency) correctly 93% of the time based on movement. Human activity played a larger role than maternal status in shaping resource use; elk showed strong spatiotemporal patterns of selection or avoidance and marked individual variation in developed areas, but no such pattern in undeveloped areas. This difference had direct consequences for landscape-level conservation planning. When relative probability of use was calculated across the study area, there was disparity throughout 72–88% of the landscape in terms of where conservation intervention should be prioritized depending on whether models were based on behavior in developed areas or undeveloped areas. Model validation showed that models based on behavior in developed areas had poor predictive accuracy, whereas the model based on behavior in undeveloped areas had high predictive accuracy. Conclusions/Significance By directly testing for differences between developed and undeveloped areas, and by modeling resource selection in a random-effects framework that provided individual-based inference, we conclude that: 1) amplified selection or avoidance behavior and individual variation, as responses to increasing human activity, complicate conservation planning in multiple-use landscapes, and 2) resource selection behavior in places where human activity is predictable or less dynamic may provide a more reliable basis from which to prioritize conservation action. PMID:21297866

  3. Determination of the Optimal Chromosomal Location(s) for a DNA Element in Escherichia coli Using a Novel Transposon-mediated Approach.

    PubMed

    Frimodt-Møller, Jakob; Charbon, Godefroid; Krogfelt, Karen A; Løbner-Olesen, Anders

    2017-09-11

    The optimal chromosomal position(s) of a given DNA element was/were determined by transposon-mediated random insertion followed by fitness selection. In bacteria, the impact of the genetic context on the function of a genetic element can be difficult to assess. Several mechanisms, including topological effects, transcriptional interference from neighboring genes, and/or replication-associated gene dosage, may affect the function of a given genetic element. Here, we describe a method that permits the random integration of a DNA element into the chromosome of Escherichia coli and the selection of the most favorable locations using a simple growth competition experiment. The method takes advantage of a well-described transposon-based system of random insertion, coupled with a selection of the fittest clone(s) by growth advantage, a procedure that is easily adjustable to experimental needs. The nature of the fittest clone(s) can be determined by whole-genome sequencing on a complex multi-clonal population or by easy gene walking for the rapid identification of selected clones. Here, the non-coding DNA region DARS2, which controls the initiation of chromosome replication in E. coli, was used as an example. The function of DARS2 is known to be affected by replication-associated gene dosage; the closer DARS2 gets to the origin of DNA replication, the more active it becomes. DARS2 was randomly inserted into the chromosome of a DARS2-deleted strain. The resultant clones containing individual insertions were pooled and competed against one another for hundreds of generations. Finally, the fittest clones were characterized and found to contain DARS2 inserted in close proximity to the original DARS2 location.

  4. Random Forest Application for NEXRAD Radar Data Quality Control

    NASA Astrophysics Data System (ADS)

    Keem, M.; Seo, B. C.; Krajewski, W. F.

    2017-12-01

    Identification and elimination of non-meteorological radar echoes (e.g., returns from ground, wind turbines, and biological targets) are the basic data quality control steps before radar data use in quantitative applications (e.g., precipitation estimation). Although WSR-88Ds' recent upgrade to dual-polarization has enhanced this quality control and echo classification, there are still challenges in detecting some non-meteorological echoes that show precipitation-like characteristics (e.g., wind turbine or anomalous propagation clutter embedded in rain). With this in mind, a new quality control method using Random Forest is proposed in this study. This classification algorithm is known to produce reliable results with less uncertainty. The method introduces randomness into sampling and feature selection and integrates the resulting multiple decision trees. The multidimensional structure of the trees can characterize the statistical interactions of the multiple features involved in complex situations. The authors explore the performance of the Random Forest method for NEXRAD radar data quality control. Training datasets are selected using several clear cases of precipitation and non-precipitation (but with some non-meteorological echoes). The model is structured using available candidate features (from the NEXRAD data) such as horizontal reflectivity, differential reflectivity, differential phase shift, copolar correlation coefficient, and their horizontal textures (e.g., local standard deviation). The influence of each feature on classification results is quantified by variable importance measures that are automatically estimated by the Random Forest algorithm. Therefore, the number and types of features in the final forest can be examined based on the classification accuracy. The authors demonstrate the capability of the proposed approach using several cases ranging from distinct to complex rain/no-rain events and compare the performance with existing algorithms (e.g., MRMS). They also discuss operational feasibility based on the observed strengths and weaknesses of the method.
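
    A minimal sketch of the classification step described above is given below: a random forest trained on dual-polarization features, with variable importances reported afterwards. The feature set, the synthetic data and the labels are placeholders, not the operational training cases described in the abstract.

```python
# Sketch: a random forest separating precipitation from non-meteorological
# echoes using dual-polarization variables, in the spirit of the approach
# described above. Feature names follow common radar moments; the training
# data here are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000

# Hypothetical gate-by-gate features: reflectivity (dBZ), differential
# reflectivity (dB), copolar correlation coefficient, and a local texture.
X = pd.DataFrame({
    "reflectivity": rng.normal(25, 10, n),
    "zdr":          rng.normal(0.8, 1.2, n),
    "rho_hv":       rng.uniform(0.6, 1.0, n),
    "zdr_texture":  rng.exponential(0.5, n),
})
# Synthetic labels: "rain" gates tend to have high rho_hv and low ZDR texture.
y = ((X["rho_hv"] > 0.95) & (X["zdr_texture"] < 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))
# Variable importance, as used above to decide which features to keep.
for name, imp in zip(X.columns, clf.feature_importances_):
    print(f"{name:12s} importance = {imp:.3f}")
```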

  5. A Meta-Analysis of Depressive Symptom Outcomes in Randomized, Controlled Trials for PTSD.

    PubMed

    Ronconi, Julia McDougal; Shiner, Brian; Watts, Bradley V

    2015-07-01

    Posttraumatic stress disorder (PTSD) often co-occurs with depression. Current PTSD practice guidelines lack specific guidance for clinicians regarding the treatment of depressive symptoms. We conducted a meta-analysis of all randomized, placebo-controlled trials for PTSD therapies focusing on depression outcomes to inform clinicians about effective treatment options for depressive symptoms associated with PTSD. We searched literature databases for randomized, controlled clinical trials of any treatment for PTSD published between 1980 and 2013. We selected articles in which all subjects were adults with a diagnosis of PTSD based on the Diagnostic and Statistical Manual of Mental Disorders criteria, and valid PTSD and depressive symptom measures were reported. The sample consisted of 116 treatment comparisons drawn from 93 manuscripts. Evidence-based PTSD treatments are effective for comorbid depressive symptoms. Existing PTSD treatments work as well for comorbid depressive symptoms as they do for PTSD symptoms.

  6. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of the sensor can be effectively compensated with prediction. Aiming at the large random drift error of MEMS (Micro Electro Mechanical System) gyroscopes, an improved learning algorithm for the Radial Basis Function (RBF) Neural Network (NN), based on K-means clustering and Orthogonal Least-Squares (OLS), is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then determines candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007°/s and a prediction time of 2.4169e-006 s.
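
    The core of a K-means-plus-RBF predictor of the kind described above can be sketched as follows; the drift series is synthetic, the network sizes are arbitrary, and the OLS-based refinement of the candidate centers is omitted, so this is an illustration of the general idea rather than the authors' algorithm.

```python
# Sketch: K-means cluster centres used as RBF centres, with output weights
# fitted by least squares, for predicting a drift time series from its own
# lagged values. Data, sizes and widths are illustrative; the OLS centre
# selection step described above is not reproduced.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic "gyroscope drift" series and a lagged-input regression setup.
t = np.arange(600)
drift = 0.002 * t + 0.05 * np.sin(0.05 * t) + rng.normal(0, 0.01, t.size)
lag = 5
X = np.column_stack([drift[i:i - lag] for i in range(lag)])  # past samples
y = drift[lag:]                                              # next sample

centres = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.mean([np.linalg.norm(c1 - c2) for c1 in centres for c2 in centres]) + 1e-9

def design(X):
    """Gaussian RBF design matrix: one column per centre."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

W, *_ = np.linalg.lstsq(design(X), y, rcond=None)  # output-layer weights
pred = design(X) @ W
print("RMSE on training series:", float(np.sqrt(np.mean((pred - y) ** 2))))
```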

  7. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and how to combine the candidate classifiers are two key issues that influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base-classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, both the convergence accuracy on the validation data and the diversity of the candidate ensemble system are considered to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, a theoretical analysis and justification of how to prune the base classifiers for classification problems is presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy and reduces the complexity of the ensemble system. PMID:27835638
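
    For readers unfamiliar with the base learner, the sketch below implements a single generic RVFL network without direct input-to-output links (random hidden weights, least-squares output weights); it does not reproduce the ARPSO-based selection and combination described above, and all sizes and data are illustrative.

```python
# Sketch of a single random vector functional link (RVFL) network: the hidden
# layer has fixed random weights and only the output weights are learned (here
# by ridge-regularized least squares). Generic illustration, not the authors'
# ARPSO-optimized ensemble.
import numpy as np

rng = np.random.default_rng(3)

def rvfl_fit(X, y_onehot, n_hidden=100, reg=1e-3, direct_links=False):
    W_in = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W_in + b)                      # random hidden features
    D = np.hstack([H, X]) if direct_links else H   # variant without direct links
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y_onehot)
    return W_in, b, beta, direct_links

def rvfl_predict(model, X):
    W_in, b, beta, direct_links = model
    H = np.tanh(X @ W_in + b)
    D = np.hstack([H, X]) if direct_links else H
    return (D @ beta).argmax(axis=1)

# Toy two-class problem.
X = rng.normal(size=(400, 6))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
Y = np.eye(2)[y]
model = rvfl_fit(X, Y)
print("training accuracy:", (rvfl_predict(model, X) == y).mean())
```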

  8. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    PubMed

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and how to combine the candidate classifiers are two key issues that influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base-classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, both the convergence accuracy on the validation data and the diversity of the candidate ensemble system are considered to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, a theoretical analysis and justification of how to prune the base classifiers for classification problems is presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy and reduces the complexity of the ensemble system.

  9. Selective epidemic vaccination under the performant routing algorithms

    NASA Astrophysics Data System (ADS)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the shortest-path-based algorithm does. In this work, we studied virus spreading in a complex network using the efficient path and the global dynamic routing algorithms, as compared to the shortest path strategy. Some previous studies have tried to modify the routing rules to limit the virus spreading, but at the expense of reducing the traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that the selective vaccination succeeded in eradicating the virus better than a pure random intervention for the performant routing algorithm strategies.

  10. Factual knowledge about AIDS and dating practices among high school students from selected schools.

    PubMed

    Nyachuru-Sihlangu, R H; Ndlovu, J

    1992-06-01

    Following various educational strategies by governmental and non-governmental organisations to educate youths and school teachers about HIV infection and prevention, this KABP survey was one attempt to evaluate the results. The study sample of 478 high school students was drawn from four randomly selected schools in Mashonaland and Matabeleland, including high and low density, government and mission co-educational schools. The sample was randomly selected and stratified to represent sex and grade level. The KABP self-administered questionnaire was used. The paper analyses the relationship between knowledge and dating patterns. Generally, respondents demonstrated 50% to 80% accuracy of factual knowledge. Of the 66% of Form I through IV pupils who dated, 30% preferred only sexually involved relationships and a small number considered the possibility of HIV/AIDS infection. A theoretically based tripartite coalition involving the school, the family, and health care services for education, guidance and support to promote responsible behaviour throughout childhood was suggested.

  11. Mobile access to virtual randomization for investigator-initiated trials.

    PubMed

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. Covering 88% of all randomization models that are used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
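
    The "virtual randomization" idea, computing an allocation on demand instead of storing a list, can be sketched as below; the interface, parameters and block scheme are assumptions for illustration, not the article's actual R implementation.

```python
# Sketch of virtual randomization: no allocation list is stored; the assignment
# for allocation number k is recomputed on demand from the trial's
# randomization parameters and a fixed seed, so the same request always yields
# the same answer. Parameters and interface are illustrative.
import random

def virtual_allocation(trial_seed, allocation_number, arms, block_size):
    """Return the arm for a given allocation number in a block-randomized trial."""
    rng = random.Random(trial_seed)          # reproducible stream per trial
    per_block = block_size // len(arms)
    assignments = []
    while len(assignments) <= allocation_number:
        block = [a for a in arms for _ in range(per_block)]
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[allocation_number]

# Allocation 17 of a two-arm, block-size-4 trial always resolves to the same arm.
print(virtual_allocation(trial_seed=90210, allocation_number=17,
                         arms=["active", "control"], block_size=4))
```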

  12. Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.

    PubMed

    Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan

    2018-01-12

    Missing values exist widely in mass-spectrometry (MS) based metabolomics data. Various methods have been applied for handling missing values, but the selection can significantly affect subsequent data analyses. Typically, there are three types of missing values, missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis were used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed the best for MCAR/MAR and QRILC was the favored one for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a publicly accessible web tool for the application of missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
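
    The NRMSE comparison described above can be sketched as follows for a few simple imputers; the data matrix, the MCAR masking and the chosen methods are illustrative placeholders rather than the datasets and eight methods used in the study.

```python
# Sketch: evaluating imputation methods with NRMSE under MCAR missingness.
# Synthetic "metabolite" matrix and only three simple imputers, for illustration.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
true = rng.lognormal(mean=2.0, sigma=0.5, size=(100, 40))   # samples x features

# Introduce 15% missing completely at random (MCAR).
mask = rng.random(true.shape) < 0.15
observed = true.copy()
observed[mask] = np.nan

def nrmse(imputed):
    """Root mean squared error on the masked entries, normalized by their variance."""
    err = (imputed[mask] - true[mask]) ** 2
    return np.sqrt(err.mean() / true[mask].var())

imputers = {
    "mean":   SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "kNN":    KNNImputer(n_neighbors=5),
}
for name, imp in imputers.items():
    print(f"{name:6s} NRMSE = {nrmse(imp.fit_transform(observed)):.3f}")
```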

  13. The G matrix under fluctuating correlational mutation and selection.

    PubMed

    Revell, Liam J

    2007-08-01

    Theoretical quantitative genetics provides a framework for reconstructing past selection and predicting future patterns of phenotypic differentiation. However, the usefulness of the equations of quantitative genetics for evolutionary inference relies on the evolutionary stability of the additive genetic variance-covariance matrix (G matrix). A fruitful new approach for exploring the evolutionary dynamics of G involves the use of individual-based computer simulations. Previous studies have focused on the evolution of the eigenstructure of G. An alternative approach employed in this paper uses the multivariate response-to-selection equation to evaluate the stability of G. In this approach, I measure similarity by the correlation between response-to-selection vectors due to random selection gradients. I analyze the dynamics of G under several conditions of correlational mutation and selection. As found in a previous study, the eigenstructure of G is stabilized by correlational mutation and selection. However, over broad conditions, instability of G did not result in a decreased consistency of the response to selection. I also analyze the stability of G when the correlation coefficients of correlational mutation and selection and the effective population size change through time. To my knowledge, no prior study has used computer simulations to investigate the stability of G when correlational mutation and selection fluctuate. Under these conditions, the eigenstructure of G is unstable under some simulation conditions. Different results are obtained if G matrix stability is assessed by eigenanalysis or by the response to random selection gradients. In this case, the response to selection is most consistent when certain aspects of the eigenstructure of G are least stable and vice versa.
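
    The response-based similarity measure described above can be sketched with the multivariate breeder's equation: apply random selection gradients to two G matrices and compare the predicted responses. The toy matrices below are assumptions for illustration, and similarity is computed as the vector correlation (cosine of the angle) between response vectors.

```python
# Sketch: compare two G matrices by the correlation between their predicted
# responses to random selection gradients (multivariate breeder's equation,
# delta_z = G @ beta). Toy 2x2 matrices for illustration only.
import numpy as np

rng = np.random.default_rng(11)

G1 = np.array([[1.00, 0.50],
               [0.50, 1.00]])        # ancestral G matrix (toy values)
G2 = np.array([[1.10, 0.20],
               [0.20, 0.90]])        # G matrix after some evolution (toy values)

def response_correlation(G_a, G_b, n_gradients=1000):
    corrs = []
    for _ in range(n_gradients):
        beta = rng.normal(size=G_a.shape[0])        # random selection gradient
        r_a, r_b = G_a @ beta, G_b @ beta           # predicted responses
        corrs.append(np.dot(r_a, r_b) / (np.linalg.norm(r_a) * np.linalg.norm(r_b)))
    return float(np.mean(corrs))

print("mean response-vector correlation:", round(response_correlation(G1, G2), 3))
```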

  14. Caries status in 16 year-olds with varying exposure to water fluoridation in Ireland.

    PubMed

    Mullen, J; McGaffin, J; Farvardin, N; Brightman, S; Haire, C; Freeman, R

    2012-12-01

    Most of the Republic of Ireland's public water supplies have been fluoridated since the mid-1960s while Northern Ireland has never been fluoridated, apart from some small short-lived schemes in east Ulster. This study examines dental caries status in 16 year-olds in a part of Ireland straddling fluoridated and non-fluoridated water supply areas and compares two methods of assessing the effectiveness of water fluoridation. The cross-sectional survey tested differences in caries status by two methods: 1, Estimated Fluoridation Status as used previously in national and regional studies in the Republic and in the All-Island study of 2002; 2, Percentage Lifetime Exposure, a modification of a system described by Slade in 1995 and used in Australian caries research. Adolescents were selected for the study by a two-part random sampling process. Firstly, schools were selected in each area by creating three tiers based on school size, and selecting schools randomly from each tier. Then random sampling of 16-year-olds from these schools, based on a pre-set sampling fraction for each tier of schools. With both systems of measurement, significantly lower caries levels were found in those children with the greatest exposure to fluoridated water when compared to those with the least exposure. The survey provides further evidence of the effectiveness in reducing dental caries experience up to 16 years of age. The extra intricacies involved in using the Percentage Lifetime Exposure method did not provide much more information when compared to the simpler Estimated Fluoridation Status method.

  15. Efficient encapsulation of proteins with random copolymers.

    PubMed

    Nguyen, Trung Dac; Qiao, Baofu; Olvera de la Cruz, Monica

    2018-06-12

    Membraneless organelles are aggregates of disordered proteins that form spontaneously to promote specific cellular functions in vivo. The possibility of synthesizing membraneless organelles out of cells will therefore enable fabrication of protein-based materials with functions inherent to biological matter. Since random copolymers contain various compositions and sequences of solvophobic and solvophilic groups, they are expected to function in nonbiological media similarly to a set of disordered proteins in membraneless organelles. Interestingly, the internal environment of these organelles has been noted to behave more like an organic solvent than like water. Therefore, an adsorbed layer of random copolymers that mimics the function of disordered proteins could, in principle, protect and enhance the proteins' enzymatic activity even in organic solvents, which are ideal when the products and/or the reactants have limited solubility in aqueous media. Here, we demonstrate via multiscale simulations that random copolymers efficiently incorporate proteins into different solvents with the potential to optimize their enzymatic activity. We investigate the key factors that govern the ability of random copolymers to encapsulate proteins, including the adsorption energy, copolymer average composition, and solvent selectivity. The adsorbed polymer chains have remarkably similar sequences, indicating that the proteins are able to select certain sequences that best reduce their exposure to the solvent. We also find that the protein surface coverage decreases when the fluctuation in the average distance between the protein adsorption sites increases. The results herein set the stage for computational design of random copolymers for stabilizing and delivering proteins across multiple media.

  16. An Overview of Randomization and Minimization Programs for Randomized Clinical Trials

    PubMed Central

    Saghaei, Mahmoud

    2011-01-01

    Randomization is an essential component of sound clinical trials, which prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A possible serious consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which can invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is therefore completely deterministic, that is, one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the characteristics of the next subject. To eliminate this predictability, it is necessary to include some element of randomness in the minimization algorithm. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
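
    A minimal sketch of a minimization rule with an added random element (in the spirit of the procedures reviewed above) is shown below; the factors, the two arms and the probability of following the minimizing arm are illustrative assumptions, not a specific program's algorithm.

```python
# Sketch: minimization allocation with a random element. The next subject is
# assigned to the arm that minimizes imbalance across prognostic factors, but
# that arm is followed only with probability p_best so the assignment is not
# fully predictable. Factors and arms are illustrative.
import random

rng = random.Random(0)
arms = ["A", "B"]
# counts[arm][factor][level] = number of subjects already on that arm
counts = {a: {"sex": {"M": 0, "F": 0}, "age": {"<50": 0, ">=50": 0}} for a in arms}

def minimization_assign(subject, p_best=0.8):
    """Assign `subject` (a dict of factor levels) by minimization."""
    imbalance = {}
    for arm in arms:
        # Existing subjects on this arm sharing the new subject's factor levels.
        imbalance[arm] = sum(counts[arm][f][lvl] for f, lvl in subject.items())
    best = min(arms, key=lambda a: imbalance[a])
    # Random element: follow the minimizing arm only with probability p_best.
    arm = best if rng.random() < p_best else rng.choice([a for a in arms if a != best])
    for f, lvl in subject.items():
        counts[arm][f][lvl] += 1
    return arm

print(minimization_assign({"sex": "F", "age": "<50"}))
print(minimization_assign({"sex": "F", "age": ">=50"}))
```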

  17. Kindergarten Teachers' Experience with Reporting Child Abuse in Taiwan

    ERIC Educational Resources Information Center

    Feng, Jui-Ying; Huang, Tzu-Yi; Wang, Chi-Jen

    2010-01-01

    Objective: The objectives were to examine factors associated with reporting child abuse among kindergarten teachers in Taiwan based on the Theory of Planned Behavior (TPB). Method: A stratified quota sampling technique was used to randomly select kindergarten teachers in Taiwan. The Child Abuse Intention Report Scale, which includes demographics,…

  18. Development of an RAPD-based SCAR marker for smut disease resistance in commercial sugarcane cultivars of Pakistan

    USDA-ARS?s Scientific Manuscript database

    Development of RAPD-derived Sequence Characterized Amplified Region (SCAR) marker in order to select Sporisorium scitamineum resistant and susceptible commercial cultivars of sugarcane from Pakistan was achieved. Bulked segregant and RAPD-analysis were conducted using 480 random decamers in initial ...

  19. School Climate in Middle Schools: A Cultural Perspective

    ERIC Educational Resources Information Center

    Schneider, Stephanie H.; Duran, Lauren

    2010-01-01

    In 2007-08 and 2008-09, 2,500 randomly-selected middle school students completed an annual survey on school climate and character development. In examining differences based upon grade, gender, race/ethnicity, school, and length of program participation, significant differences were found for all but length of program participation. Responses of…

  20. Factors Associated with Abnormal Eating Attitudes among Greek Adolescents

    ERIC Educational Resources Information Center

    Bilali, Aggeliki; Galanis, Petros; Velonakis, Emmanuel; Katostaras, Theofanis

    2010-01-01

    Objective: To estimate the prevalence of abnormal eating attitudes among Greek adolescents and identify possible risk factors associated with these attitudes. Design: Cross-sectional, school-based study. Setting: Six randomly selected schools in Patras, southern Greece. Participants: The study population consisted of 540 Greek students aged 13-18…

  1. E.S.T. and the Oracle.

    ERIC Educational Resources Information Center

    Richardson, Ian M.

    1990-01-01

    A possible syllabus for English for Science and Technology is suggested based upon a set of causal relations, arising from a logical description of the presuppositional rhetoric of scientific passages that underlie most semantic functions. An empirical study is reported of the semantic functions present in 52 randomly selected passages.…

  2. Nitrogen Removal by Streams and Rivers of the Upper Mississippi River Basin

    EPA Science Inventory

    Our study, based on chemistry and channel dimensions data collected at 893 randomly-selected stream and river sites in the Mississippi River basin, demonstrated the interaction of stream chemistry, stream size, and NO3-N uptake metrics across a range of stream sizes and across re...

  3. Catholic High Schools and Their Finances. 1986.

    ERIC Educational Resources Information Center

    Augenstein, John J.

    This report is based on a randomly selected and stratified sample of 208 United States Catholic high schools. The sample was stratified by governance (diocesan, parochial/interparochial, and private); five categories of enrollment; and six regions. Data are compared with an earlier study, "The Catholic High School: A National Portrait" and show…

  4. Realized gain from breeding Eucalyptus grandis in Florida

    Treesearch

    George Meskimen

    1983-01-01

    E. grandis is in the fourth generation of selection in southwest Florida. The breeding strategy combines a provenance trial, genetic base population, seedling seed orchard, and progeny test in a single plantation where all families are completely randomized in single-tree plots. That planting configuration closely predicted the magnitude of genetic...

  5. Determining the Anxiety Sensitivity Bases of Anxiety: A Study with Undergraduate Students

    ERIC Educational Resources Information Center

    Erozkan, Atilgan

    2017-01-01

    This study aims to examine the relationships between subdimensions of anxiety sensitivity and anxiety. The participants in the study were 841 undergraduate students (411 females; 430 males) randomly selected from three different faculties--Faculties of Technical Education, Education, and Sport Sciences--at Mugla Sitki Kocman University. Data…

  6. Adjustment to College in Students with ADHD

    ERIC Educational Resources Information Center

    Rabiner, David L.; Anastopoulos, Arthur D.; Costello, Jane; Hoyle, Rick H.; Swartzwelder, H. Scott

    2008-01-01

    Objective: To examine college adjustment in students reporting an ADHD diagnosis and the effect of medication treatment on students' adjustment. Method: 1,648 first-semester freshmen attending a public and a private university completed a Web-based survey to examine their adjustment to college. Results: Compared with 200 randomly selected control…

  7. Exploring the Innovative Personality Characteristics among Teachers

    ERIC Educational Resources Information Center

    Othman, Nooraini

    2016-01-01

    The aim of this study is to explore the characteristics of innovative personality among teachers in Malaysia. Samples of the research were randomly selected among secondary school teachers in three districts in Malaysia. Research instrument was self-developed by the researchers based on interviews carried out with some resource persons who are…

  8. Conducting a wildland visual resources inventory

    Treesearch

    James F. Palmer

    1979-01-01

    This paper describes a procedure for systematically inventorying the visual resources of wildland environments. Visual attributes are recorded photographically using two separate sampling methods: one based on professional judgment and the other on random selection. The location and description of each inventoried scene are recorded on U.S. Geological Survey...

  9. Public Opinion toward Sexuality Education: Findings among One South Florida County

    ERIC Educational Resources Information Center

    Howard-Barr, Elissa; Moore, Michele Johnson; Weiss, Josie A.; Jobli, Edessa

    2011-01-01

    As part of a community plan to implement abstinence-based sexuality education, this study assessed opinions toward sexuality education among residents. Respondents (N = 1,090) were selected by random digit dialing. The survey, adopted from previous national studies, assessed attitudes towards sexuality education. Chi-square tests of significance…

  10. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    PubMed

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
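
    The numerical averaging described above can be sketched as follows: subject-specific probabilities from a random-intercepts logistic model are averaged over Monte Carlo draws of the random intercept to give a group-level curve. The coefficients below are invented for illustration, not estimates from the study's offending data.

```python
# Sketch: numerical averaging of a random-intercepts logistic model to obtain a
# group-level (marginal-like) prediction curve. Coefficients are hypothetical.
import numpy as np
from scipy.special import expit  # inverse logit

beta0, beta1 = -1.2, 0.08        # hypothetical fixed effects (intercept, age slope)
sigma_b = 1.5                    # SD of the random intercepts

ages = np.arange(10, 33)
b = np.random.default_rng(0).normal(0.0, sigma_b, size=20000)  # Monte Carlo draws

# Subject-specific curve for a "typical" subject (random intercept b = 0)...
typical = expit(beta0 + beta1 * ages)
# ...versus the group-level curve obtained by averaging over the draws of b.
marginal = np.array([expit(beta0 + beta1 * a + b).mean() for a in ages])

for a, t, m in zip(ages[::6], typical[::6], marginal[::6]):
    print(f"age {a:2d}: subject-specific {t:.3f}, averaged {m:.3f}")
```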

  11. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
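
    A linear congruential generator of the general form such programs implement can be sketched in a few lines; the multiplier, increment and modulus below are the common "minimal standard" choices and are not necessarily the parameters selected in the report.

```python
# Sketch of a linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
# The parameters here are the widely used "minimal standard" values, chosen for
# illustration, not necessarily those discussed in the document above.
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Yield an endless stream of pseudo-random numbers in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=12345)
print([round(next(gen), 6) for _ in range(5)])
```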

  12. Greater sage-grouse winter habitat use on the eastern edge of their range

    USGS Publications Warehouse

    Swanson, Christopher C.; Rumble, Mark A.; Grovenburg, Troy W.; Kaczor, Nicholas W.; Klaver, Robert W.; Herman-Brunson, Katie M.; Jenks, Jonathan A.; Jensen, Kent C.

    2013-01-01

    Greater sage-grouse (Centrocercus urophasianus) at the western edge of the Dakotas occur in the transition zone between sagebrush and grassland communities. These mixed sagebrush (Artemisia sp.) and grasslands differ from those habitats that comprise the central portions of the sage-grouse range; yet, no information is available on winter habitat selection within this region of their distribution. We evaluated factors influencing greater sage-grouse winter habitat use in North Dakota during 2005–2006 and 2006–2007 and in South Dakota during 2006–2007 and 2007–2008. We captured and radio-marked 97 breeding-age females and 54 breeding-age males from 2005 to 2007 and quantified habitat selection for 98 of these birds that were alive during winter. We collected habitat measurements at 340 (177 ND, 163 SD) sage-grouse use sites and 680 dependent random sites (340 each at 250 m and 500 m from use locations). Use sites differed from random sites, with greater percent sagebrush cover (14.75% use vs. 7.29% random) and greater sagebrush density (vs. 0.94 plants/m2 random; P ≤ 0.001), but lesser percent grass cover (11.76% use vs. 16.01% random; P ≤ 0.001) and litter cover (4.34% use vs. 5.55% random; P = 0.001) and lower sagebrush height (20.02 cm use vs. 21.35 cm random; P = 0.13) and grass height (21.47 cm use vs. 23.21 cm random; P = 0.15). We used conditional logistic regression to estimate winter habitat selection by sage-grouse on continuous scales. The model sagebrush cover + sagebrush height + sagebrush cover × sagebrush height (wi = 0.60) was the most supported of the 13 models we considered, indicating that percent sagebrush cover strongly influenced selection. Odds ratios indicated that the odds of selection by sage-grouse increased by a factor of 1.867 for every 1% increase in sagebrush cover (95% CI = 1.627–2.141) and by a factor of 1.041 for every 1 cm increase in sagebrush height (95% CI = 1.002–1.082). The interaction between percent sagebrush canopy cover and sagebrush height (β = −0.01, SE ≤ 0.01; odds ratio = 0.987 [95% CI = 0.983–0.992]) also was significant. Management could focus on avoiding additional loss of sagebrush habitat, identifying areas of critical winter habitat, and implementing management actions based on causal mechanisms (e.g., soil moisture, precipitation) that affect sagebrush community structure in this region.
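
    As a quick check on how the reported odds ratios relate to the underlying conditional logistic coefficients, the sketch below back-computes the coefficients from OR = exp(β) and scales one of them to a larger change in cover. It is illustrative arithmetic only and ignores the interaction term reported above.

      import math

      or_cover = 1.867    # odds ratio per 1% increase in sagebrush cover
      or_height = 1.041   # odds ratio per 1 cm increase in sagebrush height

      beta_cover = math.log(or_cover)    # implied coefficient, ~0.62
      beta_height = math.log(or_height)  # implied coefficient, ~0.04

      # Odds multiplier for a hypothetical 5% increase in sagebrush cover:
      print(round(math.exp(5 * beta_cover), 1))  # ~22.7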

  13. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibited a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggested a high level of support for the incorrectly ranked best model. These problems worsened with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.

  14. Effects of prey abundance, distribution, visual contrast and morphology on selection by a pelagic piscivore

    USGS Publications Warehouse

    Hansen, Adam G.; Beauchamp, David A.

    2014-01-01

    Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounted for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated for by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.

  15. Group Counseling With Emotionally Disturbed School Children in Taiwan.

    ERIC Educational Resources Information Center

    Chiu, Peter

    The application of group counseling to emotionally disturbed school children in Chinese culture was examined. Two junior high schools located in Tao-Yuan Province were randomly selected with two eighth-grade classes randomly selected from each school. Ten emotionally disturbed students were chosen from each class and randomly assigned to two…

  16. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  17. On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial

    ERIC Educational Resources Information Center

    Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean

    2017-01-01

    Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…

  18. Family-based training program improves brain function, cognition, and behavior in lower socioeconomic status preschoolers

    PubMed Central

    Neville, Helen J.; Stevens, Courtney; Pakulak, Eric; Bell, Theodore A.; Fanning, Jessica; Klein, Scott; Isbell, Elif

    2013-01-01

    Using information from research on the neuroplasticity of selective attention and on the central role of successful parenting in child development, we developed and rigorously assessed a family-based training program designed to improve brain systems for selective attention in preschool children. One hundred forty-one lower socioeconomic status preschoolers enrolled in a Head Start program were randomly assigned to the training program, Head Start alone, or an active control group. Electrophysiological measures of children’s brain functions supporting selective attention, standardized measures of cognition, and parent-reported child behaviors all favored children in the treatment program relative to both control groups. Positive changes were also observed in the parents themselves. Effect sizes ranged from one-quarter to half of a standard deviation. These results lend impetus to the further development and broader implementation of evidence-based education programs that target at-risk families. PMID:23818591

  19. Effectiveness of a new health care organization model in primary care for chronic cardiovascular disease patients based on a multifactorial intervention: the PROPRESE randomized controlled trial

    PubMed Central

    2013-01-01

    Background To evaluate the effectiveness of a new multifactorial intervention to improve health care for chronic ischemic heart disease patients in primary care. The strategy has two components: a) organizational for the patient/professional relationship and b) training for professionals. Methods/design Experimental study. Randomized clinical trial. Follow-up period: one year. Study setting: primary care, multicenter (15 health centers). For the intervention group, 15 health centers are selected from those participating in the ESCARVAL study. Once a center agrees to participate, patients are randomly selected from all patients with ischemic heart disease registered in its electronic health records. For the control group, a random sample of patients with ischemic heart disease is selected from the electronic records of all 72 health centers. Intervention components: a) Organizational intervention on the patient/professional relationship. Centered on the Chronic Care Model, the Stanford Expert Patient Program and the Kaiser Permanente model: Teamwork, informed and active patient, decision making shared with the patient, recommendations based on clinical guidelines, single electronic medical history per patient that allows the use of indicators for risk monitoring and stratification. b) Formative strategy for professionals: 4 face-to-face training workshops (one every 3 months), monthly clinical update sessions, online tutorial by a cardiologist, availability through the intranet of the action protocol and related documents. Measurements: Blood pressure, blood glucose, HbA1c, lipid profile and smoking. Frequent health care visits. Number of hospitalizations related to vascular disease. Therapeutic compliance. Drug use. Discussion This study aims to evaluate the efficacy of a multifactorial intervention strategy involving patients with ischemic heart disease for the improvement of the degree of control of the cardiovascular risk factors and of the quality of life, number of visits, and number of hospitalizations. Trial registration NCT01826929 PMID:23915267

  20. Do Evidence-Based Youth Psychotherapies Outperform Usual Clinical Care? A Multilevel Meta-Analysis

    PubMed Central

    Weisz, John R.; Kuppens, Sofie; Eckshtain, Dikla; Ugueto, Ana M.; Hawley, Kristin M.; Jensen-Doss, Amanda

    2013-01-01

    Context Research across four decades has produced numerous empirically-tested evidence-based psychotherapies (EBPs) for youth psychopathology, developed to improve upon usual clinical interventions. Advocates argue that these should replace usual care; but do the EBPs produce better outcomes than usual care? Objective This question was addressed in a meta-analysis of 52 randomized trials directly comparing EBPs to usual care. Analyses assessed the overall effect of EBPs vs. usual care, and candidate moderators; multilevel analysis was used to address the dependency among effect sizes that is common but typically unaddressed in psychotherapy syntheses. Data Sources The PubMed, PsycINFO, and Dissertation Abstracts International databases were searched for studies from January 1, 1960 – December 31, 2010. Study Selection 507 randomized youth psychotherapy trials were identified. Of these, the 52 studies that compared EBPs to usual care were included in the meta-analysis. Data Extraction Sixteen variables (participant, treatment, and study characteristics) were extracted from each study, and effect sizes were calculated for all EBP versus usual care comparisons. Data Synthesis EBPs outperformed usual care. Mean effect size was 0.29; the probability was 58% that a randomly selected youth receiving an EBP would be better off after treatment than a randomly selected youth receiving usual care. Three variables moderated treatment benefit: Effect sizes decreased for studies conducted outside North America, for studies in which all participants were impaired enough to qualify for diagnoses, and for outcomes reported by people other than the youths and parents in therapy. For certain key groups (e.g., studies using clinically referred samples and diagnosed samples), significant EBP effects were not demonstrated. Conclusions EBPs outperformed usual care, but the EBP advantage was modest and moderated by youth, location, and assessment characteristics. There is room for improvement in EBPs, both in the magnitude and range of their benefit, relative to usual care. PMID:23754332
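
    The reported 58% is consistent with the standard conversion of a standardized mean difference into a probability of superiority, P = Φ(d/√2), assuming approximately normal outcomes in both arms; a minimal numerical check:

      from math import sqrt
      from statistics import NormalDist

      d = 0.29  # mean effect size, EBP vs. usual care
      p_superiority = NormalDist().cdf(d / sqrt(2))
      print(round(p_superiority, 3))  # ~0.581, i.e. the reported ~58%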

  1. A model-based 'varimax' sampling strategy for a heterogeneous population.

    PubMed

    Akram, Nuzhat A; Farooqi, Shakeel R

    2014-01-01

    Sampling strategies are planned to enhance the homogeneity of a sample, hence to minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them, five population groups based on paternal ethnicity were identified, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi. An index was designed to measure the proportion of admixture at parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups for blood groups and STR loci distribution. Gene diversity was higher across the sub-population model than in the agglomerated population. At the parental level, gene diversities were significantly higher under the no-admixture models than under the admixture models; at the grandparental level the difference was not significant. A sub-population model with no admixture at the parental level was justified for sampling the heterogeneous population of Karachi.

  2. Selection response and genetic parameters for residual feed intake in Yorkshire swine.

    PubMed

    Cai, W; Casey, D S; Dekkers, J C M

    2008-02-01

    Residual feed intake (RFI) is a measure of feed efficiency defined as the difference between the observed feed intake and that predicted from the average requirements for growth and maintenance. The objective of this study was to evaluate the response in a selection experiment consisting of a line selected for low RFI and a random control line and to estimate the genetic parameters for RFI and related production and carcass traits. Beginning with random allocation of purebred Yorkshire littermates, in each generation, electronically measured ADFI, ADG, and ultrasound backfat (BF) were evaluated during an approximately 40- to approximately 115-kg BW test period on approximately 90 boars from first parity and approximately 90 gilts from second parity sows of the low RFI line. After evaluation of first parity boars, approximately 12 boars and approximately 70 gilts from the low RFI line were selected to produce approximately 50 litters for the next generation. Approximately 30 control line litters were produced by random selection and mating. Selection was on EBV for RFI from an animal model analysis of ADFI, with on-test group and sex (fixed), pen within group and litter (random), and covariates for interactions of on- and off-test BW, on-test age, ADG, and BF with generations. The RFI explained 34% of phenotypic variation in ADFI. After 4 generations of selection, estimates of heritability for RFI, ADFI, ADG, feed efficiency (FE, which is the reciprocal of the feed conversion ratio and equals ADG/ADFI), and ultrasound-predicted BF, LM area (LMA), and intramuscular fat (IMF) were 0.29, 0.51, 0.42, 0.17, 0.68, 0.57, and 0.28, respectively; predicted responses based on average EBV in the low RFI line were -114, -202, and -39 g/d for RFI (= 0.9 phenotypic SD), ADFI (0.9 SD), and ADG (0.4 SD), respectively, and 1.56% for FE (0.5 SD), -0.37 mm for BF (0.1 SD), 0.35 cm(2) for LMA (0.1 SD), and -0.10% for IMF (0.3 SD). Direct phenotypic comparison of the low RFI and control lines based on 92 low RFI and 76 control gilts from the second parity of generation 4 showed that selection had significantly decreased RFI by 96 g/d (P = 0.002) and ADFI by 165 g/d (P < 0.0001). The low RFI line also had 33 g/d lower ADG (P = 0.022), 1.36% greater FE (P = 0.09), and 1.99 mm less BF (P = 0.013). There was not a significant difference in LMA and other carcass traits, including subjective marbling score, despite a large observed difference in ultrasound-predicted IMF (-1.05% with P < 0.0001). In conclusion, RFI is a heritable trait, and selection for low RFI has significantly decreased the feed required for a given rate of growth and backfat.
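
    Residual feed intake in the sense used above is the residual from regressing observed intake on performance traits. The sketch below illustrates that idea on synthetic numbers with a plain least-squares fit; the study's actual model also includes group, pen, litter, and body-weight covariates, and all coefficients and units here are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      adg = rng.normal(900, 80, n)   # average daily gain, g/d (synthetic)
      bf = rng.normal(14, 2, n)      # ultrasound backfat, mm (synthetic)
      adfi = 1.9 * adg + 35 * bf + rng.normal(0, 120, n)  # observed intake, g/d

      # Predict intake from ADG and BF; RFI is the residual (observed - predicted).
      X = np.column_stack([np.ones(n), adg, bf])
      coef, *_ = np.linalg.lstsq(X, adfi, rcond=None)
      rfi = adfi - X @ coef
      print(round(rfi.mean(), 3), round(rfi.std(), 1))  # mean ~0 by construction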

  3. Mate choice theory and the mode of selection in sexual populations.

    PubMed

    Carson, Hampton L

    2003-05-27

    Indirect new data imply that mate and/or gamete choice are major selective forces driving genetic change in sexual populations. The system dictates nonrandom mating, an evolutionary process requiring both revised genetic theory and new data on heritability of characters underlying Darwinian fitness. Successfully reproducing individuals represent rare selections from among vigorous, competing survivors of preadult natural selection. Nonrandom mating has correlated demographic effects: reduced effective population size, inbreeding, low gene flow, and emphasis on deme structure. Characters involved in choice behavior at reproduction appear based on quantitative trait loci. This variability serves selection for fitness within the population, having only an incidental relationship to the origin of genetically based reproductive isolation between populations. The claim that extensive hybridization experiments with Drosophila indicate that selection favors a gradual progression of "isolating mechanisms" is flawed, because intra-group random mating is assumed. Over deep time, local sexual populations are strong, independent genetic systems that use rich fields of variable polygenic components of fitness. The sexual reproduction system thus particularizes, in small subspecific populations, the genetic basis of the grand adaptive sweep of selective evolutionary change, much as Darwin proposed.

  4. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. Exhaustively searching for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs. However, this is too time-consuming and not practical in GWA for high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, while avoiding the very high computational cost of an exhaustive search for an optimal mtry and maintaining the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprised of 408 803 SNPs and Alzheimer case-control data comprised of 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and it can generate a better random forest with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders, for further biological investigation.
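
    A minimal sketch of the subspace-selection step described above: SNPs are binned into equal-width groups by an informativeness score, and the same number is drawn from each bin for every tree. The group count, per-group draw size, and the score itself are placeholders, not the authors' settings.

      import numpy as np

      def stratified_subspace(scores, n_groups=5, per_group=10, rng=None):
          """Sample a feature subspace with equal representation from each
          equal-width informativeness bin."""
          rng = rng or np.random.default_rng()
          edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
          bins = np.digitize(scores, edges[1:-1])  # bin index 0..n_groups-1 per SNP
          subspace = []
          for g in range(n_groups):
              members = np.flatnonzero(bins == g)
              if members.size:
                  k = min(per_group, members.size)
                  subspace.extend(rng.choice(members, size=k, replace=False))
          return np.array(subspace)

      scores = np.random.default_rng(0).random(1000)  # stand-in informativeness scores
      print(stratified_subspace(scores).shape)        # one subspace per decision tree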

  5. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    PubMed

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
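
    For reference, the Robertson-Price identity the abstract builds on equates the selection differential with the covariance between relative fitness and phenotype; the three-way split shown here is only schematic (the notation and the grouping of terms are assumed, not copied from the paper).

      S \;=\; \operatorname{cov}(w, z), \qquad w_i = W_i / \bar{W},
      \qquad
      S \;=\; \underbrace{\operatorname{cov}\!\big(\mathbb{E}[w], z\big)}_{\text{deterministic mean}}
        \;+\; \underbrace{S_{\mathrm{dem}}}_{\text{demographic}}
        \;+\; \underbrace{S_{\mathrm{env}}}_{\text{environmental}}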

  6. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: Random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
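
    The two resampling schemes named above can be sketched in a few lines (indices only, not the IT-CAD system itself): random division partitions the development set into disjoint member case bases, while random selection draws each member's case base independently, so subsets may overlap. The sizes below are arbitrary examples.

      import numpy as np

      def random_division(n_cases, n_members, rng):
          """Disjoint partition of case indices, one subset per ensemble member."""
          return np.array_split(rng.permutation(n_cases), n_members)

      def random_selection(n_cases, n_members, subset_size, rng):
          """Each member's case base drawn independently; subsets may overlap."""
          return [rng.choice(n_cases, size=subset_size, replace=False)
                  for _ in range(n_members)]

      rng = np.random.default_rng(42)
      print([len(s) for s in random_division(600, 5, rng)])
      print([len(s) for s in random_selection(600, 5, 120, rng)])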

  7. Differential evolution enhanced with multiobjective sorting-based mutation operators.

    PubMed

    Wang, Jiahai; Liao, Jianjun; Zhou, Ying; Cai, Yiqiao

    2014-12-01

    Differential evolution (DE) is a simple and powerful population-based evolutionary algorithm. The salient feature of DE lies in its mutation mechanism. Generally, the parents in the mutation operator of DE are randomly selected from the population. Hence, all vectors are equally likely to be selected as parents, with no selective pressure at all. Additionally, diversity information is always ignored. In order to fully exploit the fitness and diversity information of the population, this paper presents a DE framework with a multiobjective sorting-based mutation operator. In the proposed mutation operator, individuals in the current population are first sorted according to their fitness and diversity contribution by nondominated sorting. Parents in the mutation operator are then selected proportionally to their rankings based on fitness and diversity; thus, promising individuals with better fitness and diversity have more opportunity to be selected as parents. Since fitness and diversity information is simultaneously considered for parent selection, a good balance between exploration and exploitation can be achieved. The proposed operator is applied to original DE algorithms, as well as several advanced DE variants. Experimental results on 48 benchmark functions and 12 real-world application problems show that the proposed operator is an effective approach to enhance the performance of most DE algorithms studied.
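
    A rough sketch of the idea in code: rank-biased (rather than uniform) parent selection plugged into a DE/rand/1 mutation. The single combined ranking and the linear weighting rule below are simplifications of the paper's nondominated fitness/diversity sorting, and all values are illustrative.

      import numpy as np

      def rank_biased_parents(ranks, n_parents, rng):
          """Pick parent indices with probability decreasing in rank (0 = best)."""
          weights = (ranks.max() - ranks + 1).astype(float)
          return rng.choice(len(ranks), size=n_parents, replace=False,
                            p=weights / weights.sum())

      def de_rand_1(pop, ranks, F=0.5, rng=None):
          """DE/rand/1 mutant vectors using rank-biased instead of uniform parents."""
          rng = rng or np.random.default_rng()
          mutants = np.empty_like(pop)
          for i in range(len(pop)):
              r1, r2, r3 = rank_biased_parents(ranks, 3, rng)
              mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
          return mutants

      pop = np.random.default_rng(0).random((20, 5))
      ranks = np.arange(20)  # stand-in combined fitness/diversity ranking
      print(de_rand_1(pop, ranks).shape)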

  8. Using Place-Based Random Assignment and Comparative Interrupted Time-Series Analysis To Evaluate the Jobs-Plus Employment Program for Public Housing Residents.

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Rico, James A.

    This paper describes a place-based research demonstration program to promote and sustain employment among residents of selected public housing developments in U.S. cities. Because all eligible residents of the participating public housing developments were free to take part in the program, it was not possible to study its impacts in a classical…

  9. Effectiveness of Family, Child, and Family-Child Based Intervention on ADHD Symptoms of Students with Disabilities

    ERIC Educational Resources Information Center

    Malekpour, Mokhtar; Aghababaei, Sara; Hadi, Samira

    2014-01-01

    The aim of the present study was to investigate and compare the effectiveness of family, child, and family-child based intervention on the rate of ADHD symptoms in third grade students. The population for this study was all of students with ADHD diagnoses in the city of Isfahan, Iran. The multistage random sampling method was used to select the 60…

  10. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  11. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
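
    A generic inverse-probability-weighted estimate of a treatment effect, as a minimal sketch of the family of estimators described above: this is not the authors' two-stage procedure, the "propensity" used here is simply the known 1:1 randomization probability, and all data are synthetic.

      import numpy as np

      def ipw_mean_difference(y, treat, propensity):
          """Inverse-probability-weighted difference in arm means."""
          w1 = treat / propensity
          w0 = (1 - treat) / (1 - propensity)
          return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

      rng = np.random.default_rng(3)
      n = 500
      x = rng.normal(size=n)              # baseline covariate
      treat = rng.integers(0, 2, size=n)  # 1:1 randomization
      y = 1.0 * treat + 0.8 * x + rng.normal(size=n)
      print(round(ipw_mean_difference(y, treat, np.full(n, 0.5)), 3))  # ~1.0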

  12. A simple rule for the evolution of cooperation on graphs and social networks.

    PubMed

    Ohtsuki, Hisashi; Hauert, Christoph; Lieberman, Erez; Nowak, Martin A

    2006-05-25

    A fundamental aspect of all biological systems is cooperation. Cooperative interactions are required for many levels of biological organization ranging from single cells to groups of animals. Human society is based to a large extent on mechanisms that promote cooperation. It is well known that in unstructured populations, natural selection favours defectors over cooperators. There is much current interest, however, in studying evolutionary games in structured populations and on graphs. These efforts recognize the fact that who-meets-whom is not random, but determined by spatial relationships or social networks. Here we describe a surprisingly simple rule that is a good approximation for all graphs that we have analysed, including cycles, spatial lattices, random regular graphs, random graphs and scale-free networks: natural selection favours cooperation, if the benefit of the altruistic act, b, divided by the cost, c, exceeds the average number of neighbours, k, which means b/c > k. In this case, cooperation can evolve as a consequence of 'social viscosity' even in the absence of reputation effects or strategic complexity.

  13. 3D statistical shape models incorporating 3D random forest regression voting for robust CT liver segmentation

    NASA Astrophysics Data System (ADS)

    Norajitra, Tobias; Meinzer, Hans-Peter; Maier-Hein, Klaus H.

    2015-03-01

    During image segmentation, 3D Statistical Shape Models (SSM) usually conduct a limited search for target landmarks within one-dimensional search profiles perpendicular to the model surface. In addition, landmark appearance is modeled only locally based on linear profiles and weak learners, altogether leading to segmentation errors from landmark ambiguities and limited search coverage. We present a new method for 3D SSM segmentation based on 3D Random Forest Regression Voting. For each surface landmark, a Random Regression Forest is trained that learns a 3D spatial displacement function between the corresponding reference landmark and a set of surrounding sample points, based on an infinite set of non-local randomized 3D Haar-like features. Landmark search is then conducted omni-directionally within 3D search spaces, where voxelwise forest predictions on landmark position contribute to a common voting map which reflects the overall position estimate. Segmentation experiments were conducted on a set of 45 CT volumes of the human liver, of which 40 images were randomly chosen for training and 5 for testing. Without parameter optimization, using a simple candidate selection and a single resolution approach, excellent results were achieved, while faster convergence and better concavity segmentation were observed, altogether underlining the potential of our approach in terms of increased robustness from distinct landmark detection and from better search coverage.

  14. Application of Adaptive Design Methodology in Development of a Long-Acting Glucagon-Like Peptide-1 Analog (Dulaglutide): Statistical Design and Simulations

    PubMed Central

    Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda

    2012-01-01

    Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775

  15. The effects of probiotic and synbiotic supplementation on metabolic syndrome indices in adults at risk of type 2 diabetes: study protocol for a randomized controlled trial.

    PubMed

    Kassaian, Nazila; Aminorroaya, Ashraf; Feizi, Awat; Jafari, Parvaneh; Amini, Masoud

    2017-03-29

    The incidence of type 2 diabetes, cardiovascular diseases, and obesity has been rising dramatically; however, their pathogenesis is particularly intriguing. Recently, dysbiosis of the intestinal microbiota has emerged as a new candidate that may be linked to metabolic diseases. We hypothesize that selective modulation of the intestinal microbiota by probiotic or synbiotic supplementation may improve metabolic dysfunction and prevent diabetes in prediabetics. In this study, a synthesis and study of synbiotics will be carried out for the first time in Iran. In a randomized triple-blind controlled clinical trial, 120 adults with impaired glucose tolerance based on the inclusion criteria will be selected by a simple random sampling method and will be randomly allocated to 6 months of 6 g/d probiotic, synbiotic or placebo. The fecal abundance of bacteria, blood pressure, height, weight, and waist and hip circumferences will be measured at baseline and following treatment. Also, plasma lipid profiles, HbA1C, fasting plasma glucose, and insulin levels, will be measured and insulin resistance (HOMA-IR) and beta-cell function (HOMA-B) will be calculated at baseline and will be repeated at months 3, 6, 12, and 18. The data will be compared within and between groups using statistical methods. The results of this trial could contribute to the evidence-based clinical guidelines that address gut microbiota manipulation to maximize health benefits in prevention and management of metabolic syndrome in prediabetes. Iranian Registry of Clinical Trials: IRCT201511032321N2 . Registered on 27 February 2016.
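
    The protocol above mentions calculating HOMA-IR and HOMA-B from fasting glucose and insulin but does not state the formulas it will use; the standard homeostasis-model-assessment equations are a reasonable guess and are sketched below (glucose in mmol/L, insulin in µU/mL).

      def homa_ir(glucose_mmol_l, insulin_uU_ml):
          """HOMA-IR = fasting glucose [mmol/L] * fasting insulin [uU/mL] / 22.5"""
          return glucose_mmol_l * insulin_uU_ml / 22.5

      def homa_b(glucose_mmol_l, insulin_uU_ml):
          """HOMA-B (%) = 20 * fasting insulin / (fasting glucose - 3.5)"""
          return 20.0 * insulin_uU_ml / (glucose_mmol_l - 3.5)

      print(round(homa_ir(5.8, 12.0), 2), round(homa_b(5.8, 12.0), 1))  # 3.09 104.3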

  16. Understanding Randomness and its Impact on Student Learning: Lessons Learned from Building the Biology Concept Inventory (BCI)

    PubMed Central

    Garvin-Doxas, Kathy

    2008-01-01

    While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500 open-ended (primarily) college student responses, submitted online and analyzed through our Ed's Tools system, together with 28 thematic and think-aloud interviews with students, and the responses of students in introductory and advanced courses to questions on the BCI. Students believe that random processes are inefficient, whereas biological systems are very efficient. They are therefore quick to propose their own rational explanations for various processes, from diffusion to evolution. These rational explanations almost always make recourse to a driver, e.g., natural selection in evolution or concentration gradients in molecular biology, with the process taking place only when the driver is present, and ceasing when the driver is absent. For example, most students believe that diffusion only takes place when there is a concentration gradient, and that the mutational processes that change organisms occur only in response to natural selection pressures. An understanding that random processes take place all the time and can give rise to complex and often counterintuitive behaviors is almost totally absent. Even students who have had advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes, and passing through multiple conventional biology courses appears to have little effect on their underlying beliefs. PMID:18519614

  17. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.

  18. Influence of Problem Based Learning on Critical Thinking Skills and Competence Class VIII SMPN 1 Gunuang Omeh, 2016/2017

    NASA Astrophysics Data System (ADS)

    Aswan, D. M.; Lufri, L.; Sumarmin, R.

    2018-04-01

    This research aims to determine the effect of the Problem Based Learning model on students' critical thinking skills and competences. The study was quasi-experimental. The population was the grade VIII students of SMPN 1 Subdistrict Gunuang Omeh. Samples were selected by randomizing classes: class VIII3 was chosen as the experimental class and received problem-based instruction, while class VIII1 served as the control class and received conventional instruction. The instruments consisted of a critical thinking test, cognitive tests, and observation sheets for the affective and psychomotor domains. Independent t-tests and the Mann-Whitney U test were used for the analysis. Results showed a significant difference (sig < 0.05) between the control and experimental groups. The study concludes that the Problem Based Learning model affected students' critical thinking skills and competences.

  19. Identifying highly connected counties compensates for resource limitations when evaluating national spread of an invasive pathogen.

    PubMed

    Sutrave, Sweta; Scoglio, Caterina; Isard, Scott A; Hutchinson, J M Shawn; Garrett, Karen A

    2012-01-01

    Surveying invasive species can be highly resource intensive, yet near-real-time evaluations of invasion progress are important resources for management planning. In the case of the soybean rust invasion of the United States, a linked monitoring, prediction, and communication network saved U.S. soybean growers approximately $200 M/yr. Modeling of future movement of the pathogen (Phakopsora pachyrhizi) was based on data about current disease locations from an extensive network of sentinel plots. We developed a dynamic network model for U.S. soybean rust epidemics, with counties as nodes and link weights a function of host hectarage and wind speed and direction. We used the network model to compare four strategies for selecting an optimal subset of sentinel plots, listed here in order of increasing performance: random selection, zonal selection (based on more heavily weighting regions nearer the south, where the pathogen overwinters), frequency-based selection (based on how frequently the county had been infected in the past), and frequency-based selection weighted by the node strength of the sentinel plot in the network model. When dynamic network properties such as node strength are characterized for invasive species, this information can be used to reduce the resources necessary to survey and predict invasion progress.
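
    The best-performing strategy above combines historical infection frequency with each county's node strength (the sum of its incident link weights in the dynamic network). The sketch below conveys that flavor only; the convex-combination scoring rule, the alpha weight, and all data are assumptions for illustration, not the authors' method.

      import numpy as np

      def select_sentinels(weights, infection_freq, k, alpha=0.5):
          """Rank counties by a blend of past infection frequency and node
          strength (row sums of the weighted adjacency matrix); return top-k."""
          strength = weights.sum(axis=1)
          score = (alpha * infection_freq / infection_freq.max()
                   + (1 - alpha) * strength / strength.max())
          return np.argsort(score)[::-1][:k]

      rng = np.random.default_rng(7)
      W = rng.random((50, 50))              # toy county-to-county link weights
      np.fill_diagonal(W, 0)
      freq = rng.integers(0, 10, 50)        # toy historical infection counts
      print(select_sentinels(W, freq, k=10))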

  20. The Coalescent Process in Models with Selection

    PubMed Central

    Kaplan, N. L.; Darden, T.; Hudson, R. R.

    1988-01-01

    Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685

  1. A Secure and Robust Object-Based Video Authentication System

    NASA Astrophysics Data System (ADS)

    He, Dajun; Sun, Qibin; Tian, Qi

    2004-12-01

    An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into public key infrastructure (PKI).

  2. Red-shouldered hawk nesting habitat preference in south Texas

    USGS Publications Warehouse

    Strobel, Bradley N.; Boal, Clint W.

    2010-01-01

    We examined nesting habitat preference by red-shouldered hawks Buteo lineatus using conditional logistic regression on characteristics measured at 27 occupied nest sites and 68 unused sites in 2005–2009 in south Texas. We measured vegetation characteristics of individual trees (nest trees and unused trees) and corresponding 0.04-ha plots. We evaluated the importance of tree and plot characteristics to nesting habitat selection by comparing a priori tree-specific and plot-specific models using Akaike's information criterion. Models with only plot variables carried 14% more weight than models with only center tree variables. The model-averaged odds ratios indicated red-shouldered hawks selected to nest in taller trees and in areas with higher average diameter at breast height than randomly available within the forest stand. Relative to randomly selected areas, each 1-m increase in nest tree height and 1-cm increase in the plot average diameter at breast height increased the probability of selection by 85% and 10%, respectively. Our results indicate that red-shouldered hawks select nesting habitat based on vegetation characteristics of individual trees as well as the 0.04-ha area surrounding the tree. Our results indicate forest management practices resulting in tall forest stands with large average diameter at breast height would benefit red-shouldered hawks in south Texas.

  3. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables available for selection at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.

  4. Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks

    PubMed Central

    Ranganathan, Radha; Kannan, Kathiravan

    2015-01-01

    IEEE 802.11 is the de facto standard for medium access over wireless ad hoc network. The collision avoidance mechanism (i.e., random binary exponential backoff—BEB) of IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval between [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes. But this random backoff interval can change the optimal order and frequency of channel access among competing nodes which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules the medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with additional level of contention resolution that prioritizes the contending nodes according to its queue length and waiting time. Each node computes its unique backoff interval using fuzzy logic based on the input parameters collected from contending nodes through overhearing. We evaluate our algorithm against IEEE 802.11, GDCF (gentle distributed coordination function) protocols using ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066

  5. Random mutagenesis of the hyperthermophilic archaeon Pyrococcus furiosus using in vitro mariner transposition and natural transformation.

    PubMed

    Guschinskaya, Natalia; Brunel, Romain; Tourte, Maxime; Lipscomb, Gina L; Adams, Michael W W; Oger, Philippe; Charpentier, Xavier

    2016-11-08

    Transposition mutagenesis is a powerful tool to identify the function of genes, reveal essential genes and generally to unravel the genetic basis of living organisms. However, transposon-mediated mutagenesis has only been successfully applied to a limited number of archaeal species and has never been reported in Thermococcales. Here, we report random insertion mutagenesis in the hyperthermophilic archaeon Pyrococcus furiosus. The strategy takes advantage of the natural transformability of derivatives of the P. furiosus COM1 strain and of in vitro Mariner-based transposition. A transposon bearing a genetic marker is randomly transposed in vitro in genomic DNA that is then used for natural transformation of P. furiosus. A small-scale transposition reaction routinely generates several hundred and up to two thousand transformants. Southern analysis and sequencing showed that the obtained mutants contain a single, random genomic insertion. Polyploidy has been reported in Thermococcales and P. furiosus is suspected of being polyploid. Yet, about half of the mutants obtained on the first selection are homozygous for the transposon insertion. Two rounds of isolation on selective medium were sufficient to obtain gene conversion in initially heterozygous mutants. This transposition mutagenesis strategy will greatly facilitate functional exploration of the Thermococcales genomes.

  6. Effects of Selected Meditative Asanas on Kinaesthetic Perception and Speed of Movement

    ERIC Educational Resources Information Center

    Singh, Kanwaljeet; Bal, Baljinder S.; Deol, Nishan S.

    2009-01-01

    Study aim: To assess the effects of selected meditative "asanas" on kinesthetic perception and movement speed. Material and methods: Thirty randomly selected male students aged 18-24 years volunteered to participate in the study. They were randomly assigned into two groups: A (medidative) and B (control). The Nelson's movement speed and…

  7. Random one-of-N selector

    DOEpatents

    Kronberg, J.W.

    1993-04-20

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  8. Random one-of-N selector

    DOEpatents

    Kronberg, James W.

    1993-01-01

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
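
    The hardware idea in the two patent records above can be mimicked in software: a counter cycles through 0..N-1 very quickly, and an item is "selected" only if the counter is caught at zero when the (environment- and user-dependent) stop time arrives. The simulation below models that timing jitter with a pseudo-random tick count; it is an illustrative analogue, not the patented circuit.

      import random

      def one_of_n_selected(n, jitter=1000):
          """Counter cycles 0..n-1; selection occurs iff it stops at zero."""
          ticks = random.randrange(jitter * n)  # unpredictable activation window
          return ticks % n == 0

      n = 8
      trials = 100000
      hits = sum(one_of_n_selected(n) for _ in range(trials))
      print(hits / trials)  # ~1/8: one item of N selected on average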

  9. Shared Genetic Influences on Negative Emotionality and Major Depression/Conduct Disorder Comorbidity

    ERIC Educational Resources Information Center

    Tackett, Jennifer L.; Waldman, Irwin D.; Van Hulle, Carol A.; Lahey, Benjamin B.

    2011-01-01

    Objective: To investigate whether genetic contributions to major depressive disorder and conduct disorder comorbidity are shared with genetic influences on negative emotionality. Method: Primary caregivers of 2,022 same- and opposite-sex twin pairs 6 to 18 years of age comprised a population-based sample. Participants were randomly selected across…

  10. Does Intensity Matter? Preschoolers' Print Knowledge Development within a Classroom-Based Intervention

    ERIC Educational Resources Information Center

    McGinty, Anita S.; Breit-Smith, Allison; Fan, Xitao; Justice, Laura M.; Kaderavek, Joan N.

    2011-01-01

    The present study examined the extent to which two dimensions of intervention intensity, ("dose frequency" and "dose") of a 30-week print-referencing intervention related to the print knowledge development of 367 randomly selected children from 55 preschool classrooms. "Dose frequency" refers to the number of intervention sessions implemented per…

  11. An Excel[TM] Model of a Radioactive Series

    ERIC Educational Resources Information Center

    Andrews, D. G. H.

    2009-01-01

    A computer model of the decay of a radioactive series, written in Visual Basic in Excel[TM], is presented. The model is based on the random selection of cells in an array. The results compare well with the theoretical equations. The model is a useful tool in teaching this aspect of radioactivity. (Contains 4 figures.)
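
    A minimal Python sketch of the same idea, modeling a two-step decay series by randomly testing cells (atoms) of an array at each time step, is given below; the decay probabilities, array size, and number of steps are illustrative assumptions rather than values from the article.

```python
import random

# Illustrative per-step decay probabilities (assumed values, not from the article).
P_DECAY_A = 0.03   # parent -> daughter
P_DECAY_B = 0.01   # daughter -> stable

def simulate_series(n_atoms=5_000, n_steps=200):
    """Model a parent -> daughter -> stable decay series by random selection.

    Each cell of the array holds one atom's state: 'A' (parent), 'B' (daughter),
    or 'C' (stable). Every step, each atom is tested against its decay probability,
    mirroring the random selection of cells in the spreadsheet model.
    """
    atoms = ['A'] * n_atoms
    history = []
    for _ in range(n_steps):
        for i, state in enumerate(atoms):
            r = random.random()
            if state == 'A' and r < P_DECAY_A:
                atoms[i] = 'B'
            elif state == 'B' and r < P_DECAY_B:
                atoms[i] = 'C'
        history.append((atoms.count('A'), atoms.count('B'), atoms.count('C')))
    return history

counts = simulate_series()
print("final populations (A, B, C):", counts[-1])
```

    The recorded populations can then be compared against the theoretical decay equations, as the article does for its spreadsheet version.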

  12. PREVALENCE OF SKELETAL AND EYE MALFORMATIONS IN FROGS FROM THE NORTH-CENTRAL UNITED STATES: ESTIMATIONS BASED ON COLLECTIONS FROM RANDOMLY SELECTED SITES. (R825867)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  13. Positive Approaches Toward Student Discipline.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany.

    This report presents a number of discipline policy recommendations based on the results of a survey of students, teachers, and administrators in 60 randomly selected high schools in New York State. The bulk of the report is contained in the appendix and presents exemplary discipline programs in public and private secondary schools in New York.…

  14. Who Gets Care? Mental Health Service Use Following a School-Based Suicide Prevention Program

    ERIC Educational Resources Information Center

    Kataoka, Sheryl; Stein, Bradley D.; Nadeem, Erum; Wong, Marleen

    2007-01-01

    Objective: To examine symptomatology and mental health service use following students' contact with a large urban school district's suicide prevention program. Method: In 2001 school district staff conducted telephone interviews with 95 randomly selected parents approximately 5 months following their child's contact with the district's suicide…

  15. The Effects of Portfolio Assessment on Writing of EFL Students

    ERIC Educational Resources Information Center

    Nezakatgoo, Behzad

    2011-01-01

    The primary focus of this study was to determine the effect of portfolio assessment on final examination scores of EFL students' writing skill. To determine the impact of portfolio-based writing assessment, 40 university students who enrolled in a composition course were initially selected and divided randomly into two experimental and control…

  16. An Integrated Framework for the Analysis of Adolescent Cigarette Smoking in Middle School Latino Youth

    ERIC Educational Resources Information Center

    Guilamo-Ramos, Vincent; Dittus, Patricia; Holloway, Ian; Bouris, Alida; Crossett, Linda

    2011-01-01

    A framework based on five major theories of health behavior was used to identify the correlates of adolescent cigarette smoking. The framework emphasizes intentions to smoke cigarettes, factors that influence these intentions, and factors that moderate the intention-behavior relationship. Five hundred sixteen randomly selected Latino middle school…

  17. Predictors of Short-Term Treatment Outcomes among California's Proposition 36 Participants

    ERIC Educational Resources Information Center

    Hser, Yih-Ing; Evans, Elizabeth; Teruya, Cheryl; Huang, David; Anglin, M. Douglas

    2007-01-01

    California's voter-initiated Proposition 36 offers non-violent drug offenders community-based treatment as an alternative to incarceration or probation without treatment. This article reports short-term treatment outcomes subsequent to this major shift in drug policy. Data are from 1104 individuals randomly selected from all Proposition 36…

  18. Personal and Interpersonal Correlates of Bullying Behaviors among Korean Middle School Students

    ERIC Educational Resources Information Center

    Lee, Chang-Hun

    2010-01-01

    This study simultaneously investigates personal and interpersonal traits that were found to be important factors of bullying behavior using data collected from 1,238 randomly selected Korean middle school students. Using a modified and expanded definition of bullying based on a more culturally sensitive approach to bullying, this study categorizes…

  19. Design, Baseline Results of Irbid Longitudinal, School-Based Smoking Study

    ERIC Educational Resources Information Center

    Mzayek, Fawaz; Khader, Yousef; Eissenberg, Thomas; Ward, Kenneth D.; Maziak, Wasim

    2011-01-01

    Objective: To compare patterns of water pipe and cigarette smoking in an eastern Mediterranean country. Methods: In 2008, 1781 out of 1877 seventh graders enrolled in 19 randomly selected schools in Irbid, Jordan, were surveyed. Results: Experimentation with and current water pipe smoking were more prevalent than cigarette smoking (boys: 38.7% vs…

  20. Structural Relationships between Variables of Elementary School Students' Intention of Accepting Digital Textbooks

    ERIC Educational Resources Information Center

    Joo, Young Ju; Joung, Sunyoung; Choi, Se-Bin; Lim, Eugene; Go, Kyung Yi

    2014-01-01

    The purpose of this study is to explore variables affecting the acceptance of digital textbooks in the elementary school environment and provide basic information and resources to increase the intention of acceptance. Based on the above research purposes, surveys were conducted using Google Docs targeting randomly selected elementary school…

  1. Family Factors Predicting Categories of Suicide Risk

    ERIC Educational Resources Information Center

    Randell, Brooke P.; Wang, Wen-Ling; Herting, Jerald R.; Eggert, Leona L.

    2006-01-01

    We compared family risk and protective factors among potential high school dropouts with and without suicide-risk behaviors (SRB) and examined the extent to which these factors predict categories of SRB. Subjects were randomly selected from among potential dropouts in 14 high schools. Based upon suicide-risk status, 1,083 potential high school…

  2. Evaluation of Residential Consumers Knowledge of Wireless Network Security and Its Correlation with Identity Theft

    ERIC Educational Resources Information Center

    Kpaduwa, Fidelis Iheanyi

    2010-01-01

    This current quantitative correlational research study evaluated the residential consumers' knowledge of wireless network security and its relationship with identity theft. Data analysis was based on a sample of 254 randomly selected students. All the study participants completed a survey questionnaire designed to measure their knowledge of…

  3. Investigating Students' Beliefs about Arabic Language Programs at Kuwait University

    ERIC Educational Resources Information Center

    Al-Shaye, Shaye S.

    2009-01-01

    The current study attempted to identify the beliefs of students in Arabic programs about their chosen programs. To achieve this purpose, a survey was developed to collect the data from randomly selected students in liberal-arts and education-based programs at Kuwait University. The results showed that students were statistically differentiated as a…

  4. Views on the Efficacy and Ethics of Punishment: Results from a National Survey

    ERIC Educational Resources Information Center

    DiGennaro Reed, Florence D.; Lovett, Benjamin J.

    2008-01-01

    Punishment-based interventions are among the most controversial treatments in the applied behavior analysis literature. The controversy concerns both the efficacy and the ethics of punishment. Five hundred randomly selected members of the Association for Behavior Analysis were sent a survey concerning their views on the efficacy and ethics of…

  5. Improved training for target detection using Fukunaga-Koontz transform and distance classifier correlation filter

    NASA Astrophysics Data System (ADS)

    Elbakary, M. I.; Alam, M. S.; Aslan, M. S.

    2008-03-01

    In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position and size of the target is lost. If the target reappears at a later frame, it may not be tracked again because the 3D orientation, size and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is trained manually by selecting a number of chips randomly. This paper introduces a novel idea to eliminate the manual intervention in the training phase of DCCF. Instead of selecting the training chips manually and choosing the number of training chips randomly, we adopted the K-means algorithm to cluster the training frames and, based on the number of clusters, we select the training chips such that there is one training chip for each cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The experiments conducted using real FLIR sequences show results similar to those of the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
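
    A rough sketch of the clustering step described above, grouping candidate training frames with K-means and taking one representative chip per cluster, might look like the following; the feature representation (flattened pixel chips) and the choice of k are assumptions for illustration, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_training_chips(frames, k=5, random_state=0):
    """Cluster candidate training chips and pick one representative per cluster.

    frames: array of shape (n_frames, height, width) holding candidate chips.
    Returns the indices of the chips closest to each cluster centroid, replacing
    the manual, random choice of training chips described in the abstract.
    """
    X = frames.reshape(len(frames), -1).astype(float)          # flatten each chip
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])                # chip nearest the centroid
    return chosen

# Example with synthetic 16x16 "chips"; real use would pass FLIR target chips.
rng = np.random.default_rng(0)
fake_frames = rng.random((40, 16, 16))
print("selected training chip indices:", select_training_chips(fake_frames))
```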

  6. Phenotyping: Using Machine Learning for Improved Pairwise Genotype Classification Based on Root Traits

    PubMed Central

    Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris

    2016-01-01

    Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding – especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are needed to apply them to root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. Through combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of the most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on the top five important root traits (Timp5); the composition of Timp5 differed widely between cultivar pairs. Selecting the top important root traits (Timp) provided significantly improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was the total surface area of lateral roots originating from tap root segments at 0–5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars. PMID:27999587
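
    As a hedged illustration of the pairwise workflow described above (a random forest to rank root traits, then a classifier restricted to the top five traits), the sketch below uses scikit-learn on synthetic data; the trait values, sample sizes, and the use of an SVM for the final pairwise model are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Synthetic stand-in for two cultivars x 36 manually derived root traits.
n_per_cultivar, n_traits = 30, 36
X = np.vstack([rng.normal(0.0, 1.0, (n_per_cultivar, n_traits)),
               rng.normal(0.5, 1.0, (n_per_cultivar, n_traits))])
y = np.array([0] * n_per_cultivar + [1] * n_per_cultivar)

# 1) Rank traits by random forest importance for this cultivar pair.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top5 = np.argsort(rf.feature_importances_)[::-1][:5]            # the "Timp5" traits

# 2) Re-classify the pair using only the top five traits (SVM used here as an example).
svm = SVC(kernel="rbf", gamma="scale")
acc_top5 = cross_val_score(svm, X[:, top5], y, cv=5).mean()
acc_all = cross_val_score(svm, X, y, cv=5).mean()
print(f"top-5-trait accuracy: {acc_top5:.2f}, all-trait accuracy: {acc_all:.2f}")
```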

  7. A Rewritable, Random-Access DNA-Based Storage System.

    PubMed

    Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica

    2015-09-18

    We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.

  8. A Rewritable, Random-Access DNA-Based Storage System

    NASA Astrophysics Data System (ADS)

    Tabatabaei Yazdi, S. M. Hossein; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica

    2015-09-01

    We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.

  9. Convergent evolution of adenosine aptamers spanning bacterial, human, and random sequences revealed by structure-based bioinformatics and genomic SELEX

    PubMed Central

    Vu, Michael M. K.; Jameson, Nora E.; Masuda, Stuart J.; Lin, Dana; Larralde-Ridaura, Rosa; Lupták, Andrej

    2012-01-01

    SUMMARY Aptamers are structured macromolecules in vitro evolved to bind molecular targets, whereas in nature they form the ligand-binding domains of riboswitches. Adenosine aptamers of a single structural family were isolated several times from random pools but they have not been identified in genomic sequences. We used two unbiased methods, structure-based bioinformatics and human genome-based in vitro selection, to identify aptamers that form the same adenosine-binding structure in a bacterium, and several vertebrates, including humans. Two of the human aptamers map to introns of RAB3C and FGD3 genes. The RAB3C aptamer binds ATP with dissociation constants about ten times lower than physiological ATP concentration, while the minimal FGD3 aptamer binds ATP only co-transcriptionally. PMID:23102219

  10. Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?

    USGS Publications Warehouse

    Adkison, Milo D.

    1995-01-01

    Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.

  11. Methods for the guideline-based development of quality indicators--a systematic review

    PubMed Central

    2012-01-01

    Background Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork. PMID:22436067

  12. Computer-aided system of evaluation for population-based all-in-one service screening (CASE-PASS): from study design to outcome analysis with bias adjustment.

    PubMed

    Chen, Li-Sheng; Yen, Amy Ming-Fang; Duffy, Stephen W; Tabar, Laszlo; Lin, Wen-Chou; Chen, Hsiu-Hsi

    2010-10-01

    Population-based routine service screening has gained popularity following an era of randomized controlled trials. The evaluation of these service screening programs is subject to study design, data availability, and precise data analysis for adjusting bias. We developed a computer-aided system that allows the evaluation of population-based service screening to unify these aspects and to facilitate and guide the program assessor in efficiently performing an evaluation. This system underpins two experimental designs, the posttest-only non-equivalent design and the one-group pretest-posttest design, and demonstrates the type of data required at both the population and individual levels. Three major analyses were developed: a cumulative mortality analysis, survival analysis with lead-time adjustment, and self-selection bias adjustment. We used SAS AF software to develop a graphic interface system with a pull-down menu style. We demonstrate the application of this system with data obtained from a Swedish population-based service screening program and a population-based randomized controlled trial for the screening of breast, colorectal, and prostate cancer, and one service screening program for cervical cancer with Pap smears. The system provided automated descriptive results based on the various sources of available data and cumulative mortality curves corresponding to the study designs. The comparison of cumulative survival between clinically and screen-detected cases without a lead-time adjustment is also demonstrated. The intention-to-treat and noncompliance analyses with self-selection bias adjustments are also shown to assess the effectiveness of the population-based service screening program. Model validation consisted of a comparison between our adjusted self-selection bias estimates and the empirical results on effectiveness reported in the literature. We demonstrate a computer-aided system allowing the evaluation of population-based service screening programs with an adjustment for self-selection and lead-time bias. This is achieved by providing a tutorial guide from the study design to the data analysis, with bias adjustment. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Agent Reward Shaping for Alleviating Traffic Congestion

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian

    2006-01-01

    Traffic congestion problems provide a unique environment to study how multi-agent systems promote desired system-level behavior. What is particularly interesting in this class of problems is that no individual action is intrinsically "bad" for the system, but that combinations of actions among agents lead to undesirable outcomes. As a consequence, agents need to learn how to coordinate their actions with those of other agents, rather than learn a particular set of "good" actions. This problem is ubiquitous in various traffic problems, including selecting departure times for commuters, routes for airlines, and paths for data routers. In this paper we present a multi-agent approach to two traffic problems, where for each driver, an agent selects the most suitable action using reinforcement learning. The agent rewards are based on concepts from collectives and aim to provide the agents with rewards that are both easy to learn and that, if learned, lead to good system-level behavior. In the first problem, we study how agents learn the best departure times of drivers in a daily commuting environment and how following those departure times alleviates congestion. In the second problem, we study how agents learn to select desirable routes to improve traffic flow and minimize delays for all drivers. In both sets of experiments, agents using collective-based rewards produced near-optimal performance (93-96% of optimal), whereas agents using system rewards (63-68%) barely outperformed random action selection (62-64%), and agents using local rewards (48-72%) performed worse than random in some instances.
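
    A simplified sketch of the kind of learning loop described above, in which each driver-agent picks a departure time with an epsilon-greedy rule and updates its value estimates from a reward signal, is shown below. The congestion cost, the difference-reward form, and all parameter values are illustrative assumptions rather than the authors' exact formulation.

```python
import random
from collections import defaultdict

SLOTS = 8            # candidate departure time slots
CAPACITY = 15        # illustrative per-slot road capacity
N_AGENTS = 100
EPSILON, ALPHA = 0.1, 0.1

def system_cost(counts):
    # Congestion penalty: quadratic cost once a slot exceeds capacity (assumed form).
    return sum(max(0, c - CAPACITY) ** 2 for c in counts)

# One value table per agent over departure slots.
values = [defaultdict(float) for _ in range(N_AGENTS)]

for episode in range(2000):
    # Each agent chooses a slot epsilon-greedily from its own value table.
    actions = [random.randrange(SLOTS) if random.random() < EPSILON
               else max(range(SLOTS), key=values[i].__getitem__)
               for i in range(N_AGENTS)]
    counts = [actions.count(s) for s in range(SLOTS)]
    g = system_cost(counts)
    for i, a in enumerate(actions):
        # Difference-style reward: system cost with this agent removed, minus the
        # actual cost, so each agent is credited with its own marginal effect on
        # congestion (higher reward = smaller marginal contribution to congestion).
        counts_without = counts.copy()
        counts_without[a] -= 1
        reward = system_cost(counts_without) - g
        values[i][a] += ALPHA * (reward - values[i][a])

print("final congestion cost:", system_cost([actions.count(s) for s in range(SLOTS)]))
```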

  14. Large-scale randomized clinical trials of bioactives and nutrients in relation to human health and disease prevention - Lessons from the VITAL and COSMOS trials.

    PubMed

    Rautiainen, Susanne; Sesso, Howard D; Manson, JoAnn E

    2017-12-29

    Several bioactive compounds and nutrients in foods have physiological properties that are beneficial for human health. While nutrients typically have clear definitions with established levels of recommended intakes, bioactive compounds often lack such a definition. Although a food-based approach is often the optimal approach to ensure adequate intake of bioactives and nutrients, these components are also often produced as dietary supplements. However, many of these supplements are not sufficiently studied and have an unclear role in chronic disease prevention. Randomized trials are considered the gold standard of study designs, but have not been fully applied to understand the effects of bioactives and nutrients. We review the specific role of large-scale trials to test whether bioactives and nutrients have an effect on health outcomes through several crucial components of trial design, including selection of intervention, recruitment, compliance, outcome selection, and interpretation and generalizability of study findings. We will discuss these components in the context of two randomized clinical trials, the VITamin D and OmegA-3 TriaL (VITAL) and the COcoa Supplement and Multivitamin Outcomes Study (COSMOS). We will mainly focus on dietary supplements of bioactives and nutrients while also emphasizing the need for translation and integration with food-based trials that are of vital importance within nutritional research. Copyright © 2017. Published by Elsevier Ltd.

  15. Colorectal Adenomas in Participants of the SELECT Randomized Trial of Selenium and Vitamin E for Prostate Cancer Prevention

    PubMed Central

    Lance, Peter; Alberts, David S.; Thompson, Patricia A.; Fales, Liane; Wang, Fang; Jose, Jerilyn San; Jacobs, Elizabeth T.; Goodman, Phyllis J.; Darke, Amy K.; Yee, Monica; Minasian, Lori; Thompson, Ian M.; Roe, Denise J.

    2017-01-01

    Selenium and vitamin E micronutrients have been advocated for the prevention of colorectal cancer. Colorectal adenoma occurrence was used as a surrogate for colorectal cancer in an ancillary study to the Selenium and Vitamin E Cancer Prevention Trial (SELECT) for prostate cancer prevention. The primary objective was to measure the effect of selenium (as selenomethionine) on colorectal adenomas occurrence, with the effect of vitamin E (as alpha tocopherol) supplementation on colorectal adenoma occurrence considered as a secondary objective. Participants who underwent lower endoscopy while in SELECT were identified from a subgroup of the 35,533 men randomized in the trial. Adenoma occurrence was ascertained from the endoscopy and pathology reports for these procedures. Relative risk (RR) estimates and 95% confidence intervals (CI) of adenoma occurrence were generated comparing those randomized to selenium versus placebo and to vitamin E versus placebo based on the full factorial design. Evaluable endoscopy information was obtained for 6,546 participants, of whom 2,286 had 1+ adenomas. Apart from 21 flexible sigmoidoscopies, all the procedures yielding adenomas were colonoscopies. Adenomas occurred in 34.2% and 35.7%, respectively, of participants whose intervention included or did not include selenium. Compared with placebo, the RR for adenoma occurrence in participants randomized to selenium was 0.96 (95% CI, 0.90–1.02; P = 0.194). Vitamin E did not affect adenoma occurrence compared to placebo (RR = 1.03, 95% CI, 0.96–1.10; P = 0.38). Neither selenium nor vitamin E supplementation can be recommended for colorectal adenoma prevention. PMID:27777235
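
    As an illustration of the relative risk statistic reported above, the sketch below computes an RR and a normal-approximation 95% confidence interval from a generic 2x2 table; the counts are invented for demonstration and are not the SELECT ancillary-study data.

```python
import math

def relative_risk(a, n1, c, n0):
    """RR and 95% CI for exposed (a events of n1) vs. unexposed (c events of n0)."""
    rr = (a / n1) / (c / n0)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)   # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: adenomas among participants with vs. without the supplement.
rr, lo, hi = relative_risk(a=1120, n1=3270, c=1170, n0=3276)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```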

  16. Habitat selection of a declining white-tailed deer herd in the central Black Hills, South Dakota and Wyoming

    NASA Astrophysics Data System (ADS)

    Deperno, Christopher Shannon

    Habitat selection, survival rates, the Black Hills National Forest Habitat Capability Model (HABCAP), and the USDA Forest Service Geographic Information System (GIS) data base were evaluated for a declining white-tailed deer (Odocoileus virginianus dacotensis) herd in the central Black Hills of South Dakota and Wyoming. From July 1993 through July 1996, 73 adult and yearling female and 12 adult and yearling male white-tailed deer were radiocollared and visually monitored. Habitat information was collected at 4,662 white-tailed deer locations and 1,087 random locations. Natural mortality (71%) was the primary cause of female mortality, followed by harvest (22.5%) and accidental causes (6.5%). More females died in spring (53.2%) than in fall (22.6%), winter (14.5%), or summer (9.7%). Male mortality resulted from hunting in fall (66.7%) and natural causes in spring (33.3%). Survival rates for all deer by year were 62.1% in 1993, 51.1% in 1994, 56.4% in 1995, and 53.9% in 1996 and were similar (P = 0.691) across years. During winter, white-tailed deer selected ponderosa pine- (Pinus ponderosa ) deciduous and burned pine cover types. Overstory-understory habitats selected included pine/grass-forb, pine/bearberry (Arctostaphylos uva-ursi), pine/snowberry (Symphoricarpos albus), burned pine/grass-forb, and pine/shrub habitats. Structural stages selected included sapling-pole pine stands with >70% canopy cover, burned pine sapling-pole and saw-timber stands with <40% canopy cover. Bedding locations were represented by saw-timber pine structural stages with >40% canopy cover and all sapling-pole pine structural stages; sapling-pole stands with >70% canopy cover received the greatest use. White-tailed deer primarily fed in pine saw-timber structural stage with less than 40% canopy cover. Overall, selected habitats contained lower amounts of grass/forb, shrubs, and litter than random locations. Male and female deer generally bedded in areas that were characterized by greater horizontal cover than feeding and random sites. When feeding and bedding sites were combined males selected areas that were characterized by greater levels of horizontal cover than females. During summer, white-tailed deer selected pine-deciduous, aspen (Populus tremuloides), aspen-coniferous, spruce (Picea glauca), and spruce-deciduous cover types. Overstory-understory habitats selected included pine/juniper (Juniperus communis), aspen/shrubs, spruce/juniper, and spruce/shrub habitats. Structural stages selected included pine, aspen, and spruce sapling pole stands with all levels (0--40%, 41--70%, 71--100%) of canopy cover. All habitat types (i.e., pine, aspen, and spruce) were used as bedding locations with pine sapling-pole structural stages with >70% canopy cover used most, whereas pine saw-timber structural stage with less than 40% canopy cover was primarily used for feeding. Females bedded in areas that were characterized by greater horizontal cover than feeding and random sites, whereas male feeding sites had greater horizontal cover characteristics than bedding or random locations.

  17. Selection of forest canopy gaps by male Cerulean Warblers in West Virginia

    USGS Publications Warehouse

    Perkins, Kelly A.; Wood, Petra Bohall

    2014-01-01

    Forest openings, or canopy gaps, are an important resource for many forest songbirds, such as Cerulean Warblers (Setophaga cerulea). We examined canopy gap selection by this declining species to determine if male Cerulean Warblers selected particular sizes, vegetative heights, or types of gaps. We tested whether these parameters differed among territories, territory core areas, and randomly placed sample plots. We used enhanced territory mapping techniques (burst sampling) to define habitat use within the territory. Canopy gap densities were higher within core areas of territories than within territories or random plots, indicating that Cerulean Warblers selected habitat within their territories with the highest gap densities. Selection of regenerating gaps with woody vegetation >12 m within the gap, and canopy heights >24 m surrounding the gap, occurred within territory core areas. These findings differed between two sites, indicating that gap selection may vary based on forest structure. Differences were also found regarding the placement of territories with respect to gaps. Larger gaps, such as wildlife food plots, were located on the periphery of territories more often than other types and sizes of gaps, while smaller gaps, such as treefalls, were located within territory boundaries more often than expected. The creation of smaller canopy gaps, <100 m2, within dense stands is likely compatible with forest management for this species.

  18. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) always happen over long-term operation, so extracting the fault features and identifying the fault type have become a key issue for ensuring the security and reliability of power supply. Based on wavelet packet decomposition technology and random forest algorithm, an effective identification system was developed in this paper. First, compared with the incomplete description of Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variable and optimize the feature space. Finally, the approach was verified based on actual HVCB vibration signals by considering six typical fault classes. The comparative experiment results show that the classification accuracy of the proposed method with the origin feature space reached 93.33% and reached up to 95.56% with optimized input feature vector of classifier. This indicates that feature optimization procedure is successful, and the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
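
    The feature-space optimization step described above (train a random forest, rank feature importance, then retrain on the reduced feature set) can be sketched as follows. The wavelet-energy features are replaced here with synthetic data, and the importance threshold is an assumption, so this is an illustration of the general technique rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for wavelet packet time-frequency energy-rate (WTFER) features,
# with six fault classes as in the abstract; real inputs would come from vibration signals.
n_samples, n_features, n_classes = 300, 32, 6
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X[:, :4] += y[:, None] * 0.8          # make a few features genuinely informative

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Keep only features whose importance exceeds the mean importance (assumed rule).
keep = rf.feature_importances_ > rf.feature_importances_.mean()
acc_full = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                           X, y, cv=5).mean()
acc_opt = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                          X[:, keep], y, cv=5).mean()
print(f"kept {keep.sum()} of {n_features} features; accuracy {acc_full:.2f} -> {acc_opt:.2f}")
```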

  19. What can platinum offer yet in the treatment of PS2 NSCLC patients? A systematic review and meta-analysis.

    PubMed

    Bronte, Giuseppe; Rolfo, Christian; Passiglia, Francesco; Rizzo, Sergio; Gil-Bazo, Ignacio; Fiorentino, Eugenio; Cajozzo, Massimo; Van Meerbeeck, Jan P; Lequaglie, Cosimo; Santini, Daniele; Pauwels, Patrick; Russo, Antonio

    2015-09-01

    Randomized phase III trials showed interesting but conflicting results regarding the treatment of the NSCLC PS2 population. This meta-analysis aims to review all randomized trials comparing platinum-based doublets and single agents in NSCLC PS2 patients. Data from all published randomized trials comparing the efficacy and safety of platinum-based doublets to single agents in untreated NSCLC PS2 patients were collected. Pooled ORs were calculated for the 1-year Survival Rate (1y-SR), Overall Response Rate (ORR), and grade 3-4 (G3-4) hematologic toxicities. Six eligible trials (741 patients) were selected. Pooled analysis showed a significant improvement in ORR (OR: 3.243; 95% CI: 1.883-5.583) and 1y-SR (OR: 1.743; 95% CI: 1.203-2.525) in favor of platinum-based doublets. G3-4 hematological toxicities were also more frequent in this group. This meta-analysis suggests that platinum-combination regimens are superior to single agents in terms of both ORR and survival rate, with an increase in severe hematological toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
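
    As an illustration of how a pooled odds ratio of the kind reported above can be computed from per-trial 2x2 tables, the sketch below implements the Mantel-Haenszel estimator on made-up counts; the numbers are invented for demonstration and are not the trial data from this meta-analysis.

```python
# Each tuple is (responders_doublet, non_responders_doublet,
#                responders_single,  non_responders_single) for one trial.
# These counts are invented solely to demonstrate the calculation.
trials = [
    (30, 70, 12, 88),
    (25, 95, 10, 110),
    (18, 52, 8, 62),
]

def mantel_haenszel_or(tables):
    """Pooled odds ratio across 2x2 tables: sum(a*d/n) / sum(b*c/n)."""
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

print(f"pooled OR (Mantel-Haenszel): {mantel_haenszel_or(trials):.2f}")
```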

  20. Effectiveness of Mindfulness-Based Stress Reduction Bibliotherapy: A Preliminary Randomized Controlled Trial.

    PubMed

    Hazlett-Stevens, Holly; Oren, Yelena

    2017-06-01

    This randomized controlled investigation examined the effectiveness of a self-help bibliotherapy format of the evidence-based mindfulness-based stress reduction (MBSR) intervention. College students seeking stress reduction were randomly assigned to a 10-week MBSR bibliotherapy intervention group (n = 47) or a no-treatment control group (n = 45). Self-report measures were collected at baseline and postintervention. A total of 25 bibliotherapy and 43 control group participants provided final data following the intervention period. Compared to the control group, bibliotherapy participants reported increased mindfulness following the intervention. Significant decreases on measures of depression, anxiety, stress, perceived stress, and anxiety sensitivity also were reported postintervention as well as increased quality of life in physical health, psychological, and environmental domains. No statistically significant group effects were found for social relationships quality of life domain, worry, and experiential avoidance measures. This MBSR workbook may provide an acceptable and effective alternative for motivated individuals seeking to reduce stress, at least for a select group of individuals who are willing and able to sustain participation in the intervention. © 2016 Wiley Periodicals, Inc.

  1. A nationwide population-based study on incidence and cost of non-fatal injuries in Iran.

    PubMed

    Hafezi-Nejad, Nima; Rahimi-Movaghar, Afarin; Motevalian, Abbas; Amin-Esmaeili, Masoumeh; Sharifi, Vandad; Hajebi, Ahmad; Radgoodarzi, Reza; Hefazi, Mitra; Eslami, Vahid; Saadat, Soheil; Rahimi-Movaghar, Vafa

    2014-10-01

    Elucidating the epidemiological status of injuries is a critical component of preventive strategies in countries with high incidence of injuries, like Iran. Population-based surveys are able to estimate all types of non-fatal injuries. This study protocol is the core unit in describing Iran's national cost and epidemiology of non-fatal injuries, and also as a guide for other studies. In a cross-sectional study, 1525 primary sampling units are randomly selected with probability proportional to size regarding the number of households in each enumeration area based on Iran's 2006 national census. Six of the households are randomly selected. One member of each household is chosen using Kish Grid tables. In all, 9150 subjects are selected. Data on demographics are collected. For each injury during the past three months, activity, place, mechanism, site, type and the place of treatment are coded to match the International Classification of Diseases, 10th revision 2012 (ICD10-2012) classifications. Subjects are contacted via telephone to obtain data on cost of injury. Finally, sampling weights are calculated so that data for each respondent can be inflated to represent other individuals in Iran. Quality control and quality assurance issues are discussed. Our objectives will describe the present impact and the future priorities of injury prevention in Iran. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  2. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is a popular method for projecting the impact of climate change on ecosystems. An SDM is based on the niche of a certain species, which means that presence point data are essential for running an SDM and characterizing the biological niche of the species. To run an SDM for plants, there are certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to produce vegetation data over large areas. In other words, the exact locations of presence data carry high uncertainties, because presence data sets are selected from polygon and raster datasets. Thus, sampling methods for modeling vegetation presence data should be carefully selected. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from the modeling. At the same time, we included BioCLIM variables and other environmental variables as input data. As a result of this study, despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: the random sampling method showed the lowest ROC value, while the site-index-based sampling method showed the highest ROC value. Through this study, the uncertainties arising from presence data sampling methods and SDMs can be quantified.
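
    A small sketch contrasting two of the presence-data sampling schemes mentioned above, simple random sampling versus stratified sampling over vegetation polygons, is given below (Python rather than the R/BIOMOD2 toolchain used in the study); the strata and sample sizes are illustrative assumptions.

```python
import random
from collections import Counter

# Hypothetical presence points, each tagged with the vegetation polygon (stratum) it came from.
points = [(f"pt{i}", f"polygon{(i % 5) + 1}") for i in range(500)]

def random_sample(pts, n):
    """Simple random sampling: every presence point has equal selection probability."""
    return random.sample(pts, n)

def stratified_sample(pts, n_per_stratum):
    """Stratified sampling: a fixed number of points drawn from each vegetation polygon."""
    by_stratum = {}
    for p in pts:
        by_stratum.setdefault(p[1], []).append(p)
    sample = []
    for stratum, members in by_stratum.items():
        sample.extend(random.sample(members, min(n_per_stratum, len(members))))
    return sample

print("random:    ", Counter(s for _, s in random_sample(points, 50)))
print("stratified:", Counter(s for _, s in stratified_sample(points, 10)))
```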

  3. Prevalence and magnitude of acidosis sequelae to rice-based feeding regimen followed in Tamil Nadu, India.

    PubMed

    Murugeswari, Rathinam; Valli, Chinnamani; Karunakaran, Raman; Leela, Venkatasubramanian; Pandian, Amaresan Serma Saravana

    2018-04-01

    In Tamil Nadu, a southern state of India, rice is readily available at a low cost; hence, it is cooked (cooking akin to human consumption) and fed irrationally to cross-bred dairy cattle with poor productivity. A study was therefore carried out with the objective of examining the prevalence of acidosis sequelae to this rice-based feeding regimen and assessing its magnitude. A survey was conducted in all 32 districts of Tamil Nadu by randomly selecting two blocks per district and five villages from each block. From each selected village, 10 dairy farmers belonging to the unorganized sector, owning one or two cross-bred dairy cows in early and mid-lactation, were randomly selected so that a sample size of 100 farmers per district was maintained. The feeding regimen and milk yield were recorded, and the occurrence of acidosis and incidence of laminitis were ascertained by a veterinarian with a confirmative test to determine the impact of feeding cooked rice to cows. It was observed that 71.5% of farmers in the unorganized sector feed cooked rice to their cattle. The incidence of acidosis increased significantly (p<0.05) from 29.00% in cows fed 0.5 kg of cooked rice to 69.23% in cows fed more than 2.5 kg of cooked rice. However, the incidence of acidosis remained significantly (p<0.05) lower, at 9.9%, in cows fed a regimen without cooked rice, which is suggestive of a correlation between excessive feeding of cooked rice and the onset of acidosis. Further, the noticeable difference in the incidence of acidosis between cows fed cooked rice and those fed without rice, together with the limited intake of oil cake, indicates a mismatch between the energy and protein supply to these cattle. Among cooked rice-based diets, the incidence of laminitis increased progressively (p<0.05) from 9.2% to 37.9% with the increase in the quantum of cooked rice in the diet. The study points out the importance of protein supplementation in rice-based feeding regimens to set right the mismatched supply of nitrogen and fermentable organic matter in the rumen. This research has practical implications for animal health, welfare, nutrition, and management.

  4. Prevalence and magnitude of acidosis sequelae to rice-based feeding regimen followed in Tamil Nadu, India

    PubMed Central

    Murugeswari, Rathinam; Valli, Chinnamani; Karunakaran, Raman; Leela, Venkatasubramanian; Pandian, Amaresan Serma Saravana

    2018-01-01

    Background and Aim In Tamil Nadu, a southern state of India, rice is readily available at a low cost; hence, it is cooked (cooking akin to human consumption) and fed irrationally to cross-bred dairy cattle with poor productivity. A study was therefore carried out with the objective of examining the prevalence of acidosis sequelae to this rice-based feeding regimen and assessing its magnitude. Materials and Methods A survey was conducted in all 32 districts of Tamil Nadu by randomly selecting two blocks per district and five villages from each block. From each selected village, 10 dairy farmers belonging to the unorganized sector, owning one or two cross-bred dairy cows in early and mid-lactation, were randomly selected so that a sample size of 100 farmers per district was maintained. The feeding regimen and milk yield were recorded, and the occurrence of acidosis and incidence of laminitis were ascertained by a veterinarian with a confirmative test to determine the impact of feeding cooked rice to cows. Results It was observed that 71.5% of farmers in the unorganized sector feed cooked rice to their cattle. The incidence of acidosis increased significantly (p<0.05) from 29.00% in cows fed 0.5 kg of cooked rice to 69.23% in cows fed more than 2.5 kg of cooked rice. However, the incidence of acidosis remained significantly (p<0.05) lower, at 9.9%, in cows fed a regimen without cooked rice, which is suggestive of a correlation between excessive feeding of cooked rice and the onset of acidosis. Further, the noticeable difference in the incidence of acidosis between cows fed cooked rice and those fed without rice, together with the limited intake of oil cake, indicates a mismatch between the energy and protein supply to these cattle. Among cooked rice-based diets, the incidence of laminitis increased progressively (p<0.05) from 9.2% to 37.9% with the increase in the quantum of cooked rice in the diet. Conclusion The study points out the importance of protein supplementation in rice-based feeding regimens to set right the mismatched supply of nitrogen and fermentable organic matter in the rumen. This research has practical implications for animal health, welfare, nutrition, and management. PMID:29805211

  5. 2GETHER - The Dual Protection Project: Design and rationale of a randomized controlled trial to increase dual protection strategy selection and adherence among African American adolescent females

    PubMed Central

    Ewing, Alexander C.; Kottke, Melissa J.; Kraft, Joan Marie; Sales, Jessica M.; Brown, Jennifer L.; Goedken, Peggy; Wiener, Jeffrey; Kourtis, Athena P.

    2018-01-01

    Background African American adolescent females are at elevated risk for unintended pregnancy and sexually transmitted infections (STIs). Dual protection (DP) is defined as concurrent prevention of pregnancy and STIs. This can be achieved by abstinence, consistent condom use, or the dual methods of condoms plus an effective non-barrier contraceptive. Previous clinic-based interventions showed short-term effects on increasing dual method use, but evidence of sustained effects on dual method use and decreased incident pregnancies and STIs are lacking. Methods/Design This manuscript describes the 2GETHER Project. 2GETHER is a randomized controlled trial of a multi-component intervention to increase dual protection use among sexually active African American females aged 14–19 years not desiring pregnancy at a Title X clinic in Atlanta, GA. The intervention is clinic-based and includes a culturally tailored interactive multimedia component and counseling sessions, both to assist in selection of a DP method and to reinforce use of the DP method. The participants are randomized to the study intervention or the standard of care, and followed for 12 months to evaluate how the intervention influences DP method selection and adherence, pregnancy and STI incidence, and participants’ DP knowledge, intentions, and self-efficacy. Discussion The 2GETHER Project is a novel trial to reduce unintended pregnancies and STIs among African American adolescents. The intervention is unique in the comprehensive and complementary nature of its components and its individual tailoring of provider-patient interaction. If the trial interventions are shown to be effective, then it will be reasonable to assess their scalability and applicability in other populations. PMID:28007634

  6. Designs for Testing Group-Based Interventions with Limited Numbers of Social Units: The Dynamic Wait-Listed and Regression Point Displacement Designs.

    PubMed

    Wyman, Peter A; Henry, David; Knoblauch, Shannon; Brown, C Hendricks

    2015-10-01

    The dynamic wait-listed design (DWLD) and regression point displacement design (RPDD) address several challenges in evaluating group-based interventions when there is a limited number of groups. Both DWLD and RPDD utilize efficiencies that increase statistical power and can enhance balance between community needs and research priorities. The DWLD blocks on more time units than traditional wait-listed designs, thereby increasing the proportion of a study period during which intervention and control conditions can be compared, and can also improve logistics of implementing intervention across multiple sites and strengthen fidelity. We discuss DWLDs in the larger context of roll-out randomized designs and compare it with its cousin the Stepped Wedge design. The RPDD uses archival data on the population of settings from which intervention unit(s) are selected to create expected posttest scores for units receiving intervention, to which actual posttest scores are compared. High pretest-posttest correlations give the RPDD statistical power for assessing intervention impact even when one or a few settings receive intervention. RPDD works best when archival data are available over a number of years prior to and following intervention. If intervention units were not randomly selected, propensity scores can be used to control for non-random selection factors. Examples are provided of the DWLD and RPDD used to evaluate, respectively, suicide prevention training (QPR) in 32 schools and a violence prevention program (CeaseFire) in two Chicago police districts over a 10-year period. How DWLD and RPDD address common threats to internal and external validity, as well as their limitations, are discussed.

  7. 2GETHER - The Dual Protection Project: Design and rationale of a randomized controlled trial to increase dual protection strategy selection and adherence among African American adolescent females.

    PubMed

    Ewing, Alexander C; Kottke, Melissa J; Kraft, Joan Marie; Sales, Jessica M; Brown, Jennifer L; Goedken, Peggy; Wiener, Jeffrey; Kourtis, Athena P

    2017-03-01

    African American adolescent females are at elevated risk for unintended pregnancy and sexually transmitted infections (STIs). Dual protection (DP) is defined as concurrent prevention of pregnancy and STIs. This can be achieved by abstinence, consistent condom use, or the dual methods of condoms plus an effective non-barrier contraceptive. Previous clinic-based interventions showed short-term effects on increasing dual method use, but evidence of sustained effects on dual method use and decreased incident pregnancies and STIs are lacking. This manuscript describes the 2GETHER Project. 2GETHER is a randomized controlled trial of a multi-component intervention to increase dual protection use among sexually active African American females aged 14-19 years not desiring pregnancy at a Title X clinic in Atlanta, GA. The intervention is clinic-based and includes a culturally tailored interactive multimedia component and counseling sessions, both to assist in selection of a DP method and to reinforce use of the DP method. The participants are randomized to the study intervention or the standard of care, and followed for 12 months to evaluate how the intervention influences DP method selection and adherence, pregnancy and STI incidence, and participants' DP knowledge, intentions, and self-efficacy. The 2GETHER Project is a novel trial to reduce unintended pregnancies and STIs among African American adolescents. The intervention is unique in the comprehensive and complementary nature of its components and its individual tailoring of provider-patient interaction. If the trial interventions are shown to be effective, then it will be reasonable to assess their scalability and applicability in other populations. Published by Elsevier Inc.

  8. Early Colorectal Cancer Detected by Machine Learning Model Using Gender, Age, and Complete Blood Count Data.

    PubMed

    Hornbrook, Mark C; Goshen, Ran; Choman, Eran; O'Keeffe-Rosetti, Maureen; Kinar, Yaron; Liles, Elizabeth G; Rust, Kristal C

    2017-10-01

    Machine learning tools identify patients with blood counts indicating greater likelihood of colorectal cancer and warranting colonoscopy referral. The aim was to validate a machine learning colorectal cancer detection model on a US community-based insured adult population. Eligible colorectal cancer cases (439 females, 461 males) with complete blood counts before diagnosis were identified from Kaiser Permanente Northwest Region's Tumor Registry. Control patients (n = 9108) were randomly selected from KPNW's population who had no cancers, had ≥1 blood count, had continuous enrollment from 180 days prior to the blood count through 24 months after the count, and were aged 40-89. For each control, one blood count was randomly selected as the pseudo-colorectal cancer diagnosis date for matching to cases, and assigned a "calendar year" based on the count date. For each calendar year, 18 controls were randomly selected to match the general enrollment's 10-year age groups and lengths of continuous enrollment. Prediction performance was evaluated by area under the curve, specificity, and odds ratios. Area under the receiver operating characteristics curve for detecting colorectal cancer was 0.80 ± 0.01. At 99% specificity, the odds ratio for association of a high-risk detection score with colorectal cancer was 34.7 (95% CI 28.9-40.4). The detection model had the highest accuracy in identifying right-sided colorectal cancers. ColonFlag® identifies individuals with tenfold higher risk of undiagnosed colorectal cancer at curable stages (0/I/II), flags colorectal tumors 180-360 days prior to usual clinical diagnosis, and is more accurate at identifying right-sided (compared to left-sided) colorectal cancers.
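
    A hedged sketch of the evaluation described above, computing the area under the ROC curve and the odds ratio for a high-risk flag defined at 99% specificity, is shown below on synthetic scores; the score distributions and sample sizes are assumptions, not ColonFlag outputs or KPNW data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic risk scores: controls centred lower than cases (assumed distributions).
scores_controls = rng.normal(0.0, 1.0, 9000)
scores_cases = rng.normal(1.6, 1.0, 900)
y = np.concatenate([np.zeros(9000), np.ones(900)])
scores = np.concatenate([scores_controls, scores_cases])

print("AUC:", round(roc_auc_score(y, scores), 3))

# High-risk flag thresholded at 99% specificity (99th percentile of control scores).
threshold = np.quantile(scores_controls, 0.99)
flag = scores >= threshold
a = np.sum(flag & (y == 1)); b = np.sum(~flag & (y == 1))   # cases flagged / not flagged
c = np.sum(flag & (y == 0)); d = np.sum(~flag & (y == 0))   # controls flagged / not flagged
print("odds ratio at 99% specificity:", round((a * d) / (b * c), 1))
```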

  9. Effect of expanding medicaid for parents on children's health insurance coverage: lessons from the Oregon experiment.

    PubMed

    DeVoe, Jennifer E; Marino, Miguel; Angier, Heather; O'Malley, Jean P; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J; Bailey, Steffani R; Gallia, Charles; Gold, Rachel

    2015-01-01

    In the United States, health insurance is not universal. Observational studies show an association between parents' and children's uninsurance. This association persisted even after expansions in child-only public health insurance. Oregon's randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. The objective was to estimate the effect on a child's health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. The design was a randomized natural experiment assessing the results of Oregon's 2008 Medicaid expansion (the Oregon Experiment). We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child's Medicaid or Children's Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children's coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. The outcome was children's Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent's selection date. In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) for children whose parents were not selected to apply. Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after the parent's selection compared with children whose parents were not selected (adjusted odds ratio [AOR]=1.18; 95% CI, 1.10-1.27). The effect remained significant during months 7 to 12 (AOR=1.11; 95% CI, 1.03-1.19); months 13 to 18 showed a positive but not significant effect (AOR=1.07; 95% CI, 0.99-1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR=2.37; 95% CI, 2.14-2.64). Children's odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents' access to Medicaid coverage and their children's coverage.

  10. Territory and nest site selection patterns by Grasshopper Sparrows in southeastern Arizona

    USGS Publications Warehouse

    Ruth, Janet M.; Skagen, Susan K.

    2017-01-01

    Grassland bird populations are showing some of the greatest rates of decline of any North American birds, prompting measures to protect and improve important habitat. We assessed how vegetation structure and composition, habitat features often targeted for management, affected territory and nest site selection by Grasshopper Sparrows (Ammodramus savannarum ammolegus) in southeastern Arizona. To identify features important to males establishing territories, we compared vegetation characteristics of known territories and random samples on 2 sites over 5 years. We examined habitat selection patterns of females by comparing characteristics of nest sites with territories over 3 years. Males selected territories in areas of sparser vegetation structure and more tall shrubs (>2 m) than random plots on the site with low shrub densities. Males did not select territories based on the proportion of exotic grasses. Females generally located nest sites in areas with lower small shrub (1–2 m tall) densities than territories overall when possible and preferentially selected native grasses for nest construction. Whether habitat selection was apparent depended upon the range of vegetation structure that was available. We identified an upper threshold above which grass structure seemed to be too high and dense for Grasshopper Sparrows. Our results suggest that some management that reduces vegetative structure may benefit this species in desert grasslands at the nest and territory scale. However, we did not assess initial male habitat selection at a broader landscape scale where their selection patterns may be different and could be influenced by vegetation density and structure outside the range of values sampled in this study.

  11. Training set selection for the prediction of essential genes.

    PubMed

    Cheng, Jian; Xu, Zhao; Wu, Wenwu; Zhao, Li; Li, Xiangchen; Liu, Yanlin; Tao, Shiheng

    2014-01-01

    Various computational models have been developed to transfer annotations of gene essentiality between organisms. However, despite the increasing number of microorganisms with well-characterized sets of essential genes, selection of appropriate training sets for predicting the essential genes of poorly-studied or newly sequenced organisms remains challenging. In this study, a machine learning approach was applied reciprocally to predict the essential genes in 21 microorganisms. Results showed that training set selection greatly influenced predictive accuracy. We determined four criteria for training set selection: (1) essential genes in the selected training set should be reliable; (2) the growth conditions in which essential genes are defined should be consistent in training and prediction sets; (3) species used as training set should be closely related to the target organism; and (4) organisms used as training and prediction sets should exhibit similar phenotypes or lifestyles. We then analyzed the performance of an incomplete training set and an integrated training set with multiple organisms. We found that the size of the training set should be at least 10% of the total genes to yield accurate predictions. Additionally, the integrated training sets exhibited a remarkable increase in stability and accuracy compared with single sets. Finally, we compared the performance of the integrated training sets with the four criteria and with random selection. The results revealed that a rational selection of training sets based on our criteria yields better performance than random selection. Thus, our results provide empirical guidance on training set selection for the identification of essential genes on a genome-wide scale.

  12. Spectrum of walk matrix for Koch network and its application

    NASA Astrophysics Data System (ADS)

    Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi

    2015-06-01

    Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectrum of the walk matrix of the Koch network, which displays the prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker going from an arbitrary node to another one selected randomly according to the steady-state distribution, we then derive an explicit solution to the random target access time for random walks on the Koch network. Finally, we corroborate our computation for the eigenvalues by enumerating spanning trees in the Koch network, using the connection between eigenvalues and spanning trees, where a spanning tree of a network is a subgraph of the network that is a tree containing all the nodes.
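
    The quoted link between walk-matrix eigenvalues and the random target access time can be checked numerically on any small connected graph: up to convention, the access time equals the sum of 1/(1 - lambda) over the non-unit eigenvalues (Kemeny's constant). The sketch below uses an arbitrary 4-node graph, not the Koch network, purely to illustrate the computation.

```python
import numpy as np

# Arbitrary small connected graph (NOT the Koch network), given by its adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # walk (transition) matrix D^{-1} A

# The walk matrix of an undirected graph has a real spectrum; sort descending.
eigvals = np.sort(np.linalg.eigvals(P).real)[::-1]

# Random target access time (Kemeny's constant, up to convention):
# sum of 1/(1 - lambda) over all eigenvalues except the unit eigenvalue.
target_access_time = np.sum(1.0 / (1.0 - eigvals[1:]))
print(target_access_time)
```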

  13. Delivering successful randomized controlled trials in surgery: Methods to optimize collaboration and study design.

    PubMed

    Blencowe, Natalie S; Cook, Jonathan A; Pinkney, Thomas; Rogers, Chris; Reeves, Barnaby C; Blazeby, Jane M

    2017-04-01

    Randomized controlled trials in surgery are notoriously difficult to design and conduct due to numerous methodological and cultural challenges. Over the last 5 years, several UK-based surgical trial-related initiatives have been funded to address these issues. These include the development of Surgical Trials Centers and Surgical Specialty Leads (individual surgeons responsible for championing randomized controlled trials in their specialist fields), both funded by the Royal College of Surgeons of England; networks of research-active surgeons in training; and investment in methodological research relating to surgical randomized controlled trials (to address issues such as recruitment, blinding, and the selection and standardization of interventions). This article discusses these initiatives in more detail and provides exemplar cases to illustrate how the methodological challenges have been tackled. The initiatives have surpassed expectations, resulting in a renaissance in surgical research throughout the United Kingdom, such that the number of patients entering surgical randomized controlled trials has doubled.

  14. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    PubMed

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation, to select specific and feasible primers. Experiments were performed for specified PCR product lengths of 150-300 bp and 500-800 bp using three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculated the frequency of optimal primers over a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information to estimate the quality of primer selection. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers that satisfied the specified primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the commonly used Primer3 in terms of method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the primers should be validated carefully in the wet lab to increase the reliability of the method.
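
    Teaching-Learning-Based Optimisation itself is a generic population metaheuristic with a teacher phase and a learner phase. The sketch below shows those two phases on a toy continuous objective; the actual primer-design objective and PCR constraints from the abstract (product length, melting temperature, specificity) are not implemented here, so treat it only as an illustration of the optimiser.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, iters=100, seed=0):
    """Minimise `objective` over box `bounds` with a basic TLBO loop."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    for _ in range(iters):
        # Teacher phase: move the class toward the best solution, away from the mean.
        teacher = pop[np.argmin(fit)]
        tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
        new = pop + rng.random((pop_size, dim)) * (teacher - tf * pop.mean(axis=0))
        new = np.clip(new, lo, hi)
        new_fit = np.apply_along_axis(objective, 1, new)
        better = new_fit < fit
        pop[better], fit[better] = new[better], new_fit[better]
        # Learner phase: each learner interacts with a randomly chosen partner.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            direction = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
            cand = np.clip(pop[i] + rng.random(dim) * direction, lo, hi)
            cand_fit = objective(cand)
            if cand_fit < fit[i]:
                pop[i], fit[i] = cand, cand_fit
    return pop[np.argmin(fit)], fit.min()

# Toy usage: minimise a sphere function in 5 dimensions.
best, val = tlbo(lambda x: np.sum(x ** 2), (np.full(5, -5.0), np.full(5, 5.0)))
print(best, val)
```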

  15. Teacher Education, Motivation, Compensation, Workplace Support, and Links to Quality of Center-Based Child Care and Teachers' Intention to Stay in the Early Childhood Profession

    ERIC Educational Resources Information Center

    Torquati, Julia C.; Raikes, Helen; Huddleston-Casas, Catherine A.

    2007-01-01

    The purposes of this study were to present a conceptual model for selection into the early childhood profession and to test the model using contemporaneous assessments. A stratified random sample of center-based child care providers in 4 Midwestern states (n=964) participated in a telephone interview, and 223 were also assessed with the Early…

  16. Control and design heat flux bending in thermal devices with transformation optics.

    PubMed

    Xu, Guoqiang; Zhang, Haochun; Jin, Yan; Li, Sen; Li, Yao

    2017-04-17

    We propose a fundamental latent function for controlling heat transfer and heat flux density vectors at random positions on thermal materials by applying transformation optics. The expressions for heat flux bending are obtained, and the factors influencing them are investigated in both 2D and 3D cloaking schemes. Under certain conditions, more than one degree of freedom of heat flux bending exists, corresponding to the temperature gradients of the 3D domain. The heat flux path can be controlled in random space based on the geometrical azimuths, radial positions, and thermal conductivity ratios of the selected materials.

  17. Can Random Mutation Mimic Design?: A Guided Inquiry Laboratory for Undergraduate Students

    PubMed Central

    Kalinowski, Steven T.; Taper, Mark L.; Metz, Anneke M.

    2006-01-01

    Complex biological structures, such as the human eye, have been interpreted as evidence for a creator for over three centuries. This raises the question of whether random mutation can create such adaptations. In this article, we present an inquiry-based laboratory experiment that explores this question using paper airplanes as a model organism. The main task for students in this investigation is to figure out how to simulate paper airplane evolution (including reproduction, inheritance, mutation, and selection). In addition, the lab requires students to practice analytic thinking and to carefully delineate the implications of their results. PMID:16951065
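
    A minimal in-silico analogue of the lab's central task (reproduction, inheritance, random mutation, and selection) can be written in a few lines. The "flight distance" fitness function and all parameters below are invented for illustration and are not part of the published exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

def flight_distance(design):
    # Hypothetical fitness: distance peaks at an arbitrary "optimal" wing fold
    # of 0.6 and nose weight of 0.3 (made-up target values).
    target = np.array([0.6, 0.3])
    return 10.0 - np.sum((design - target) ** 2)

pop = rng.uniform(0, 1, (30, 2))                    # 30 designs, 2 heritable traits
for generation in range(50):
    fitness = np.array([flight_distance(d) for d in pop])
    parents = pop[np.argsort(fitness)[-10:]]        # selection: keep the 10 best
    offspring = np.repeat(parents, 3, axis=0)       # reproduction with inheritance
    offspring += rng.normal(0, 0.05, offspring.shape)   # random mutation
    pop = np.clip(offspring, 0, 1)

best = pop[np.argmax([flight_distance(d) for d in pop])]
print("best design after 50 generations:", best)
```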

  18. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  19. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  20. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  1. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  2. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  3. Physiology is rocking the foundations of evolutionary biology.

    PubMed

    Noble, Denis

    2013-08-01

    The 'Modern Synthesis' (Neo-Darwinism) is a mid-20th century gene-centric view of evolution, based on random mutations accumulating to produce gradual change through natural selection. Any role of physiological function in influencing genetic inheritance was excluded. The organism became a mere carrier of the real objects of selection, its genes. We now know that genetic change is far from random and often not gradual. Molecular genetics and genome sequencing have deconstructed this unnecessarily restrictive view of evolution in a way that reintroduces physiological function and interactions with the environment as factors influencing the speed and nature of inherited change. Acquired characteristics can be inherited, and in a small but growing number of cases that inheritance has now been shown to be robust for many generations. The 21st century can look forward to a new synthesis that will reintegrate physiology with evolutionary biology.

  4. Genetic discovery in Xylella fastidiosa through sequence analysis of selected randomly amplified polymorphic DNAs.

    PubMed

    Chen, Jianchi; Civerolo, Edwin L; Jarret, Robert L; Van Sluys, Marie-Anne; de Oliveira, Mariana C

    2005-02-01

    Xylella fastidiosa causes many important plant diseases including Pierce's disease (PD) in grape and almond leaf scorch disease (ALSD). DNA-based methodologies, such as randomly amplified polymorphic DNA (RAPD) analysis, have played key roles in collecting genetic information on the bacterium. This study further analyzed the nucleotide sequences of selected RAPDs from X. fastidiosa strains in conjunction with the available genome sequence databases and unveiled several previously unknown genetic traits. These include a sequence highly similar to those in the phage family of Podoviridae. Genome comparisons among X. fastidiosa strains suggested that the "phage" is currently active. Two other RAPDs were also related to horizontal gene transfer: one was part of a broadly distributed cryptic plasmid and the other was associated with conjugal transfer. One RAPD revealed a genomic rearrangement event among X. fastidiosa PD strains and another identified a single nucleotide polymorphism of evolutionary value.

  5. Does rational selection of training and test sets improve the outcome of QSAR modeling?

    PubMed

    Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander

    2012-10-22

    Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
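
    Of the rational splitting methods named above, Kennard-Stone is the easiest to sketch: it seeds the training set with the two most distant points and then repeatedly adds the point whose nearest selected neighbour is farthest away. The descriptor matrix below is simulated; in a QSAR study it would contain molecular descriptors.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_train):
    """Return (train_indices, test_indices) chosen by the Kennard-Stone algorithm."""
    d = cdist(X, X)
    # Seed with the two most distant points.
    selected = list(np.unravel_index(np.argmax(d), d.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_train:
        # Add the point whose nearest selected neighbour is farthest away.
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(min_d))]
        selected.append(nxt)
        remaining.remove(nxt)
    return np.array(selected), np.array(remaining)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # simulated descriptor matrix
train_idx, test_idx = kennard_stone(X, n_train=80)

# Random split of the same size, for comparison.
perm = rng.permutation(100)
rand_train, rand_test = perm[:80], perm[80:]
print(len(train_idx), len(test_idx), len(rand_train), len(rand_test))
```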

  6. Improvement of a popcorn population using selection indexes from a fourth cycle of recurrent selection program carried out in two different environments.

    PubMed

    Amaral Júnior, A T; Freitas Júnior, S P; Rangel, R M; Pena, G F; Ribeiro, R M; Morais, R C; Schuelter, A R

    2010-03-02

    We estimated genetic gains for popcorn varieties using selection indexes in a fourth cycle of intrapopulation recurrent selection carried out on the campus of the Universidade Estadual do Norte Fluminense. Two hundred full-sib families were obtained from the popcorn population UNB-2U of the third recurrent selection cycle. The progenies were evaluated in a randomized block design with two replications at sites in two different environments: the Colégio Estadual Agrícola Antônio Sarlo, in Campos dos Goytacazes, and the Empresa de Pesquisa Agropecuária do Estado do Rio de Janeiro (PESAGRO-RIO), in Itaocara, both in the State of Rio de Janeiro. There were significant differences between families within sets in all traits, indicating genetic variability that could be exploited in future cycles. Thirty full-sib families were selected to continue the program. The selection indexes used to predict the gains were those of Mulamba and Mock and of Smith and Hazel. The best results were obtained with the Mulamba and Mock index, which allowed the prediction of negative gains for the traits number of diseased ears and ears attacked by pests, number of broken plants and lodging, as well as ears with poor husk cover. It also provided higher gains for popping expansion and grain yield than the other indexes, giving values of 10.55 and 8.50%, respectively, based on tentatively assigned random weights.
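
    The Mulamba and Mock index mentioned above is a rank-summation index: families are ranked trait by trait in the desired direction and the (optionally weighted) ranks are summed, with the smallest totals identifying the candidates to keep. A toy illustration on made-up family means follows; trait names, weights, and values are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
fam = pd.DataFrame({
    "grain_yield": rng.normal(3000, 400, 20),      # higher is better
    "popping_expansion": rng.normal(25, 3, 20),    # higher is better
    "diseased_ears": rng.poisson(4, 20),           # lower is better
})

# Rank each trait so that rank 1 is the most desirable family.
ranks = pd.DataFrame({
    "grain_yield": fam["grain_yield"].rank(ascending=False),
    "popping_expansion": fam["popping_expansion"].rank(ascending=False),
    "diseased_ears": fam["diseased_ears"].rank(ascending=True),
})

# Sum the (weighted) ranks; the smallest index values mark the best families.
weights = {"grain_yield": 1.0, "popping_expansion": 1.0, "diseased_ears": 1.0}
index = sum(weights[t] * ranks[t] for t in ranks.columns)
best_families = index.nsmallest(5).index
print(best_families.tolist())
```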

  7. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    PubMed

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Peculiarities of the statistics of spectrally selected fluorescence radiation in laser-pumped dye-doped random media

    NASA Astrophysics Data System (ADS)

    Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.

    2018-04-01

    We consider the practical realization of a new optical probe method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and the third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using the statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was used as the doping fluorescent agent for the ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for a random medium was reconstructed.

  9. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    PubMed

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing an SBME intervention in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of Bias tool was used to assess risk of bias, intervention reporting was evaluated based on the "template for intervention description and replication" checklist, and methodological quality was evaluated by the Medical Education Research Study Quality Instrument. Report selection and data extraction were done by 2 independent researchers. From 1394 RCTs screened, 68 trials assessed an SBME intervention; these represent one quarter of the EM RCTs in our sample. Cardiopulmonary resuscitation (CPR) is the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66 and 49% of trials. Blinding of participants and assessors was performed correctly in 19 and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4 out of 18. Four percent of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  10. Clustering of financial time series with application to index and enhanced index tracking portfolio

    NASA Astrophysics Data System (ADS)

    Dose, Christian; Cincotti, Silvano

    2005-09-01

    A stochastic-optimization technique based on time series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as a result of an optimization process (asset allocation). The present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
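
    The two-step procedure described above (cluster the assets, then optimise the weights of one representative per cluster) can be sketched as follows. Returns are simulated, a non-negative least-squares fit stands in for the paper's optimisation step, and transaction costs are ignored, as in the abstract; everything here is illustrative rather than the authors' method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_days, n_assets, n_clusters = 500, 30, 8
market = rng.normal(0, 0.01, n_days)
returns = 0.8 * market[:, None] + rng.normal(0, 0.01, (n_days, n_assets))
index_returns = returns.mean(axis=1)             # the index to be tracked

# Step 1: hierarchical clustering on a correlation-based distance, then keep the
# asset most correlated with the index within each cluster.
corr = np.corrcoef(returns.T)
dist = np.sqrt(2.0 * (1.0 - corr))
Z = linkage(dist[np.triu_indices(n_assets, 1)], method="average")
labels = fcluster(Z, t=n_clusters, criterion="maxclust")
selected = []
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    corr_to_index = [np.corrcoef(returns[:, i], index_returns)[0, 1] for i in members]
    selected.append(int(members[int(np.argmax(corr_to_index))]))

# Step 2: non-negative least-squares tracking weights on the selected subset.
w, _ = nnls(returns[:, selected], index_returns)
w = w / w.sum()
tracking_error = np.std(returns[:, selected] @ w - index_returns)
print(selected, round(tracking_error, 5))
```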

  11. Assessing different measures of population-level vaccine protection using a case-control study.

    PubMed

    Ali, Mohammad; You, Young Ae; Kanungo, Suman; Manna, Byomkesh; Deen, Jacqueline L; Lopez, Anna Lena; Wierzba, Thomas F; Bhattacharya, Sujit K; Sur, Dipika; Clemens, John D

    2015-11-27

    Case-control studies have not been examined for their utility in assessing population-level vaccine protection in individually randomized trials. We used the data of a randomized, placebo-controlled trial of a cholera vaccine to compare the results of case-control analyses with those of cohort analyses. Cases of cholera were selected from the trial population followed for three years following dosing. For each case, we selected 4 age-matched controls who had not developed cholera. For each case and control, GIS was used to calculate vaccine coverage of individuals in a surrounding "virtual" cluster. Specific selection strategies were used to evaluate the vaccine protective effects. 66,900 out of 108,389 individuals received two doses of the assigned regimen. For direct protection among subjects in low vaccine coverage clusters, we observed 78% (95% CI: 47-91%) protection in a cohort analysis and 84% (95% CI: 60-94%) in case-control analysis after adjusting for confounding factors. Using our GIS-based approach, estimated indirect protection was 52% (95% CI: 10-74%) in cohort and 76% (95% CI: 47-89%) in case control analysis. Estimates of total and overall effectiveness were similar for cohort and case-control analyses. The findings show that case-control analyses of individually randomized vaccine trials may be used to evaluate direct as well as population-level vaccine protection. Copyright © 2015. Published by Elsevier Ltd.

  12. Holographic memories with encryption-selectable function

    NASA Astrophysics Data System (ADS)

    Su, Wei-Chia; Lee, Xuan-Hao

    2006-03-01

    Volume holographic storage has received increasing attention owing to its potential high storage capacity and access rate. Meanwhile, encrypted holographic memory using a random-phase encoding technique is attractive to the optical community owing to the growing demand for protection of information. In this paper, encryption-selectable holographic storage algorithms in LiNbO3 using angular multiplexing are proposed and demonstrated. Encryption-selectable holographic memory is an advanced concept of secure storage for content protection. It offers the flexibility to optionally encrypt the data during the recording process. In our system design, the function of encryption and non-encryption storage is switched by a random phase pattern and a uniform phase pattern. Based on a 90-degree geometry, the input patterns including the encryption and non-encryption storage are stored via angular multiplexing with reference plane waves at different incident angles. An image is optionally encrypted by sliding the ground glass into one of the recording waves or removing it in each exposure. The ground glass is a key for encryption. It is also an important key available to authorized users for decrypting the encrypted information.

  13. The risk-stratified osteoporosis strategy evaluation study (ROSE): a randomized prospective population-based study. Design and baseline characteristics.

    PubMed

    Rubin, Katrine Hass; Holmberg, Teresa; Rothmann, Mette Juel; Høiberg, Mikkel; Barkmann, Reinhard; Gram, Jeppe; Hermann, Anne Pernille; Bech, Mickael; Rasmussen, Ole; Glüer, Claus C; Brixen, Kim

    2015-02-01

    The risk-stratified osteoporosis strategy evaluation study (ROSE) is a randomized prospective population-based study investigating the effectiveness of a two-step screening program for osteoporosis in women. This paper reports the study design and baseline characteristics of the study population. 35,000 women aged 65-80 years were selected at random from the population in the Region of Southern Denmark and, before inclusion, randomized to either a screening group or a control group. As a first step, a self-administered questionnaire regarding risk factors for osteoporosis based on FRAX® was issued to both groups. As a second step, subjects in the screening group with a 10-year probability of major osteoporotic fractures ≥15% were offered a DXA scan. Patients diagnosed with osteoporosis from the DXA scan were advised to see their GP and discuss pharmaceutical treatment according to Danish National guidelines. The primary outcome is incident clinical fractures as evaluated through annual follow-up using the Danish National Patient Registry. The secondary outcomes are cost-effectiveness, participation rate, and patient preferences. 20,904 (60%) women participated and were included in the baseline analyses (10,411 in the screening group and 10,949 in the control group). The mean age was 71 years. As expected by randomization, the screening and control groups had similar baseline characteristics. Screening for osteoporosis is at present not evidence based according to the WHO screening criteria. The ROSE study is expected to provide knowledge of the effectiveness of a screening strategy that may be implemented in health care systems to prevent fractures.

  14. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    PubMed

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously to select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. From a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method in three strategies considering: 1) performance in each environment (with interaction effect); 2) performance in the mean environment (without interaction effect); and 3) simultaneous selection for grain yield, stability and adaptability. The accuracy obtained (91%) indicates excellent experimental quality and, consequently, safety and credibility in the selection of superior progenies for grain yield. The gain with the selection of the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, the progenies 4, 11, and 3 (selected in all environments and the mean environment and by adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  15. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

    2018-04-01

    Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic success. Different drug-drug interactions can be examined via the drug synergy score. This requires efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet the requirement mentioned above. However, these techniques individually do not provide sufficient accuracy for the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented by considering the drug synergy data. Based on the accuracy of each model, four techniques with high accuracy are selected to develop an ensemble-based machine learning model. These models are Random forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by a biased weighted aggregation (i.e., giving more weight to the models with higher prediction scores) of the predictions from the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms others in terms of accuracy, root mean square error and coefficient of correlation.
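
    The "biased weighted aggregation" idea reduces to weighting each model's predictions by its validation performance. The sketch below uses two scikit-learn regressors as stand-ins, since the fuzzy models named in the abstract (GFS.GCCL, ANFIS, DENFIS) are not available there; the data and the choice of weighting metric are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit each base model and record its validation score.
models = [RandomForestRegressor(n_estimators=200, random_state=0), Ridge(alpha=1.0)]
preds, weights = [], []
for m in models:
    m.fit(X_tr, y_tr)
    p = m.predict(X_val)
    preds.append(p)
    weights.append(max(r2_score(y_val, p), 0.0))   # more weight to the better model

# Biased weighted aggregation of the individual predictions.
weights = np.array(weights) / np.sum(weights)
ensemble_pred = np.average(np.vstack(preds), axis=0, weights=weights)
print(r2_score(y_val, ensemble_pred))
```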

  16. Predicting CD4 count changes among patients on antiretroviral treatment: Application of data mining techniques.

    PubMed

    Kebede, Mihiretu; Zegeye, Desalegn Tigabu; Zeleke, Berihun Megabiaw

    2017-12-01

    To monitor the progress of therapy and disease progression, periodic CD4 counts are required throughout the course of HIV/AIDS care and support. The demand for CD4 count measurement has increased as ART programs have expanded over the last decade. This study aimed to predict CD4 count changes and to identify the predictors of CD4 count changes among patients on ART. A cross-sectional study was conducted at the University of Gondar Hospital using data from 3,104 adult patients on ART with CD4 counts measured at least twice (baseline and most recent). Data were retrieved from the HIV care clinic electronic database and patients' charts. Descriptive data were analyzed by SPSS version 20. Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology was followed to undertake the study. WEKA version 3.8 was used to conduct the predictive data mining. Before building the predictive data mining models, information gain values and correlation-based Feature Selection methods were used for attribute selection. Variables were ranked according to their relevance based on their information gain values. J48, Neural Network, and Random Forest algorithms were tested to assess model accuracies. The median duration of ART was 191.5 weeks. The mean CD4 count change was 243 (SD 191.14) cells per microliter. Overall, 2427 (78.2%) patients had their CD4 counts increased by at least 100 cells per microliter, while 4% had a decline from the baseline CD4 value. Baseline variables including age, educational status, CD8 count, ART regimen, and hemoglobin levels predicted CD4 count changes with predictive accuracies of J48, Neural Network, and Random Forest being 87.1%, 83.5%, and 99.8%, respectively. The Random Forest algorithm had higher accuracy than both J48 and the Artificial Neural Network. The precision, sensitivity and recall values of Random Forest were also more than 99%. Highly accurate prediction results were obtained using the Random Forest algorithm. This algorithm could be used in a low-resource setting to build a web-based prediction model for CD4 count changes. Copyright © 2017 Elsevier B.V. All rights reserved.
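
    A rough Python analogue of the workflow described above (information-gain-style attribute ranking followed by a random forest classifier) is sketched below with scikit-learn on synthetic data; it does not use the study's clinical variables or reproduce its accuracy figures.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the patient data set.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=6, random_state=0)

# Information-gain style ranking via mutual information; keep the top 8 attributes.
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:8]

# Random forest evaluated with 10-fold cross-validation on the selected attributes.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
acc = cross_val_score(rf, X[:, top], y, cv=10).mean()
print(f"10-fold CV accuracy on selected features: {acc:.3f}")
```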

  17. Impact of retreatment with an artemisinin-based combination on malaria incidence and its potential selection of resistant strains: study protocol for a randomized controlled clinical trial

    PubMed Central

    2013-01-01

    Background Artemisinin-based combination therapy is currently recommended by the World Health Organization as first-line treatment of uncomplicated malaria. Recommendations were adapted in 2010 regarding rescue treatment in case of treatment failure. Instead of quinine monotherapy, it should be combined with an antibiotic with antimalarial properties; alternatively, another artemisinin-based combination therapy may be used. However, no clear evidence is yet available to inform these policy changes. The need to provide the policy makers with hard data on the appropriate rescue therapy is obvious. We hypothesize that the same artemisinin-based combination therapy used as rescue treatment is as efficacious as quinine + clindamycin or an alternative artemisinin-based combination therapy, without the risk of selecting drug-resistant strains. Design We embed a randomized, open label, three-arm clinical trial in a longitudinal cohort design following up children with uncomplicated malaria until they are malaria parasite free for 4 weeks. The study is conducted in both the Democratic Republic of Congo and Uganda and performed in three steps. In the first step, the pre-randomized controlled trial (RCT) phase, children aged 12 to 59 months with uncomplicated malaria are treated with the recommended first-line drug and constitute a cohort that is passively followed up for 42 days. If the patients experience an uncomplicated malaria episode between days 14 and 42 of follow-up, they are randomized either to quinine + clindamycin, or an alternative artemisinin-based combination therapy, or the same first-line artemisinin-based combination therapy to be followed up for 28 additional days (step two; RCT phase). If between days 14 and 28 the patients experience a recurrent parasitemia, they are retreated with the recommended first-line regimen and actively followed up for another 28 additional days (step three; post-RCT phase). The same methodology is followed for each subsequent failure. In any case, all patients without an infection at day 28 are classified as treatment successes and reach a study endpoint. The RCT phase allows the comparison of the safety and efficacy of three rescue treatments. The prolonged follow-up of all children until they are 28 days parasite-free allows us to assess epidemiological-, host- and parasite-related predictors for repeated malaria infection. Trial registration NCT01374581 and PACTR201203000351114 PMID:24059911

  18. Piecewise SALT sampling for estimating suspended sediment yields

    Treesearch

    Robert B. Thomas

    1989-01-01

    A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...

  19. Fast Track Randomized Controlled Trial to Prevent Externalizing Psychiatric Disorders: Findings from Grades 3 to 9

    ERIC Educational Resources Information Center

    Journal of the American Academy of Child & Adolescent Psychiatry, 2007

    2007-01-01

    Objective: This study tests the efficacy of the Fast Track Program in preventing antisocial behavior and psychiatric disorders among groups varying in initial risk. Method: Schools within four sites (Durham, NC; Nashville, TN; Seattle, WA; and rural central Pennsylvania) were selected as high-risk institutions based on neighborhood crime and…

  20. Evaluating the Impact of an Academic Teacher Development Program: Practical Realities of an Evidence-Based Study

    ERIC Educational Resources Information Center

    Rathbun, Gail A.; Leatherman, Jane; Jensen, Rebecca

    2017-01-01

    This study aimed to assess the impact of an entire academic teacher development programme at a Midwestern masters comprehensive university in the United States over a period of five years by examining changes in teaching and student outcomes of nine randomly selected programme participants. Researchers analysed syllabi, course evaluations, grade…

  1. Statewide Assessment of Unmet Student Financial Need.

    ERIC Educational Resources Information Center

    Marks, Joseph L.

    The extent to which there may be financial barriers to postsecondary attendance in Georgia was assessed. Attention was limited to in-state undergraduate students who applied for aid during the 1979-80 academic year on the basis of need and who received some form of aid. The findings are based on a sample of over 4,600 randomly selected student…

  2. Health Outcomes in Adolescence: Associations with Family, Friends and School Engagement

    ERIC Educational Resources Information Center

    Carter, Melissa; McGee, Rob; Taylor, Barry; Williams, Sheila

    2007-01-01

    Aim: To examine the associations between connectedness to family and friends, and school engagement, and selected health compromising and health promoting behaviours in a sample of New Zealand adolescents. Methods: A web-based survey was designed and administered to a random sample of 652 Year 11 students aged 16 years from all Dunedin (NZ) high…

  3. Comparative Analysis of Teacher Trainee Students' eLearning Technology (ELT) Readiness towards Promoting Global Curriculum Best Practice

    ERIC Educational Resources Information Center

    Ogwu, Edna N.

    2016-01-01

    This study compares teacher trainee students' (TTSs') electronic learning technology (ELT) readiness and competence, as well as their constraints to ELT readiness, using 373 randomly selected university education students from Botswana and Nigeria. Data were descriptively analysed based on the research objectives and hypotheses using mean…

  4. Self-Perception and Satisfaction with School: Effects of Ninth-Grade Placement.

    ERIC Educational Resources Information Center

    Rodgers, Philip L.; Zsiray, Stephen W., Jr.

    The placement of ninth-grade students has often been a decision based upon available resources and not the best interests of the students. This study provides some evidence of the different influences of three types of ninth-grade placement. Self-perception data were collected for 161 randomly selected eighth-grade students in middle school…

  5. Texas School Survey of Substance Abuse: Grades 7-12. 1992.

    ERIC Educational Resources Information Center

    Liu, Liang Y.; Fredlund, Eric V.

    The 1992 Texas School Survey results for secondary students are based on data collected from a sample of 73,073 students in grades 7 through 12. Students were randomly selected from school districts throughout the state using a multi-stage probability design. The procedure ensured that students living in metropolitan and rural areas of Texas are…

  6. Preliminary Findings on Gender Based Fear Reactions in Communication Apprehension Writings.

    ERIC Educational Resources Information Center

    Stowell, Jessica; Furlong, Cathy

    A study examined some of the reasons behind communication apprehension. The participants were 240 students (120 men and 120 women) from a southern community college enrolled in the basic public speaking course. Their writings were collected over a period of 7 years and selected randomly for analysis. The second week of the semester, students were…

  7. Incidence, Type and Intensity of Abuse in Street Children in India

    ERIC Educational Resources Information Center

    Mathur, Meena; Rathore, Prachi; Mathur, Monika

    2009-01-01

    Objective: The aims of this cross-sectional survey were to examine the prevalence, type and intensity of abuse in street children in Jaipur city, India. Method: Based on purposive random sampling, 200 street children, inclusive of an equal number of boys and girls, were selected from the streets of Jaipur city, India, and administered an in-depth…

  8. Unraveling Probation Officers' Practices with Youths with Histories of Trauma and Stressful Life Events

    ERIC Educational Resources Information Center

    Maschi, Tina; Schwalbe, Craig S.

    2012-01-01

    This study examines how probation officers' (POs) knowledge of juveniles' trauma influences probation practices. The study was conducted with POs who responded to a Web-based survey ("n" = 308). The POs were directed to randomly select one juvenile from their caseload and to complete the Probation Practices Assessment Survey to assess their…

  9. Distance Education Research Priorities for Australia: A Study of the Opinions of Distance Educators and Practitioners.

    ERIC Educational Resources Information Center

    Jegede, Olugbemiro J.

    A group of 56 randomly selected members of the International Council for Distance Education who were based in Australia were surveyed regarding their opinions on distance education research priorities for Australia. A five-page questionnaire was used to gather biographical details about respondents and opinions regarding available level of…

  10. Need to Improve Efficiency of Reserve Training. Report to the Congress.

    ERIC Educational Resources Information Center

    Comptroller General of the U.S., Washington, DC.

    The report discusses the need to vary the training of Reserve and Guard units by skill and readiness requirements and to make more efficient use of training time. It contains recommendations to the Secretaries of Defense, Transportation, Army, Navy, and Air Force. The review was based on questionnaires mailed to 2,209 randomly selected reservists…

  11. The Stability and Reliability of a Modified Work Components Study Questionnaire in the Educational Organization.

    ERIC Educational Resources Information Center

    Miskel, Cecil; Heller, Leonard E.

    The investigation attempted to establish the factorial validity and reliability of an industrial selection device based on Herzberg's theory of work motivation related to the school organization. The questionnaire was reworded to reflect an educational work situation; and a random sample of 197 students, 118 administrators, and 432 teachers was…

  12. Assessing Changes in Socioemotional Adjustment across Early School Transitions--New National Scales for Children at Risk

    ERIC Educational Resources Information Center

    McDermott, Paul A.; Watkins, Marley W.; Rovine, Michael J.; Rikoon, Samuel H.

    2013-01-01

    This article reports the development and evidence for validity and application of the Adjustment Scales for Early Transition in Schooling (ASETS). Based on primary analyses of data from the Head Start Impact Study, a nationally representative sample (N = 3077) of randomly selected children from low-income households is configured to inform…

  13. Design of a Computer-Controlled, Random-Access Slide Projector Interface. Final Report (April 1974 - November 1974).

    ERIC Educational Resources Information Center

    Kirby, Paul J.; And Others

    The design, development, test, and evaluation of an electronic hardware device interfacing a commercially available slide projector with a plasma panel computer terminal is reported. The interface device allows an instructional computer program to select slides for viewing based upon the lesson student situation parameters of the instructional…

  14. Implementation of Structured Inquiry Based Model Learning toward Students' Understanding of Geometry

    ERIC Educational Resources Information Center

    Salim, Kalbin; Tiawa, Dayang Hjh

    2015-01-01

    The purpose of this study is implementation of a structured inquiry learning model in instruction of geometry. The model used is a model with a quasi-experimental study amounted to two classes of samples selected from the population of the ten classes with cluster random sampling technique. Data collection tool consists of a test item…

  15. Gender, Discrimination Beliefs, Group-Based Guilt, and Responses to Affirmative Action for Australian Women

    ERIC Educational Resources Information Center

    Boeckmann, Robert J.; Feather, N. T.

    2007-01-01

    Views of a selection committee's decision to promote a woman over a man on the basis of affirmative action were studied in a random sample of Australians (118 men and 111 women). The relations between perceptions of workplace gender discrimination, feelings of collective responsibility and guilt for discrimination, and judgments of entitlement to…

  16. Multicultural Training in Doctoral School Psychology Programs: In Search of the Model Program?

    ERIC Educational Resources Information Center

    Kearns, Tori; Ford, Laurie; Brown, Kimberly

    The multicultural training (MCT) of APA-accredited School Psychology programs was studied. The sample included faculty and students from five programs nominated for strong MCT and five comparison programs randomly selected from the list of remaining APA-accredited programs. Program training was evaluated using a survey based on APA guidelines for…

  17. Foreign Policy News in the 1980 Presidential Election Campaign.

    ERIC Educational Resources Information Center

    Stovall, James Glen

    A survey was conducted to determine the extent and content of newspaper coverage of foreign policy issues in the 1980 United States presidential campaign. Fifty daily newspapers from every region of the country were selected randomly based on circulation. A list of 757 news events was divided into party and nonparty events, and the party events…

  18. Knowledge of Millennium Development Goals among University Faculty in Uganda and Kenya

    ERIC Educational Resources Information Center

    Wamala, Robert; Nabachwa, Mary Sonko; Chamberlain, Jean; Nakalembe, Eva

    2012-01-01

    This article examines the level of knowledge of the Millennium Development Goals (MDGs) among university faculty. The assessment is based on data from 197 academic unit or faculty heads randomly selected from universities in Uganda and Kenya. Frequency distributions and logistic regression were used for analysis. Slightly more than one in three…

  19. Visual Literacy and the Integration of Parametric Modeling in the Problem-Based Curriculum

    ERIC Educational Resources Information Center

    Assenmacher, Matthew Benedict

    2013-01-01

    This quasi-experimental study investigated the application of visual literacy skills in the form of parametric modeling software in relation to traditional forms of sketching. The study included two groups of high school technical design students. The control and experimental groups involved in the study consisted of two randomly selected groups…

  20. Adoption of Library 2.0 Functionalities by Academic Libraries and Users: A Knowledge Management Perspective

    ERIC Educational Resources Information Center

    Kim, Yong-Mi; Abbas, June

    2010-01-01

    This study investigates the adoption of Library 2.0 functionalities by academic libraries and users through a knowledge management perspective. Based on 230 randomly selected academic library Web sites and 184 users, the authors found RSS and blogs were widely adopted by academic libraries while users widely utilized the bookmark function.…

  1. EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.

    ERIC Educational Resources Information Center

    Frick, Theodore W.; And Others

    Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…

  2. Teaching Evolution at A-Level: Is "Intelligent Design" a Scientific Theory That Merits Inclusion in the Biology Syllabus?

    ERIC Educational Resources Information Center

    Freeland, Peter

    2013-01-01

    Charles Darwin supposed that evolution involved a process of gradual change, generated randomly, with the selection and retention over many generations of survival-promoting features. Some theists have never accepted this idea. "Intelligent design" is a relatively recent theory, supposedly based on scientific evidence, which attempts to…

  3. Global STEM Navigators

    ERIC Educational Resources Information Center

    Dalimonte, Cathy

    2013-01-01

    In the STEM classroom, students can work in collaborative teams to build those essential skills needed for the 21st-century world. In project-based learning (PBL), teams of four to six students are often randomly selected to describe a realistic situation that may occur in today's workplace; this may be done by counting off in fours, fives,…

  4. Effective Learning Systems through Blended Teaching Modules in Adult Secondary Education Systems in Developing Nations: Need for Partnership

    ERIC Educational Resources Information Center

    Ike, Eucharia; Okechukwu, Ibeh Bartholomew

    2015-01-01

    We investigated methodological lessons in randomly selected adult secondary schools to construct a case for international partnership while examining education development in Nigeria. Standard database and web-based searches were conducted for publications between 1985 and 2012 on learning systems. This paper presents its absence and finds a heavy…

  5. Improving Students' Report Writing Quality in an EAP Context: Group versus Individual

    ERIC Educational Resources Information Center

    Ali, Holi Ibrahim Holi

    2012-01-01

    This paper looks into report writing quality on both individual and group bases in an EAP context. A total of 100 EFL students at post foundation level in a University College in Oman, and 15 EFL teachers were selected randomly. Questionnaires were administered to investigate their perceptions and experiences with report writing quality on…

  6. Results of a Survey about Homework and Homework Hotlines for Elementary School Students.

    ERIC Educational Resources Information Center

    Singh, Bulwant

    Reported are responses of fourth-, fifth-, and sixth-grade students, their parents and teachers to a survey conducted to determine the need for a homework hotline. Discussion is based on data from 379 randomly selected parents of students in intermediate elementary grades of 21 elementary schools, 333 elementary school teachers, and 392 randomly…

  7. Knowledge about HIV and AIDS among Young South Africans in the Capricorn District, Limpopo Province

    ERIC Educational Resources Information Center

    Melwa, Irene T.; Oduntan, Olalekan A.

    2012-01-01

    Objective: To assess the basic knowledge about HIV and AIDS among young South Africans in the Capricorn District of Limpopo Province, South Africa. Design: A questionnaire-based cohort study, involving data collection from senior high school students. Setting: Randomly selected high schools in the Capricorn District, Limpopo Province, South…

  8. Is Littoral Habitat Affected by Residential Development and Land Use in Watersheds of Wisconsin Lakes?

    Treesearch

    Martin J. Jennings; Edward E. Emmons; Gene R. Hatzenbeler; Clayton Edwards; Michael A. Bozek

    2003-01-01

    We measured differences in nearshore littoral zone habitat among lakes with different amounts of residential development and different patterns of watershed land use. Sampling stations were located at randomly selected sites within the nearshore littoral zone of limnologically similar lakes. An index of development density (based on counts of residential structures)...

  9. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307

  11. Fluorescence Excitation Spectroscopy for Phytoplankton Species Classification Using an All-Pairs Method: Characterization of a System with Unexpectedly Low Rank.

    PubMed

    Rekully, Cameron M; Faulkner, Stefan T; Lachenmyer, Eric M; Cunningham, Brady R; Shaw, Timothy J; Richardson, Tammi L; Myrick, Michael L

    2018-03-01

    An all-pairs method is used to analyze phytoplankton fluorescence excitation spectra. An initial set of nine phytoplankton species is analyzed in pairwise fashion to select two optical filter sets, and then the two filter sets are used to explore variations among a total of 31 species in a single-cell fluorescence imaging photometer. Results are presented in terms of pair analyses; we report that 411 of the 465 possible pairings of the larger group of 31 species can be distinguished using the initial nine-species-based selection of optical filters. A bootstrap analysis based on the larger data set shows that the distribution of possible pair separation results based on a randomly selected nine-species initial calibration set is strongly peaked in the 410-415 pair separation range, consistent with our experimental result. Further, the result for filter selection using all 31 species is also 411 pair separations. The set of phytoplankton fluorescence excitation spectra is intuitively high in rank due to the number and variety of pigments that contribute to the spectrum. However, the results in this report are consistent with an effective rank, as determined by a variety of heuristic and statistical methods, in the range of 2-3. These results are reviewed in consideration of how consistent the filter selections are from model to model for the data presented here. We discuss the common observation that rank is generally found to be relatively low even in many seemingly complex circumstances, so that it may be productive to assume a low rank from the beginning. If a low-rank hypothesis is valid, then relatively few samples are needed to explore an experimental space. Under very restricted circumstances for uniformly distributed samples, the minimum number for an initial analysis might be as low as 8-11 random samples for 1-3 factors.
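
    A minimal sketch of one common heuristic for the "effective rank" mentioned here: count the singular components needed to explain most of the variance of the spectra. The data and the 95% threshold are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def effective_rank(spectra, threshold=0.95):
    """Smallest number of singular components explaining `threshold` of the variance."""
    spectra = spectra - spectra.mean(axis=0)        # centre each wavelength channel
    s = np.linalg.svd(spectra, compute_uv=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(explained, threshold) + 1)

rng = np.random.default_rng(1)
# Placeholder: 31 "species" spectra built from 2 latent components plus small noise.
latent = rng.normal(size=(31, 2)) @ rng.normal(size=(2, 100))
print(effective_rank(latent + 0.01 * rng.normal(size=latent.shape)))   # usually 2 here
```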

  12. THE SELECTION OF A NATIONAL RANDOM SAMPLE OF TEACHERS FOR EXPERIMENTAL CURRICULUM EVALUATION.

    ERIC Educational Resources Information Center

    WELCH, WAYNE W.; AND OTHERS

    MEMBERS OF THE EVALUATION SECTION OF HARVARD PROJECT PHYSICS, DESCRIBING WHAT IS SAID TO BE THE FIRST ATTEMPT TO SELECT A NATIONAL RANDOM SAMPLE OF (HIGH SCHOOL PHYSICS) TEACHERS, LIST THE STEPS AS (1) PURCHASE OF A LIST OF PHYSICS TEACHERS FROM THE NATIONAL SCIENCE TEACHERS ASSOCIATION (MOST COMPLETE AVAILABLE), (2) SELECTION OF 136 NAMES BY A…

  13. Literature-based discovery of diabetes- and ROS-related targets

    PubMed Central

    2010-01-01

    Background Reactive oxygen species (ROS) are known mediators of cellular damage in multiple diseases including diabetic complications. Despite its importance, no comprehensive database is currently available for the genes associated with ROS. Methods We present ROS- and diabetes-related targets (genes/proteins) collected from the biomedical literature through a text mining technology. A web-based literature mining tool, SciMiner, was applied to 1,154 biomedical papers indexed with diabetes and ROS by PubMed to identify relevant targets. Over-represented targets in the ROS-diabetes literature were obtained through comparisons against randomly selected literature. The expression levels of nine genes, selected from the top ranked ROS-diabetes set, were measured in the dorsal root ganglia (DRG) of diabetic and non-diabetic DBA/2J mice in order to evaluate the biological relevance of literature-derived targets in the pathogenesis of diabetic neuropathy. Results SciMiner identified 1,026 ROS- and diabetes-related targets from the 1,154 biomedical papers (http://jdrf.neurology.med.umich.edu/ROSDiabetes/). Fifty-three targets were significantly over-represented in the ROS-diabetes literature compared to randomly selected literature. These over-represented targets included well-known members of the oxidative stress response including catalase, the NADPH oxidase family, and the superoxide dismutase family of proteins. Eight of the nine selected genes exhibited significant differential expression between diabetic and non-diabetic mice. For six genes, the direction of expression change in diabetes paralleled enhanced oxidative stress in the DRG. Conclusions Literature mining compiled ROS-diabetes related targets from the biomedical literature and led us to evaluate the biological relevance of selected targets in the pathogenesis of diabetic neuropathy. PMID:20979611
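
    The over-representation comparison described here can be illustrated with a one-sided Fisher's exact test on counts of papers mentioning a target in the ROS-diabetes set versus an equally sized, randomly selected background set; the counts below are invented for illustration only:

```python
from scipy.stats import fisher_exact

# Hypothetical counts: papers mentioning a given target in the ROS-diabetes
# literature set versus a randomly selected background set of the same size.
target_in_set, set_size = 40, 1154
target_in_background, background_size = 8, 1154

table = [[target_in_set, set_size - target_in_set],
         [target_in_background, background_size - target_in_background]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(odds_ratio, p_value)   # small p-value suggests over-representation
```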

  14. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
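
    A minimal numerical sketch of the iterative-perturbation idea described here: the unperturbed stiffness is factorized once and reused as the preconditioner for the perturbed solve. The matrices are generic placeholders, not the turbine-blade model:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def perturbation_solve(K0, dK, f, tol=1e-10, max_iter=200):
    """Solve (K0 + dK) u = f by the fixed-point iteration u <- K0^{-1} (f - dK u),
    reusing the factorization of the unperturbed stiffness K0 throughout."""
    factor = cho_factor(K0)            # factorize the unperturbed stiffness once
    u = cho_solve(factor, f)           # unperturbed solution as the starting point
    for _ in range(max_iter):
        u_new = cho_solve(factor, f - dK @ u)
        if np.linalg.norm(u_new - u) <= tol * np.linalg.norm(u_new):
            return u_new
        u = u_new
    return u

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 20))
K0 = A @ A.T + 20 * np.eye(20)         # symmetric positive-definite "stiffness"
dK = 0.05 * (A + A.T)                  # small random perturbation
f = rng.normal(size=20)
u = perturbation_solve(K0, dK, f)
print(np.allclose((K0 + dK) @ u, f))   # True: the iteration solved the perturbed system
```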

  15. A telephonic mindfulness-based intervention for persons with sickle cell disease: study protocol for a randomized controlled trial.

    PubMed

    Williams, Hants; Silva, Susan; Simmons, Leigh Ann; Tanabe, Paula

    2017-05-15

    One of the most difficult symptoms for persons with sickle cell disease (SCD) to manage is chronic pain. Chronic pain impacts approximately one-third of persons with SCD and is associated with increased pain intensity, pain behavior, and frequency and duration of hospital visits. A promising category of nonpharmacological interventions for managing both physical and affective components of pain is mindfulness-based interventions (MBIs). The primary aim of this study is to conduct a randomized controlled study to evaluate the acceptability and feasibility, as well as to determine the preliminary efficacy, of a telephonic MBI for adults with SCD who have chronic pain. We will enroll 60 adult patients with SCD and chronic pain at an outpatient comprehensive SCD center in the southeastern United States. Patients will be randomized to either an MBI or a wait-listed control group. The MBI group will complete a six-session (60 minutes), telephonically delivered, group-based MBI program. The feasibility, acceptability, and efficacy of the MBI regarding pain catastrophizing will be assessed by administering questionnaires at baseline and weeks 1, 3, and 6. In addition, ten randomly selected MBI participants will complete semistructured interviews to help determine intervention acceptability. In this study protocol, we report detailed methods of the randomized controlled trial. Findings of this study will be useful to determine the acceptability, feasibility, and efficacy of an MBI for persons with SCD and chronic pain. ClinicalTrials.gov identifier: NCT02394587. Registered on 9 February 2015.

  16. An improved label propagation algorithm based on node importance and random walk for community detection

    NASA Astrophysics Data System (ADS)

    Ma, Tianren; Xia, Zhengyou

    2017-05-01

    Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, leaving much of the information about nodes and edges unused. Moreover, these algorithms do not consider each node on its own merits. The label propagation algorithm (LPA) is a near-linear-time algorithm for finding communities in a network, and it has attracted many researchers owing to its high efficiency. In recent years, many improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. First, a node importance score is calculated for every node, and the nodes in the network are sorted in descending order of importance. On the basis of random walks, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in LPA. Second, a new metric IAS (importance and similarity) is calculated from node importance and the similarity matrix, which is used to avoid the random selection in the original LPA and to improve the stability of the algorithm. Finally, the algorithm is tested on real-world and synthetic networks. The results show that this algorithm performs better than existing methods in finding community structure.
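
    For reference, a minimal sketch of the baseline label propagation algorithm whose random tie-breaking the proposed NILPA replaces with importance- and similarity-based choices; this toy example is not the NILPA implementation:

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_sweeps=100):
    """Plain LPA: each node repeatedly adopts the most frequent label among its
    neighbours, breaking ties at random, until no label changes."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}              # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_sweeps):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            choice = rng.choice([lab for lab, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:
            break
    return labels

# Two triangles joined by a single edge: LPA usually recovers the two communities.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(label_propagation(adj))
```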

  17. Random forest feature selection, fusion and ensemble strategy: Combining multiple morphological MRI measures to discriminate among healthy elderly, MCI, cMCI and Alzheimer's disease patients: From the Alzheimer's disease neuroimaging initiative (ADNI) database.

    PubMed

    Dimitriadis, S I; Liparas, Dimitris; Tsolaki, Magda N

    2018-05-15

    In the era of computer-assisted diagnostic tools for various brain diseases, Alzheimer's disease (AD) covers a large percentage of neuroimaging research, with the main scope being its use in daily practice. However, there has been no study attempting to simultaneously discriminate among Healthy Controls (HC), early mild cognitive impairment (MCI), late MCI (cMCI) and stable AD, using features derived from a single modality, namely MRI. Based on preprocessed MRI images from the organizers of a neuroimaging challenge, we attempted to quantify the prediction accuracy of multiple morphological MRI features to simultaneously discriminate among HC, MCI, cMCI and AD. We explored the efficacy of a novel scheme that includes multiple feature selections via Random Forest from subsets of the whole set of features (e.g. whole set, left/right hemisphere etc.), Random Forest classification using a fusion approach and ensemble classification via majority voting. From the ADNI database, 60 HC, 60 MCI, 60 cMCI and 60 AD were used as a training set with known labels. An extra dataset of 160 subjects (HC: 40, MCI: 40, cMCI: 40 and AD: 40) was used as an external blind validation dataset to evaluate the proposed machine learning scheme. In the second blind dataset, we achieved a four-class classification accuracy of 61.9% by combining MRI-based features with a Random Forest-based Ensemble Strategy. We achieved the best classification accuracy of all teams that participated in this neuroimaging competition. The results demonstrate the effectiveness of the proposed scheme in simultaneously discriminating among four groups using morphological MRI features for the very first time in the literature. Hence, the proposed machine learning scheme can be used to define single and multi-modal biomarkers for AD. Copyright © 2017 Elsevier B.V. All rights reserved.
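
    A hedged sketch of the general pattern described here, training one random forest per feature subset and combining predictions by majority vote; the data, subset definitions and class labels are placeholders, not the ADNI features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 30))
y = rng.integers(0, 4, size=240)               # placeholder 4-class labels (HC/MCI/cMCI/AD)

# Hypothetical feature subsets (e.g. whole set, "left", "right"); indices are illustrative.
subsets = [np.arange(30), np.arange(0, 15), np.arange(15, 30)]
models = [RandomForestClassifier(n_estimators=300, random_state=i).fit(X[:, s], y)
          for i, s in enumerate(subsets)]

X_new = rng.normal(size=(10, 30))
votes = np.stack([m.predict(X_new[:, s]) for m, s in zip(models, subsets)])
final = np.array([np.bincount(col).argmax() for col in votes.T])   # majority vote
print(final)
```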

  18. Accurate Diabetes Risk Stratification Using Machine Learning: Role of Missing Value and Outliers.

    PubMed

    Maniruzzaman, Md; Rahman, Md Jahanur; Al-MehediHasan, Md; Suri, Harman S; Abedin, Md Menhazul; El-Baz, Ayman; Suri, Jasjit S

    2018-04-10

    Diabetes mellitus is a group of metabolic diseases in which blood sugar levels are too high. About 8.8% of the world's population was diabetic in 2017. It is projected that this will reach nearly 10% by 2045. The major challenge is that applying machine learning-based classifiers to such data sets for risk stratification leads to lower performance. Thus, our objective is to develop an optimized and robust machine learning (ML) system under the assumption that missing values or outliers, if replaced by a median configuration, will yield higher risk stratification accuracy. This ML-based risk stratification is designed, optimized and evaluated, where: (i) the features are extracted and optimized from six feature selection techniques (random forest, logistic regression, mutual information, principal component analysis, analysis of variance, and Fisher discriminant ratio) and combined with ten different types of classifiers (linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, Gaussian process classification, support vector machine, artificial neural network, Adaboost, logistic regression, decision tree, and random forest) under the hypothesis that both missing values and outliers when replaced by computed medians will improve the risk stratification accuracy. The Pima Indian diabetes dataset (768 patients: 268 diabetic and 500 controls) was used. Our results demonstrate that replacing the missing values and outliers by group median and median values, respectively, and further using the combination of random forest feature selection and random forest classification yields an accuracy, sensitivity, specificity, positive predictive value, negative predictive value and area under the curve of 92.26%, 95.96%, 79.72%, 91.14%, 91.20%, and 0.93, respectively. This is an improvement of 10% over previously developed techniques published in the literature. The system was validated for its stability and reliability. The RF-based model showed the best performance when outliers are replaced by median values.
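
    A minimal sketch of the pipeline described here, assuming a generic tabular dataset: group-wise median imputation of missing values, random-forest feature ranking, then random-forest classification. Outlier replacement by medians, also described above, would follow the same pattern and is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(768, 8))
y = rng.integers(0, 2, size=768)               # placeholder diabetic / control labels
X[rng.random(X.shape) < 0.05] = np.nan         # inject some missing values

# Replace missing values with the column median computed within each class group.
for cls in np.unique(y):
    rows = y == cls
    block = X[rows]
    med = np.nanmedian(block, axis=0)
    nan_r, nan_c = np.where(np.isnan(block))
    block[nan_r, nan_c] = med[nan_c]
    X[rows] = block

# Rank features with one forest, keep the top half, classify with another forest.
ranker = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(ranker.feature_importances_)[::-1][:4]
acc = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=1),
                      X[:, top], y, cv=5).mean()
print("cross-validated accuracy:", round(acc, 3))
```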

  19. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-04-29

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil, using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, considering the mean performance of the genotypes in all environments (no interaction effect); the performance in each environment (with an interaction effect); and the simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection.

  20. Endocytotic uptake of HPMA-based polymers by different cancer cells: impact of extracellular acidosis and hypoxia

    PubMed Central

    Gündel, Daniel; Allmeroth, Mareli; Reime, Sarah; Zentel, Rudolf; Thews, Oliver

    2017-01-01

    Background Polymeric nanoparticles make it possible to selectively transport chemotherapeutic drugs to the tumor tissue. These nanocarriers have to be taken up into the cells to release the drug. In addition, tumors often show pathological metabolic characteristics (hypoxia and acidosis) which might affect the polymer endocytosis. Materials and methods Six different N-(2-hydroxypropyl)methacrylamide (HPMA)-based polymer structures (homopolymer as well as random and block copolymers with lauryl methacrylate containing hydrophobic side chains) varying in molecular weight and size were analyzed in two different tumor models. The cellular uptake of fluorescence-labeled polymers was measured under hypoxic (pO2 ≈1.5 mmHg) and acidic (pH 6.6) conditions. By using specific inhibitors, different endocytotic routes (macropinocytosis and clathrin-mediated, dynamin-dependent, cholesterol-dependent endocytosis) were analyzed separately. Results The current results revealed that the polymer uptake depends on the molecular structure, molecular weight and tumor line used. In AT1 cells, the uptake of random copolymer was five times stronger than that of the homopolymer, whereas in Walker-256 cells, the uptake of all polymers was much stronger, but this was independent of the molecular structure and size. Acidosis increased the uptake of random copolymer in AT1 cells but reduced the intracellular accumulation of homopolymer and block copolymer. Hypoxia reduced the uptake of all polymers in Walker-256 cells. Hydrophilic polymers (homopolymer and block copolymer) were taken up by all endocytotic routes studied, whereas the more lipophilic random copolymer seemed to be taken up preferentially by cholesterol- and dynamin-dependent endocytosis. Conclusion The study indicates that numerous parameters of the polymer (structure, size) and of the tumor (perfusion, vascular permeability, pH, pO2) modulate drug delivery, which makes it difficult to select the appropriate polymer for the individual patient. PMID:28831253

  2. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets, suitable for algorithm comparison.
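
    A small sketch of the contrast described here, using scikit-learn's GroupKFold as a stand-in for subtype-aware ("supervised") cross-validation: when class labels track group signatures, random k-fold looks optimistic while the group-held-out estimate drops toward chance. The data are synthetic and the group structure is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
groups = rng.integers(0, 6, size=300)          # hypothetical "subtype" of each protein
signatures = rng.normal(size=(6, 10))          # each subtype has its own feature signature
X = signatures[groups] + rng.normal(size=(300, 10))
y = groups % 2                                  # class label determined by subtype

clf = LogisticRegression(max_iter=1000)
random_cv = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()
grouped_cv = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=groups).mean()
print(round(random_cv, 2), round(grouped_cv, 2))   # subtype-held-out score is much lower
```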

  3. Point process statistics in atom probe tomography.

    PubMed

    Philippe, T; Duguay, S; Grancher, G; Blavette, D

    2013-09-01

    We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to nearest neighbour is an attractive approach for exhibiting a non-random atomic distribution. A χ² test based on distance distributions to the nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1 NN method) and the pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection has also been illustrated. These statistical tools have been applied to APT experiments on microelectronics materials. Copyright © 2012 Elsevier B.V. All rights reserved.
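
    A hedged sketch of a χ² test of this general kind, comparing observed first-nearest-neighbour distances against those simulated under complete spatial randomness; the binning scheme and box geometry are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import chisquare

def nn_distances(points):
    """Distance from each point to its nearest neighbour."""
    d, _ = cKDTree(points).query(points, k=2)   # column 0 is the point itself
    return d[:, 1]

def randomness_test(points, box, n_sim=200, bins=10, seed=0):
    """Chi-square comparison of observed 1 NN distances with those expected under
    complete spatial randomness (same number of points in the same box)."""
    rng = np.random.default_rng(seed)
    obs = nn_distances(points)
    sims = np.concatenate([nn_distances(rng.uniform(0, box, size=points.shape))
                           for _ in range(n_sim)])
    edges = np.quantile(sims, np.linspace(0, 1, bins + 1))   # equal-probability bins
    edges[0] = 0.0
    observed, _ = np.histogram(np.clip(obs, edges[0], edges[-1]), bins=edges)
    expected = np.full(bins, len(points) / bins)
    return chisquare(observed, expected)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 50, size=(2000, 3))        # a random "solid solution"
print(randomness_test(pts, box=50))             # large p-value: no detectable clustering
```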

  4. Evaluation of some random effects methodology applicable to bird ringing data

    USGS Publications Warehouse

    Burnham, K.P.; White, Gary C.

    2002-01-01

    Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1,..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with variance E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional sampling-variance component, var(Ŝ | S). Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of the estimator of σ², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1,..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed effects MLE for the Si.
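
    A generic empirical-Bayes sketch of the shrinkage idea described here (not MARK's estimator): each annual MLE is pulled toward the overall mean, more strongly when its sampling variance is large relative to the process variance σ². The survival estimates and variances below are invented:

```python
import numpy as np

def shrink(S_hat, var_hat, sigma2):
    """Shrink year-specific MLEs toward an inverse-variance-weighted overall mean;
    the weight kept on each MLE grows with the process variance sigma2 and shrinks
    with that estimate's sampling variance."""
    mean_S = np.average(S_hat, weights=1.0 / (sigma2 + var_hat))
    w = sigma2 / (sigma2 + var_hat)
    return mean_S + w * (S_hat - mean_S)

S_hat = np.array([0.62, 0.55, 0.71, 0.48, 0.66])   # hypothetical annual survival MLEs
var_hat = np.array([0.004, 0.006, 0.003, 0.008, 0.005])
print(shrink(S_hat, var_hat, sigma2=0.002))
```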

  5. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model, and the second part can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a full automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. The error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were detected for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability in detection of curvature in strongly pathological images that surpasses previously introduced methods; the method is also fast, compared to the relatively low speed of similar methods.

  6. Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach

    PubMed Central

    Kudisthalert, Wasu

    2018-01-01

    Machine learning techniques are becoming popular in virtual screening tasks. One powerful machine learning algorithm is the Extreme Learning Machine (ELM), which has been used in many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation functions in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint gives the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
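
    For context, a minimal sketch of a conventional ELM, whose sensitivity to the random hidden-layer weights motivates the clustering-based weighting proposed here; the descriptors and labels are placeholders, and the similarity-coefficient activations of WS-ELM are not reproduced:

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    """Conventional ELM: random input-to-hidden weights, nonlinear hidden activations,
    and output weights obtained in one step by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16))                  # placeholder molecular descriptors
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # placeholder active / inactive labels
model = elm_fit(X[:200], y[:200])
acc = np.mean(np.sign(elm_predict(X[200:], model)) == y[200:])
print(round(acc, 3))
```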

  7. An adaptive incremental approach to constructing ensemble classifiers: Application in an information-theoretic computer-aided decision system for detection of masses in mammograms

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2009-01-01

    Ensemble classifiers have been shown to be effective in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196
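
    A small sketch of the two resampling schemes named here, assuming only a case count and an ensemble size (illustrative, not the authors' CAD code):

```python
import numpy as np

def random_division(n_cases, n_classifiers, rng):
    """Split the case base into disjoint subsets, one per ensemble member."""
    idx = rng.permutation(n_cases)
    return np.array_split(idx, n_classifiers)

def random_selection(n_cases, n_classifiers, subset_size, rng):
    """Give each ensemble member an independent random sample; subsets may overlap."""
    return [rng.choice(n_cases, size=subset_size, replace=False)
            for _ in range(n_classifiers)]

rng = np.random.default_rng(0)
print([len(s) for s in random_division(100, 4, rng)])
print([sorted(s)[:5] for s in random_selection(100, 4, 25, rng)])
```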

  8. Extracorporeal shock wave therapy for calcific and noncalcific tendonitis of the rotator cuff: a systematic review.

    PubMed

    Harniman, Elaine; Carette, Simon; Kennedy, Carol; Beaton, Dorcas

    2004-01-01

    The authors conducted a systematic review to assess the effectiveness of extracorporeal shock wave therapy (ESWT) for the treatment of calcific and noncalcific tendonitis of the rotator cuff. Conservative treatment for rotator cuff tendonitis includes physiotherapy, nonsteroidal antiinflammatory drugs, and corticosteroid injections. If symptoms persist with conservative treatment, surgery is often considered. Extracorporeal shock wave therapy has been suggested as a treatment alternative for chronic rotator cuff tendonitis, which may decrease the need for surgery. Articles for this review were identified by electronically searching Medline, EMBASE, Cumulative Index to Nursing & Allied Health Literature (CINAHL), and Evidence Based Medicine (EBM) and hand-screening references. Two reviewers selected the trials that met the inclusion criteria, extracted the data, and assessed the methodological quality of the selected trials. Finally, the strength of scientific evidence was appraised. Evidence was classified as strong, moderate, limited, or conflicting. Sixteen trials met the inclusion criteria. There were only five randomized, controlled trials and all involved chronic (≥3 months) conditions, three for calcific tendonitis and two for noncalcific tendonitis. For randomized, controlled trials, two (40%) were of high quality, one (33%) for calcific tendonitis and one (50%) for noncalcific tendonitis. The 11 nonrandomized trials included nine that involved calcific tendonitis and two that involved both calcific and noncalcific tendonitis. Common problem areas were sample size, randomization, blinding, treatment provider bias, and outcome measures. There is moderate evidence that high-energy ESWT is effective in treating chronic calcific rotator cuff tendonitis when the shock waves are focused at the calcified deposit. There is moderate evidence that low-energy ESWT is not effective for treating chronic noncalcific rotator cuff tendonitis, although this conclusion is based on only one high-quality study, which was underpowered. High-quality randomized, controlled trials are needed with larger sample sizes, better randomization and blinding, and better outcome measures.

  9. Integrative approach for inference of gene regulatory networks using lasso-based random featuring and application to psychiatric disorders.

    PubMed

    Kim, Dongchul; Kang, Mingon; Biswas, Ashis; Liu, Chunyu; Gao, Jean

    2016-08-10

    Inferring gene regulatory networks is one of the most interesting research areas in systems biology. Many inference methods have been developed using a variety of computational models and approaches. However, there are two issues to solve. First, depending on the structural or computational model of the inference method, the results tend to be inconsistent due to innately different advantages and limitations of the methods. Therefore, combining dissimilar approaches is needed as an alternative way to overcome the limitations of standalone methods through complementary integration. Second, sparse linear regression penalized by a regularization parameter (lasso) and bootstrapping-based sparse linear regression methods have been suggested as state-of-the-art methods for network inference, but they are not effective for small-sample data, and a true regulator can be missed if the target gene is strongly affected by an indirect regulator with high correlation or by another true regulator. We present two novel network inference methods based on the integration of three different criteria: (i) a z-score to measure the variation of gene expression from knockout data, (ii) mutual information for the dependency between two genes, and (iii) linear regression-based feature selection. Based on these criteria, we propose a lasso-based random feature selection algorithm (LARF) to achieve better performance, overcoming the limitations of bootstrapping mentioned above. In this work, there are three main contributions. First, our z-score-based method to measure gene expression variations from knockout data is more effective than similar criteria in related work. Second, we confirmed that true regulator selection can be effectively improved by LARF. Lastly, we verified that an integrative approach can clearly outperform a single method when two different methods are effectively joined. In the experiments, our methods were validated by outperforming the state-of-the-art methods on DREAM challenge data, and then LARF was applied to the inference of gene regulatory networks associated with psychiatric disorders.
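
    A hedged sketch of the random-feature-subset idea behind LARF: fit a lasso on many random predictor subsets and accumulate evidence per candidate regulator. The data are synthetic, and the z-score and mutual-information criteria of the full method are not included:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_genes = 50
X = rng.normal(size=(40, n_genes))                  # expression of candidate regulators
y = 1.5 * X[:, 3] - 2.0 * X[:, 17] + 0.1 * rng.normal(size=40)   # target gene

scores = np.zeros(n_genes)
for _ in range(200):                                # many random feature subsets
    subset = rng.choice(n_genes, size=15, replace=False)
    coef = Lasso(alpha=0.1).fit(X[:, subset], y).coef_
    scores[subset] += np.abs(coef)                  # accumulate evidence per regulator

print(np.argsort(scores)[::-1][:5])                 # genes 3 and 17 should rank on top
```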

  10. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Evolution of tag-based cooperation on Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.; Hadzibeganovic, Tarik; Stauffer, Dietrich

    2014-12-01

    Here, we study an agent-based model of the evolution of tag-mediated cooperation on Erdős-Rényi random graphs. In our model, agents with heritable phenotypic traits play pairwise Prisoner's Dilemma-like games and follow one of the four possible strategies: Ethnocentric, altruistic, egoistic and cosmopolitan. Ethnocentric and cosmopolitan strategies are conditional, i.e. their selection depends upon the shared phenotypic similarity among interacting agents. The remaining two strategies are always unconditional, meaning that egoists always defect while altruists always cooperate. Our simulations revealed that ethnocentrism can win in both early and later evolutionary stages on directed random graphs when reproduction of artificial agents was asexual; however, under the sexual mode of reproduction on a directed random graph, we found that altruists dominate initially for a rather short period of time, whereas ethnocentrics and egoists suppress other strategists and compete for dominance in the intermediate and later evolutionary stages. Among our results, we also find surprisingly regular oscillations which are not damped in the course of time even after half a million Monte Carlo steps. Unlike most previous studies, our findings highlight conditions under which ethnocentrism is less stable or suppressed by other competing strategies.

  12. A new feedback image encryption scheme based on perturbation with dynamical compound chaotic sequence cipher generator

    NASA Astrophysics Data System (ADS)

    Tong, Xiaojun; Cui, Minggen; Wang, Zhu

    2009-07-01

    A new compound two-dimensional chaotic function is designed by exploiting two one-dimensional chaotic functions that switch randomly, and the design is used as a chaotic sequence generator whose chaotic behaviour is proved using Devaney's definition of chaos. The properties of the compound chaotic functions are also proved rigorously. In order to improve robustness against differential cryptanalysis and to produce an avalanche effect, a new feedback image encryption scheme is proposed that uses the new compound chaos by selecting one of the two one-dimensional chaotic functions randomly; a new pixel permutation and substitution method is designed in detail, in which array rows and columns are controlled randomly based on the compound chaos. The results of entropy analysis, difference analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis with respect to key and plaintext show that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks; in particular, it accelerates encryption and achieves a higher level of security. Through the dynamical compound chaos and perturbation technology, the paper also addresses the low computational precision of one-dimensional chaotic functions.
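
    A toy sketch of the permutation-plus-substitution idea, driven here by a single logistic map rather than the paper's compound two-dimensional chaotic generator with feedback; it is illustrative only and not a secure cipher:

```python
import numpy as np

def logistic_keystream(n, x0=0.37, r=3.99):
    """Iterate the logistic map x <- r*x*(1-x) and return n values in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.37):
    flat = img.flatten()
    ks = logistic_keystream(flat.size, x0)
    perm = np.argsort(ks)                                 # chaos-driven pixel permutation
    return (flat[perm] ^ (ks * 255).astype(np.uint8)).reshape(img.shape)

def decrypt(enc, x0=0.37):
    ks = logistic_keystream(enc.size, x0)
    perm = np.argsort(ks)
    flat = enc.flatten() ^ (ks * 255).astype(np.uint8)    # undo the substitution stage
    out = np.empty_like(flat)
    out[perm] = flat                                      # undo the permutation stage
    return out.reshape(enc.shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(np.array_equal(decrypt(encrypt(img)), img))         # True
```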

  13. Age-related Cataract in a Randomized Trial of Selenium and Vitamin E in Men: The SELECT Eye Endpoints (SEE) Study

    PubMed Central

    Christen, William G.; Glynn, Robert J.; Gaziano, J. Michael; Darke, Amy K.; Crowley, John J.; Goodman, Phyllis J.; Lippman, Scott M.; Lad, Thomas E.; Bearden, James D.; Goodman, Gary E.; Minasian, Lori M.; Thompson, Ian M.; Blanke, Charles D.; Klein, Eric A.

    2014-01-01

    Importance Observational studies suggest a role for dietary nutrients such as vitamin E and selenium in cataract prevention. However, the results of randomized trials of vitamin E supplements and cataract have been disappointing, and are not yet available for selenium. Objective To test whether long-term supplementation with selenium and vitamin E affects the incidence of cataract in a large cohort of men. Design, Setting, and Participants The SELECT Eye Endpoints (SEE) study was an ancillary study of the SWOG-coordinated Selenium and Vitamin E Cancer Prevention Trial (SELECT), a randomized, placebo-controlled, four arm trial of selenium and vitamin E conducted among 35,533 men aged 50 years and older for African Americans and 55 and older for all other men, at 427 participating sites in the US, Canada, and Puerto Rico. A total of 11,267 SELECT participants from 128 SELECT sites participated in the SEE ancillary study. Intervention Individual supplements of selenium (200 µg/d from L-selenomethionine) and vitamin E (400 IU/d of all rac-α-tocopheryl acetate). Main Outcome Measures Incident cataract, defined as a lens opacity, age-related in origin, responsible for a reduction in best-corrected visual acuity to 20/30 or worse based on self-report confirmed by medical record review, and cataract extraction, defined as the surgical removal of an incident cataract. Results During a mean (SD) of 5.6 (1.2) years of treatment and follow-up, 389 cases of cataract were documented. There were 185 cataracts in the selenium group and 204 in the no selenium group (hazard ratio [HR], 0.91; 95 percent confidence interval [CI], 0.75 to 1.11; P=.37). For vitamin E, there were 197 cases in the treated group and 192 in the placebo group (HR, 1.02; CI, 0.84 to 1.25; P=.81). Similar results were observed for cataract extraction. Conclusions and Relevance These randomized trial data from a large cohort of apparently healthy men indicate that long-term daily supplementation with selenium and/or vitamin E is unlikely to have a large beneficial effect on age-related cataract. PMID:25232809

  14. How many records should be used in ASCE/SEI-7 ground motion scaling procedure?

    USGS Publications Warehouse

    Reyes, Juan C.; Kalkan, Erol

    2012-01-01

    U.S. national building codes refer to the ASCE/SEI-7 provisions for selecting and scaling ground motions for use in nonlinear response history analysis of structures. Because the limiting values for the number of records in the ASCE/SEI-7 are based on engineering experience, this study examines the required number of records statistically, such that the scaled records provide accurate, efficient, and consistent estimates of “true” structural responses. Based on elastic–perfectly plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI-7 scaling procedure is applied to 480 sets of ground motions; the number of records in these sets varies from three to ten. As compared to benchmark responses, it is demonstrated that the ASCE/SEI-7 scaling procedure is conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the responses. Selecting records based on their spectral shape and design spectral acceleration increases the accuracy and efficiency of the procedure.

  15. Effect of Expanding Medicaid for Parents on Children’s Health Insurance Coverage

    PubMed Central

    DeVoe, Jennifer E.; Marino, Miguel; Angier, Heather; O’Malley, Jean P.; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J.; Bailey, Steffani R.; Gallia, Charles; Gold, Rachel

    2016-01-01

    IMPORTANCE In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon’s randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. OBJECTIVE To estimate the effect on a child’s health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. DESIGN, SETTING, AND PARTICIPANTS Oregon Experiment randomized natural experiment assessing the results of Oregon’s 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child’s Medicaid or Children’s Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children’s coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14 409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. EXPOSURES For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. MAIN OUTCOMES AND MEASURES Children’s Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent’s selection date. RESULTS In the immediate period after selection, coverage among children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) for children whose parents were not selected to apply. Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent’s selection compared with children whose parents were not selected (adjusted odds ratio [AOR] = 1.18; 95% CI, 1.10–1.27). The effect remained significant during months 7 to 12 (AOR = 1.11; 95% CI, 1.03–1.19); months 13 to 18 showed a positive but not significant effect (AOR = 1.07; 95% CI, 0.99–1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR = 2.37; 95% CI, 2.14–2.64). CONCLUSIONS AND RELEVANCE Children’s odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents’ access to Medicaid coverage and their children’s coverage. PMID:25561041

  16. Design of a mobile brain computer interface-based smart multimedia controller.

    PubMed

    Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh

    2015-03-06

    Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music sorted by listener's preference was also examined in this study. The experimental results show that real-time music biofeedback according to a user's EEG features may positively improve the user's attention state.

  17. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: An instrumental variables re-analysis of randomized clinical trials

    PubMed Central

    Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.

    2014-01-01

    Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
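
    A compact sketch of the instrumental-variables logic described here, with randomization as the instrument; with a single binary instrument, two-stage least squares reduces to the Wald ratio. The data below are synthetic, with an invented confounder standing in for self-selection:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, size=n)                    # randomized assignment (the instrument)
confounder = rng.normal(size=n)                   # unobserved motivation to change
attendance = 5 * z + 2 * confounder + rng.normal(size=n)          # AA attendance
abstinence = 0.4 * attendance + 3 * confounder + rng.normal(size=n)

# Naive OLS slope is biased upward by the confounder.
C = np.cov(attendance, abstinence)
ols = C[0, 1] / C[0, 0]

# Wald / IV estimate: cov(instrument, outcome) / cov(instrument, exposure).
iv = np.cov(z, abstinence)[0, 1] / np.cov(z, attendance)[0, 1]
print(round(ols, 2), round(iv, 2))                # IV estimate is close to the true 0.4
```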

  19. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for experimental data was explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistic nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
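
    A minimal sketch of the repeated random-selection procedure described here: draw 8 of 36 images many times and look at the spread of the resulting estimates. The per-image readings are invented, and real TVT estimation involves CDCOM scoring and psychometric fitting per disc diameter:

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_trials, subset_size = 36, 3000, 8

# Hypothetical per-image "threshold thickness" readings for one disc diameter.
per_image_tvt = rng.normal(loc=1.0, scale=0.15, size=n_images)

estimates = np.array([
    per_image_tvt[rng.choice(n_images, subset_size, replace=False)].mean()
    for _ in range(n_trials)
])
print(estimates.mean(), estimates.std())   # spread across repeated 8-image trials
```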

  20. Colorectal Adenomas in Participants of the SELECT Randomized Trial of Selenium and Vitamin E for Prostate Cancer Prevention.

    PubMed

    Lance, Peter; Alberts, David S; Thompson, Patricia A; Fales, Liane; Wang, Fang; San Jose, Jerilyn; Jacobs, Elizabeth T; Goodman, Phyllis J; Darke, Amy K; Yee, Monica; Minasian, Lori; Thompson, Ian M; Roe, Denise J

    2017-01-01

    Selenium and vitamin E micronutrients have been advocated for the prevention of colorectal cancer. Colorectal adenoma occurrence was used as a surrogate for colorectal cancer in an ancillary study to the Selenium and Vitamin E Cancer Prevention Trial (SELECT) for prostate cancer prevention. The primary objective was to measure the effect of selenium (as selenomethionine) on colorectal adenomas occurrence, with the effect of vitamin E (as α-tocopherol) supplementation on colorectal adenoma occurrence considered as a secondary objective. Participants who underwent lower endoscopy while in SELECT were identified from a subgroup of the 35,533 men randomized in the trial. Adenoma occurrence was ascertained from the endoscopy and pathology reports for these procedures. Relative Risk (RR) estimates and 95% confidence intervals (CI) of adenoma occurrence were generated comparing those randomized to selenium versus placebo and to vitamin E versus placebo based on the full factorial design. Evaluable endoscopy information was obtained for 6,546 participants, of whom 2,286 had 1+ adenomas. Apart from 21 flexible sigmoidoscopies, all the procedures yielding adenomas were colonoscopies. Adenomas occurred in 34.2% and 35.7%, respectively, of participants whose intervention included or did not include selenium. Compared with placebo, the RR for adenoma occurrence in participants randomized to selenium was 0.96 (95% CI, 0.90-1.02; P = 0.194). Vitamin E did not affect adenoma occurrence compared with placebo (RR = 1.03; 95% CI, 0.96-1.10; P = 0.38). Neither selenium nor vitamin E supplementation can be recommended for colorectal adenoma prevention. Cancer Prev Res; 10(1); 45-54. ©2016 AACR. ©2016 American Association for Cancer Research.

  1. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.

  2. Random sampling of elementary flux modes in large-scale metabolic networks.

    PubMed

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
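
    The unbiased filtering step described above, in which every candidate combination of modes generated at an iteration has the same probability of being kept, can be sketched as follows; the function name and the toy candidate list are hypothetical and not taken from the emsampler code.

    ```python
    import random

    def filter_candidates(candidates, sample_size, rng=random):
        """Keep a uniformly random subset of the candidate modes generated at
        one iteration, giving every candidate the same selection probability
        (the unbiased filtering step described in the abstract)."""
        candidates = list(candidates)
        if len(candidates) <= sample_size:
            return candidates
        return rng.sample(candidates, sample_size)

    # Toy usage: 10,000 candidate combinations reduced to 500 at this iteration.
    candidates = [f"mode_{i}" for i in range(10_000)]
    kept = filter_candidates(candidates, 500)
    print(len(kept), kept[:3])
    ```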

  3. The prognostic impact of cancer stem-like cell biomarker aldehyde dehydrogenase-1 (ALDH1) in ovarian cancer: A meta-analysis.

    PubMed

    Ruscito, Ilary; Darb-Esfahani, Silvia; Kulbe, Hagen; Bellati, Filippo; Zizzari, Ilaria Grazia; Rahimi Koshkaki, Hassan; Napoletano, Chiara; Caserta, Donatella; Rughetti, Aurelia; Kessler, Mirjana; Sehouli, Jalid; Nuti, Marianna; Braicu, Elena Ioana

    2018-05-10

    To investigate the association of cancer stem cell biomarker aldehyde dehydrogenase-1 (ALDH1) with ovarian cancer patients' prognosis and clinico-pathological characteristics. The electronic searches were performed in January 2018 through the databases PubMed, MEDLINE and Scopus by searching the terms: "ovarian cancer" AND "immunohistochemistry" AND ["aldehyde dehydrogenase-1" OR "ALDH1" OR "cancer stem cell"]. Studies evaluating the impact of ALDH1 expression on ovarian cancer survival and clinico-pathological variables were selected. 233 studies were retrieved. Thirteen studies including 1885 patients met all selection criteria. ALDH1-high expression was found to be significantly associated with poor 5-year OS (OR = 3.46; 95% CI: 1.61-7.42; P = 0.001, random effects model) and 5-year PFS (OR = 2.14; 95% CI: 1.11-4.13; P = 0.02, random effects model) in ovarian cancer patients. No correlation between ALDH1 expression and tumor histology (OR = 0.60; 95% CI: 0.36-1.02; P = 0.06, random effects model), FIGO stage (OR = 0.65; 95% CI: 0.33-1.30; P = 0.22, random effects model), tumor grading (OR = 0.76; 95% CI: 0.40-1.45; P = 0.41, random effects model), lymph nodal status (OR = 2.05; 95% CI: 0.81-5.18; P = 0.13, random effects model) or patients' age at diagnosis (OR = 0.83; 95% CI: 0.54-1.29; P = 0.41, fixed effects model) was identified. Based on the available evidence, this meta-analysis showed that high levels of ALDH1 expression correlate with worse OS and PFS in ovarian cancer patients. Copyright © 2018. Published by Elsevier Inc.
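
    For readers unfamiliar with the random-effects pooling used above, the sketch below shows a generic DerSimonian-Laird pooling of odds ratios on made-up numbers (not the studies in this meta-analysis); study-level standard errors are recovered from 95% confidence intervals on the log scale.

    ```python
    import numpy as np

    def dersimonian_laird(or_estimates, ci_lower, ci_upper):
        """Pool odds ratios under a random-effects model (DerSimonian-Laird)."""
        y = np.log(or_estimates)                          # log odds ratios
        se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)
        v = se ** 2
        w = 1.0 / v                                       # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)                # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
        w_re = 1.0 / (v + tau2)                           # random-effects weights
        pooled = np.sum(w_re * y) / np.sum(w_re)
        se_pooled = np.sqrt(1.0 / np.sum(w_re))
        return (np.exp(pooled),
                np.exp(pooled - 1.96 * se_pooled),
                np.exp(pooled + 1.96 * se_pooled))

    # Illustrative odds ratios and 95% CIs from three hypothetical studies.
    print(dersimonian_laird(np.array([2.5, 3.8, 4.1]),
                            np.array([1.2, 1.9, 1.6]),
                            np.array([5.2, 7.6, 10.5])))
    ```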

  4. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
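
    The paper's reduction to two elementary random variables via random functions is not reproduced here; as background, the following sketch shows the classic spectral-representation simulation of a scalar stationary process from a one-sided power spectral density (the PSD and all parameters are illustrative assumptions).

    ```python
    import numpy as np

    def spectral_representation(psd, omega_max, n_freq, t, rng):
        """Simulate one sample of a zero-mean stationary process from a one-sided
        PSD via the classic spectral representation: a sum of cosines with
        independent uniform random phases."""
        d_omega = omega_max / n_freq
        omegas = (np.arange(n_freq) + 0.5) * d_omega
        amps = np.sqrt(2.0 * psd(omegas) * d_omega)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=n_freq)
        return np.sum(amps[:, None] * np.cos(omegas[:, None] * t[None, :]
                                             + phases[:, None]), axis=0)

    rng = np.random.default_rng(1)
    psd = lambda w: 1.0 / (1.0 + w ** 2)     # illustrative target PSD
    t = np.linspace(0.0, 60.0, 2000)
    sample = spectral_representation(psd, omega_max=20.0, n_freq=512, t=t, rng=rng)
    print(sample.var())   # roughly approximates the integral of S(omega)
    ```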

  5. Optical Addressing Electronic Tongue Based on Low Selective Photovoltaic Transducer with Nanoporous Silicon Layer

    NASA Astrophysics Data System (ADS)

    Litvinenko, S. V.; Bielobrov, D. O.; Lysenko, V.; Skryshevsky, V. A.

    2016-08-01

    An electronic tongue based on an array of low-selectivity photovoltaic (PV) sensors and principal component analysis is proposed for the detection of various alcohol solutions. The sensor array is created by forming a p-n junction on a silicon wafer with a porous silicon layer on the opposite side. A dynamic set of sensors arises from the inhomogeneous distribution of the surface recombination rate across this porous silicon side. A photocurrent sensitive to molecular adsorption is induced by scanning this side with a laser beam. Water, ethanol, iso-propanol, and their mixtures were selected for testing. It is shown that exploiting the random dispersion of surface recombination rates across different spots on the rear side of the p-n junction, together with principal component analysis of the PV signals, allows the identification of these liquids and their mixtures.
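
    A hedged sketch of the principal-component step only: each liquid is represented by a vector of photocurrents measured at the scanned spots, and PCA projects these vectors onto two components in which the liquids separate. The responses below are synthetic, not measured PV signals.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)

    # Synthetic photocurrent responses: rows are repeated measurements,
    # columns are laser-scanned spots with different recombination rates.
    water    = rng.normal(loc=1.00, scale=0.05, size=(10, 40))
    ethanol  = rng.normal(loc=0.80, scale=0.05, size=(10, 40)) * np.linspace(0.9, 1.1, 40)
    propanol = rng.normal(loc=0.65, scale=0.05, size=(10, 40)) * np.linspace(1.1, 0.9, 40)

    X = np.vstack([water, ethanol, propanol])
    scores = PCA(n_components=2).fit_transform(X)   # project onto two principal components
    print(scores[:3])                               # each liquid forms its own cluster
    ```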

  6. Predicting the continuum between corridors and barriers to animal movements using Step Selection Functions and Randomized Shortest Paths.

    PubMed

    Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi

    2016-01-01

    The loss, fragmentation and degradation of habitat everywhere on Earth prompts increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least-cost path, LCP) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movements (when θ approaches infinity, RSP is equivalent to LCP) or random walk (when θ → 0, RSP → current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
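
    The full RSP machinery is not reproduced here; as a toy illustration of the exploration-exploitation trade-off controlled by θ, the sketch below applies Boltzmann weighting to a handful of hypothetical candidate routes: θ → 0 spreads probability uniformly (random-walk-like behaviour), while large θ concentrates it on the least-cost route.

    ```python
    import math

    # Toy landscape: three candidate routes between two functional areas,
    # each with a cumulative movement cost (arbitrary units).
    paths = {"ridge": 12.0, "valley": 9.0, "detour": 15.0}

    def path_probabilities(costs, theta):
        """Boltzmann weighting over candidate paths: P(path) is proportional to
        exp(-theta * cost). theta -> 0 gives a near-uniform (random-walk-like)
        choice; large theta concentrates probability on the least-cost path."""
        weights = {k: math.exp(-theta * c) for k, c in costs.items()}
        z = sum(weights.values())
        return {k: w / z for k, w in weights.items()}

    for theta in (0.0, 0.5, 5.0):
        print(theta, path_probabilities(paths, theta))
    ```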

  7. Evolving artificial metalloenzymes via random mutagenesis

    NASA Astrophysics Data System (ADS)

    Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.

    2018-03-01

    Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.

  8. Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model

    PubMed Central

    Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong

    2014-01-01

    Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. The node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We first introduce how to model the data correlation for different services using a Markov Random Field (MRF) model. Secondly, we formulate the service-oriented node scheduling issue as three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, which concerns selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Thirdly, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005

  9. Validity and practicability of smartphone-based photographic food records for estimating energy and nutrient intake.

    PubMed

    Kong, Kaimeng; Zhang, Lulu; Huang, Lisu; Tao, Yexuan

    2017-05-01

    Image-assisted dietary assessment methods are frequently used to record individual eating habits. This study tested the validity of a smartphone-based photographic food recording approach by comparing the results obtained with those of a weighed food record. We also assessed the practicality of the method by using it to measure the energy and nutrient intake of college students. The experiment was implemented in two phases, each lasting 2 weeks. In the first phase, a labelled menu and a photograph database were constructed. The energy and nutrient content of 31 randomly selected dishes in three different portion sizes were then estimated by the photograph-based method and compared with a weighed food record. In the second phase, we combined the smartphone-based photographic method with the WeChat smartphone application and applied this to 120 randomly selected participants to record their energy and nutrient intake. The Pearson correlation coefficients for energy, protein, fat, and carbohydrate content between the weighed and the photographic food record were 0.997, 0.936, 0.996, and 0.999, respectively. Bland-Altman plots showed good agreement between the two methods. The estimated protein, fat, and carbohydrate intake by participants was in accordance with values in the Chinese Residents' Nutrition and Chronic Disease report (2015). Participants expressed satisfaction with the new approach and the compliance rate was 97.5%. The smartphone-based photographic dietary assessment method combined with the WeChat instant messaging application was effective and practical for use by young people.
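
    As a brief illustration of the agreement statistics mentioned above, the sketch below computes a Pearson correlation and a Bland-Altman bias with 95% limits of agreement on made-up energy estimates; it is not the study's analysis code.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical energy estimates (kcal) for the same dishes from the
    # weighed record and from the photograph-based record.
    weighed = rng.uniform(150, 900, size=31)
    photo = weighed + rng.normal(0, 25, size=31)

    r = np.corrcoef(weighed, photo)[0, 1]               # Pearson correlation
    diff = photo - weighed
    bias = diff.mean()                                   # mean difference
    loa = (bias - 1.96 * diff.std(ddof=1),               # 95% limits of agreement
           bias + 1.96 * diff.std(ddof=1))
    print(f"r = {r:.3f}, bias = {bias:.1f} kcal, LoA = {loa[0]:.1f} to {loa[1]:.1f} kcal")
    ```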

  10. Fingerprint recognition of alien invasive weeds based on the texture character and machine learning

    NASA Astrophysics Data System (ADS)

    Yu, Jia-Jia; Li, Xiao-Li; He, Yong; Xu, Zheng-Hao

    2008-11-01

    A multi-spectral imaging technique based on texture analysis and machine learning was proposed to discriminate alien invasive weeds with similar outlines but different categories. The objectives of this study were to investigate the feasibility of using multi-spectral imaging, especially the near-infrared (NIR) channel (800 nm +/- 10 nm), to find the weeds' fingerprints, and to validate the performance with specific eigenvalues derived from the co-occurrence matrix. Veronica polita Fries, Veronica persica Poir., longtube ground ivy, and Lamium amplexicaule Linn., which have different effects in the field and are alien invasive species in China, were selected for this study. 307 weed leaf images were randomly selected for the calibration set, while the remaining 207 samples formed the prediction set. All images were pretreated with a Wallis filter to correct for noise caused by uneven lighting. The gray level co-occurrence matrix was applied to extract texture characteristics, which capture the density, randomness, correlation, contrast and homogeneity of texture through different algorithms. Three channels (green at 550 nm +/- 10 nm, red at 650 nm +/- 10 nm and NIR at 800 nm +/- 10 nm) were calculated separately to obtain the eigenvalues. Least-squares support vector machines (LS-SVM) were applied to discriminate the weed categories from the co-occurrence matrix eigenvalues. Finally, a recognition ratio of 83.35% was obtained with the NIR channel, better than the results for the green channel (76.67%) and the red channel (69.46%). A prediction accuracy of 81.35% indicated that the selected eigenvalues reflected the main characteristics of the weeds' fingerprints based on multi-spectral imaging (especially the NIR channel) and the LS-SVM model.
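
    A minimal sketch of a comparable texture pipeline, assuming scikit-image and scikit-learn are available (graycomatrix/graycoprops follow recent scikit-image releases, which replaced the older "grey" spellings); an ordinary RBF-kernel SVM stands in for the LS-SVM used in the study, and the images are synthetic stand-ins rather than leaf images.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def glcm_features(img, levels=64):
        """Contrast, correlation, energy and homogeneity from a gray-level
        co-occurrence matrix (distance 1, horizontal direction)."""
        img = (img / 256 * (levels - 1)).astype(np.uint8)   # quantize 8-bit input
        glcm = graycomatrix(img, distances=[1], angles=[0],
                            levels=levels, symmetric=True, normed=True)
        return [graycoprops(glcm, p)[0, 0]
                for p in ("contrast", "correlation", "energy", "homogeneity")]

    rng = np.random.default_rng(4)
    # Two synthetic "species": full-range rough texture vs. narrow-range smooth texture.
    images_a = [rng.integers(0, 255, size=(64, 64)) for _ in range(20)]
    images_b = [rng.integers(100, 156, size=(64, 64)) for _ in range(20)]

    X = np.array([glcm_features(im.astype(float)) for im in images_a + images_b])
    y = np.array([0] * 20 + [1] * 20)
    train, test = np.r_[0:15, 20:35], np.r_[15:20, 35:40]

    clf = SVC(kernel="rbf").fit(X[train], y[train])   # stand-in for LS-SVM
    print("held-out accuracy on toy textures:", clf.score(X[test], y[test]))
    ```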

  11. Gossip-Based Dissemination

    NASA Astrophysics Data System (ADS)

    Friedman, Roy; Kermarrec, Anne-Marie; Miranda, Hugo; Rodrigues, Luís

    Gossip-based networking has emerged as a viable approach to disseminate information reliably and efficiently in large-scale systems. Initially introduced for database replication [222], the applicability of the approach extends much further now. For example, it has been applied for data aggregation [415], peer sampling [416] and publish/subscribe systems [845]. Gossip-based protocols rely on a periodic peer-wise exchange of information in wired systems. By changing the way each peer is selected for the gossip communication, and which data are exchanged and processed [451], gossip systems can be used to perform different distributed tasks, such as, among others: overlay maintenance, distributed computation, and information dissemination (a collection of papers on gossip can be found in [451]). In a wired setting, the peer sampling service, allowing for a random or specific peer selection, is often provided as an independent service, able to operate independently from other gossip-based services [416].
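
    A minimal sketch of push-style gossip dissemination, with uniform random peer selection standing in for a peer sampling service; the node count and fanout are arbitrary choices for illustration.

    ```python
    import random

    def gossip_rounds(nodes, source, fanout=3, rng=random):
        """Simulate push-style gossip: in each round, every node that already
        holds the rumour forwards it to `fanout` peers chosen uniformly at
        random (a stand-in for the peer sampling service)."""
        informed = {source}
        rounds = 0
        while len(informed) < len(nodes):
            new = set()
            for node in informed:
                for peer in rng.sample([n for n in nodes if n != node], fanout):
                    new.add(peer)
            informed |= new
            rounds += 1
        return rounds

    nodes = list(range(1000))
    print("rounds to reach all 1000 nodes:", gossip_rounds(nodes, source=0))
    ```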

  12. Design Of Computer Based Test Using The Unified Modeling Language

    NASA Astrophysics Data System (ADS)

    Tedyyana, Agus; Danuri; Lidyawati

    2017-12-01

    Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent test (UM-Polbeng) was conducted using a paper-based test (PBT). The paper-based test model has several weaknesses: it wastes paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, particular attention was paid to protecting the test questions through encryption and decryption before they are displayed; the RSA cryptography algorithm was used for this purpose. Questions drawn from the question bank were then randomized using the Fisher-Yates shuffle. The network architecture used for the computer-based test application was a client-server model over a local area network (LAN). The result of the design is a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
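
    A minimal sketch of the Fisher-Yates shuffle used to randomize question order per candidate; this is a generic implementation, not the application's actual code, and the question bank is hypothetical.

    ```python
    import random

    def fisher_yates_shuffle(items, rng=random):
        """In-place Fisher-Yates shuffle: walk the list from the end, swapping
        each position with a uniformly chosen earlier (or same) position."""
        for i in range(len(items) - 1, 0, -1):
            j = rng.randint(0, i)          # inclusive on both ends
            items[i], items[j] = items[j], items[i]
        return items

    question_ids = list(range(1, 51))                 # hypothetical 50-question bank
    print(fisher_yates_shuffle(question_ids)[:10])    # per-candidate question order
    ```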

  13. The quality of reporting of randomized controlled trials of traditional Chinese medicine: a survey of 13 randomly selected journals from mainland China.

    PubMed

    Wang, Gang; Mao, Bing; Xiong, Ze-Yu; Fan, Tao; Chen, Xiao-Dong; Wang, Lei; Liu, Guan-Jian; Liu, Jia; Guo, Jia; Chang, Jing; Wu, Tai-Xiang; Li, Ting-Qian

    2007-07-01

    The number of randomized controlled trials (RCTs) of traditional Chinese medicine (TCM) is increasing. However, there have been few systematic assessments of the quality of reporting of these trials. This study was undertaken to evaluate the quality of reporting of RCTs in TCM journals published in mainland China from 1999 to 2004. Thirteen TCM journals were randomly selected by stratified sampling of the approximately 100 TCM journals published in mainland China. All issues of the selected journals published from 1999 to 2004 were hand-searched according to guidelines from the Cochrane Centre. All reviewers underwent training in the evaluation of RCTs at the Chinese Centre of Evidence-based Medicine. A comprehensive quality assessment of each RCT was completed using a modified version of the Consolidated Standards of Reporting Trials (CONSORT) checklist (total of 30 items) and the Jadad scale. Disagreements were resolved by consensus. Seven thousand four hundred twenty-two RCTs were identified. The proportion of published RCTs relative to all types of published clinical trials increased significantly over the period studied, from 18.6% in 1999 to 35.9% in 2004 (P < 0.001). The mean (SD) Jadad score was 1.03 (0.61) overall. One RCT had a Jadad score of 5 points; 14 had a score of 4 points; and 102 had a score of 3 points. The mean (SD) Jadad score was 0.85 (0.53) in 1999 (746 RCTs) and 1.20 (0.62) in 2004 (1634 RCTs). Across all trials, 39.4% of the items on the modified CONSORT checklist were reported, which was equivalent to 11.82 (5.78) of the 30 items. Some important methodologic components of RCTs were incompletely reported, such as sample-size calculation (reported in 1.1% of RCTs), randomization sequence (7.9%), allocation concealment (0.3 %), implementation of the random-allocation sequence (0%), and analysis of intention to treat (0%). The findings of this study indicate that the quality of reporting of RCTs of TCM has improved, but remains poor.

  14. DRIFTSEL: an R package for detecting signals of natural selection in quantitative traits.

    PubMed

    Karhunen, M; Merilä, J; Leinonen, T; Cano, J M; Ovaskainen, O

    2013-07-01

    Approaches and tools to differentiate between natural selection and genetic drift as causes of population differentiation are in frequent demand in evolutionary biology. Based on the approach of Ovaskainen et al. (2011), we have developed an R package (DRIFTSEL) that can be used to differentiate between stabilizing selection, diversifying selection and random genetic drift as causes of population differentiation in quantitative traits when neutral marker and quantitative genetic data are available. Apart from illustrating the use of this method and the interpretation of results using simulated data, we apply the package to data from three-spined sticklebacks (Gasterosteus aculeatus) to highlight its virtues. DRIFTSEL can also be used to perform usual quantitative genetic analyses in common-garden study designs. © 2013 John Wiley & Sons Ltd.

  15. Reference Conditions for Streams in the Grand Prairie Natural Division of Illinois

    NASA Astrophysics Data System (ADS)

    Sangunett, B.; Dewalt, R.

    2005-05-01

    As part of the Critical Trends Assessment Program (CTAP) of the Illinois Department of Natural Resources (IDNR), 12 potential reference quality stream sites in the Grand Prairie Natural Division were evaluated in May 2004. This agriculturally dominated region, located in east central Illinois, is the most highly modified in the state. The quality of these sites was assessed using a modified Hilsenhoff Biotic Index, species richness of the Ephemeroptera, Plecoptera, and Trichoptera (EPT) insect orders, and a 12-parameter Habitat Quality Index (HQI). Illinois EPA high quality fish stations, Illinois Natural History Survey insect collection data, and best professional knowledge were used to choose which streams to evaluate. For analysis, reference quality streams were compared to 37 randomly selected meandering streams and 26 randomly selected channelized streams which were assessed by CTAP between 1997 and 2001. The results showed that the reference streams exceeded the randomly selected streams in the region in both taxa richness and habitat quality. Both random meandering sites and reference quality sites increased in taxa richness and HQI as stream width increased. Randomly selected channelized streams had about the same taxa richness and HQI regardless of width.

  16. Key Aspects of Nucleic Acid Library Design for in Vitro Selection

    PubMed Central

    Vorobyeva, Maria A.; Davydova, Anna S.; Vorobjev, Pavel E.; Pyshnyi, Dmitrii V.; Venyaminova, Alya G.

    2018-01-01

    Nucleic acid aptamers capable of selectively recognizing their target molecules have nowadays been established as powerful and tunable tools for biospecific applications, be it therapeutics, drug delivery systems or biosensors. It is now generally acknowledged that in vitro selection enables one to generate aptamers to almost any target of interest. However, the success of selection and the affinity of the resulting aptamers depend to a large extent on the nature and design of an initial random nucleic acid library. In this review, we summarize and discuss the most important features of the design of nucleic acid libraries for in vitro selection such as the nature of the library (DNA, RNA or modified nucleotides), the length of a randomized region and the presence of fixed sequences. We also compare and contrast different randomization strategies and consider computer methods of library design and some other aspects. PMID:29401748

  17. Methods and analysis of realizing randomized grouping.

    PubMed

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
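
    A minimal sketch of completely randomized grouping, written in Python rather than the SAS used in the article: shuffle the sample identifiers once and deal them into equally sized groups.

    ```python
    import random

    def complete_randomization(sample_ids, n_groups, rng=random):
        """Completely randomized grouping: shuffle the sample list once, then
        deal the samples into n_groups of (nearly) equal size."""
        ids = list(sample_ids)
        rng.shuffle(ids)
        return {g: ids[g::n_groups] for g in range(n_groups)}

    groups = complete_randomization(range(1, 31), n_groups=3)
    for g, members in groups.items():
        print(f"group {g}: {sorted(members)}")
    ```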

  18. Reach and effectiveness of DVD and in-person diabetes self-management education.

    PubMed

    Glasgow, Russell E; Edwards, Linda L; Whitesides, Holly; Carroll, Nikki; Sanders, Tristan J; McCray, Barbara L

    2009-12-01

    To evaluate the reach and effectiveness of a diabetes self-management DVD compared to classroom-based instruction. A hybrid preference/randomized design was used with participants assigned to Choice v. Randomized and DVD v. Class conditions. One hundred and eighty-nine adults with type 2 diabetes participated. Key outcomes included self-management behaviours, process measures including DVD implementation and hypothesized mediators and clinical risk factors. In the Choice condition, four times as many participants chose the mailed DVD as selected Class-based instruction (38.8 v. 9.4%, p<0.001). At the 6-month follow-up, the DVD produced results generally not significantly different than classroom-based instruction, but a combined Class plus DVD condition did not improve outcomes beyond those produced by the classes alone. The DVD appears to have merit as an efficient and appealing alternative to brief classroom-based diabetes education, and the hybrid design is recommended to provide estimates of programme reach.

  19. Robust local search for spacecraft operations using adaptive noise

    NASA Technical Reports Server (NTRS)

    Fukunaga, Alex S.; Rabideau, Gregg; Chien, Steve

    2004-01-01

    Randomization is a standard technique for improving the performance of local search algorithms for constraint satisfaction. However, it is well known that local search algorithms are sensitive to the noise values selected. We investigate the use of an adaptive noise mechanism in an iterative repair-based planner/scheduler for spacecraft operations. Preliminary results indicate that adaptive noise makes the use of randomized repair moves safe and robust; that is, using adaptive noise makes it possible to consistently achieve performance comparable with the best tuned noise setting without the need to manually tune the noise parameter.
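
    A generic adaptive-noise local search loop is sketched below to illustrate the idea: raise the noise (the probability of taking a random repair move) when the search stagnates, and lower it after improvements. The update constants, function names and toy problem are illustrative assumptions, not the planner's actual mechanism.

    ```python
    import random

    def adaptive_noise_search(initial, neighbours, cost, max_iters=10_000,
                              stagnation_limit=100, rng=random):
        """Iterative repair with an adaptive noise parameter: with probability
        `noise` take a random repair move, otherwise take the best move."""
        current, best = initial, cost(initial)
        noise, since_improvement = 0.1, 0
        for _ in range(max_iters):
            moves = neighbours(current)
            if rng.random() < noise:
                current = rng.choice(moves)           # randomized repair move
            else:
                current = min(moves, key=cost)        # greedy repair move
            if cost(current) < best:
                best, since_improvement = cost(current), 0
                noise = max(0.01, noise * 0.8)        # calm down after progress
            else:
                since_improvement += 1
                if since_improvement > stagnation_limit:
                    noise = min(0.5, noise * 1.2)     # escape apparent local minimum
                    since_improvement = 0
        return current, best

    # Toy usage: minimize |x - 42| over the integers with +/-1 moves.
    print(adaptive_noise_search(0, lambda x: [x - 1, x + 1], lambda x: abs(x - 42)))
    ```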

  20. Phage display selection of peptides that target calcium-binding proteins.

    PubMed

    Vetter, Stefan W

    2013-01-01

    Phage display allows rapid identification of peptide sequences with binding affinity towards target proteins, for example, calcium-binding proteins (CBPs). Phage technology allows screening of 10^9 or more independent peptide sequences and can identify CBP-binding peptides within 2 weeks. Adjusting the screening conditions allows selection of CBP-binding peptides that are either calcium-dependent or calcium-independent. The obtained peptide sequences can be used to identify CBP target proteins based on sequence homology, or to quickly obtain peptide-based CBP inhibitors to modulate CBP-target interactions. The protocol described here uses a commercially available phage display library, in which random 12-mer peptides are displayed on filamentous M13 phages. The library was screened against the calcium-binding protein S100B.

  1. Occupational Commonalities: A Base for Course Construction. Paper No. 2219, Journal Series.

    ERIC Educational Resources Information Center

    Dillon, Roy D.; Horner, James T.

    To determine competencies and activities used by workers in a cross section of the statewide labor force, data were obtained from a random sample of 1,500 employed persons drawn from 14 purposively selected index counties in Nebraska. An interview-questionnaire procedure yielded an 87.7 percent response to a checklist of 144 activities, duties,…

  2. The Impact of Pushed Output on Accuracy and Fluency of Iranian EFL Learners' Speaking

    ERIC Educational Resources Information Center

    Sadeghi Beniss, Aram Reza; Edalati Bazzaz, Vahid

    2014-01-01

    The current study attempted to establish baseline quantitative data on the impacts of pushed output on two components of speaking (i.e., accuracy and fluency). To achieve this purpose, 30 female EFL learners were selected from a whole population pool of 50 based on the standard test of IELTS interview and were randomly assigned into an…

  3. Effects of Information and Communication Technology (ICT) on Students' Academic Achievement and Retention in Chemistry at Secondary Level

    ERIC Educational Resources Information Center

    Hussain, Ishtiaq; Suleman, Qaiser; ud Din, M. Naseer; Shafique, Farhan

    2017-01-01

    The current paper investigated the effects of information and communication technology on the students' academic achievement and retention in chemistry. Fifty students of 9th grade were selected randomly from Kohsar Public School and College Latamber Karak. The students were grouped into equivalent groups based on pretest score. In order to…

  4. The National Center on Indigenous Hawaiian Behavioral Health Study of Prevalence of Psychiatric Disorders in Native Hawaiian Adolescents

    ERIC Educational Resources Information Center

    Andrade, Naleen N.; Hishinuma, Earl S.; McDermott, John F., Jr.; Johnson, Ronald C.; Goebert, Deborah A.; Makini, George K., Jr.; Nahulu, Linda B.; Yuen, Noelle Y. C.; McArdle, John J.; Bell, Cathy K.; Carlton, Barry S.; Miyamoto, Robin H.; Nishimura, Stephanie T.; Else, Iwalani R. N.; Guerrero, Anthony P. S.; Darmal, Arsalan; Yates, Alayne; Waldron, Jane A.

    2006-01-01

    Objectives: The prevalence rates of disorders among a community-based sample of Hawaiian youths were determined and compared to previously published epidemiological studies. Method: Using a two-phase design, 7,317 adolescents were surveyed (60% participation rate), from which 619 were selected in a modified random sample during the 1992-1993 to…

  5. Longitudinal Examination of Aggression and Study Skills from Middle to High School: Implications for Dropout Prevention

    ERIC Educational Resources Information Center

    Orpinas, Pamela; Raczynski, Katherine; Hsieh, Hsien-Lin; Nahapetyan, Lusine; Horne, Arthur M.

    2018-01-01

    Background: High school completion provides health and economic benefits. The purpose of this study is to describe dropout rates based on longitudinal trajectories of aggression and study skills using teacher ratings. Methods: The sample consisted of 620 randomly selected sixth graders. Every year from Grade 6 to 12, a teacher completed a…

  6. Lecturers' Awareness and Utilization of Instructional Media in the State-Owned Colleges of Education, South-West Nigeria

    ERIC Educational Resources Information Center

    Fakomogbon, Micheal Ayodele; Olanrewaju, Olatayo Solomon; Soetan, Aderonke Kofo

    2015-01-01

    This paper investigated the awareness and utilization of instructional media (IM) based on gender of the lecturers of tertiary institutions in Nigeria. It was a descriptive type of survey research. All lecturers of Colleges of Education in Southwest geo-political zone of Nigeria formed the population. Some 621 lecturers were randomly selected.…

  7. AUPress: A Comparison of an Open Access University Press with Traditional Presses

    ERIC Educational Resources Information Center

    McGreal, Rory; Chen, Nian-Shing

    2011-01-01

    This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way…

  8. Peer Network Dynamics and the Amplification of Antisocial to Violent Behavior among Young Adolescents in Public Middle Schools

    ERIC Educational Resources Information Center

    Kornienko, Olga; Dishion, Thomas J.; Ha, Thao

    2018-01-01

    This study examined longitudinal changes in peer network selection and influence associated with self-reported antisocial behavior (AB) and violent behavior (VB) over the course of middle school in a sample of ethnically diverse adolescents. Youth and families were randomly assigned to a school-based intervention focused on the prevention of…

  9. The Effect of Depressive Symptoms on the Association between Functional Status and Social Participation

    ERIC Educational Resources Information Center

    Ostir, Glenn V.; Ottenbacher, Kenneth J.; Fried, Linda P.; Guralnik, Jack M.

    2007-01-01

    The aim of the current study was to examine the interactive effects of depressive symptoms and lower extremity functioning on social participation for a group of moderately to severely disabled older women. The study used a cross-sectional community based sample, enrolled in the Women's Health and Aging Study I, randomly selected from the Centers…

  10. Level of Creative Behavior among Teachers of Public Schools within the Green Line from Their Perspective

    ERIC Educational Resources Information Center

    Naser, Rina Abdallah

    2016-01-01

    The current study seeks to identify the level of creative behavior among teachers of public schools within the Green Line, based on gender, academic qualification, years of experience and level of school. The sample consisted of (502) teachers, selected randomly, from public schools within the Green Line in Israel. The tool utilized is a…

  11. Health Promotion Intervention for Hygienic Disposal of Children's Faeces in a Rural Area of Nigeria

    ERIC Educational Resources Information Center

    Jinadu, M. K.; Adegbenro, C. A.; Esmai, A. O.; Ojo, A. A.; Oyeleye, B. A.

    2007-01-01

    Objective: Community-based health promotion intervention for improving unhygienic disposal of children's faeces was conducted in a rural area of Nigeria. Setting: The study was conducted in Ife South Local Government area of Osun State, Nigeria. Design: The study was conducted in 10 randomly selected rural villages: five control and five active.…

  12. Obeying the Rules or Gaming the System? Delegating Random Selection for Examinations to Head Teachers within an Accountability System

    ERIC Educational Resources Information Center

    Elstad, Eyvind; Turmo, Are

    2011-01-01

    As education systems around the world move towards increased accountability based on performance measures, it is important to investigate the unintended effects of accountability systems. This article seeks to explore the extent to which head teachers in a large Norwegian municipality may resort to gaming the incentive system to boost their…

  13. Body Image, Dieting and Disordered Eating and Activity Practices among Teacher Trainees: Implications for School-Based Health Education and Obesity Prevention Programs

    ERIC Educational Resources Information Center

    Yager, Zali; O'Dea, Jennifer

    2009-01-01

    The aim was to investigate and compare body image, body dissatisfaction, dieting, disordered eating, exercise and eating disorders among trainee health education/physical education (H&PE) and non-H&PE teachers. Participants were 502 trainee teachers randomly selected from class groups at three Australian universities who completed the…

  14. How Is Education Perceived on the Inside?: A Preliminary Study of Adult Males in a Correctional Setting

    ERIC Educational Resources Information Center

    Moeller, Michelle; Day, Scott L.; Rivera, Beverly D.

    2004-01-01

    This study explores a group of inmates' perceptions of their correctional education and environment based on Fetterman's 1994 idea of empowerment evaluation. A group of 16 male inmates were randomly selected from GED and ABE courses in a high minimum correctional facility in Illinois. A self-administered questionnaire included 5 topics:…

  15. Influences of Normalization Method on Biomarker Discovery in Gas Chromatography-Mass Spectrometry-Based Untargeted Metabolomics: What Should Be Considered?

    PubMed

    Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo

    2017-05-16

    Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the downstream data-analysis workflow more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influences of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared in three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were utilized to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers identified by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta in the different normalized data sets were compared. The results indicated that choosing a normalization method is difficult because the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and number of potential biomarkers. Lastly, an integrated strategy for normalization method selection was recommended.
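
    As an illustration of one of the compared methods, the sketch below implements probabilistic quotient normalization on a synthetic feature matrix: each sample is divided by the median of its feature-wise quotients against a reference spectrum (here, the median spectrum across samples). This is a generic sketch, not the study's code.

    ```python
    import numpy as np

    def pqn_normalize(X, reference=None):
        """Probabilistic quotient normalization: divide each sample by the
        median of its feature-wise quotients against a reference spectrum."""
        if reference is None:
            reference = np.median(X, axis=0)
        quotients = X / reference               # feature-wise quotients
        factors = np.median(quotients, axis=1)  # one dilution factor per sample
        return X / factors[:, None], factors

    rng = np.random.default_rng(5)
    true_profiles = rng.lognormal(mean=2.0, sigma=0.5, size=(6, 200))
    dilutions = rng.uniform(0.5, 2.0, size=(6, 1))
    X_raw = true_profiles * dilutions            # simulated unwanted dilution variation
    X_norm, factors = pqn_normalize(X_raw)
    print(np.round(factors / dilutions.ravel(), 2))   # roughly constant if dilution is recovered
    ```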

  16. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  17. Training balance with opto-kinetic stimuli in the home: a randomized controlled feasibility study in people with pure cerebellar disease.

    PubMed

    Bunn, Lisa M; Marsden, Jonathan F; Giunti, Paola; Day, Brian L

    2015-02-01

    To investigate the feasibility of a randomized controlled trial of a home-based balance intervention for people with cerebellar ataxia. A randomized controlled trial design. Intervention and assessment took place in the home environment. A total of 12 people with spinocerebellar ataxia type 6 were randomized into a therapy or control group. Both groups received identical assessments at baseline, four and eight weeks. Therapy group participants undertook balance exercises in front of optokinetic stimuli during weeks 4-8, while control group participants received no intervention. Test-retest reliability was analysed from outcome measures collected twice at baseline and four weeks later. Feasibility issues were evaluated using daily diaries and end trial exit interviews. The home-based training intervention with opto-kinetic stimuli was feasible for people with pure ataxia, with one drop-out. Test-retest reliability is strong (intraclass correlation coefficient >0.7) for selected outcome measures evaluating balance at impairment and activity levels. Some measures reveal trends towards improvement for those in the therapy group. Sample size estimations indicate that Bal-SARA scores could detect a clinically significant change of 0.8 points in this functional balance score if 80 people per group were analysed in future trials. Home-based targeted training of functional balance for people with pure cerebellar ataxia is feasible and the outcome measures employed are reliable. © The Author(s) 2014.

  18. The Effects of Total Physical Response by Storytelling and the Traditional Teaching Styles of a Foreign Language in a Selected High School

    ERIC Educational Resources Information Center

    Kariuki, Patrick N. K.; Bush, Elizabeth Danielle

    2008-01-01

    The purpose of this study was to examine the effects of Total Physical Response by Storytelling and the traditional teaching method on a foreign language in a selected high school. The sample consisted of 30 students who were randomly selected and randomly assigned to experimental and control group. The experimental group was taught using Total…

  19. RSAT: regulatory sequence analysis tools.

    PubMed

    Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques

    2008-07-01

    The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundreds of researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.
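
    The sketch below illustrates the kind of random control mentioned above, generating a random DNA sequence from a first-order Markov background model; the initial and transition probabilities are made up for the example and are not taken from RSAT.

    ```python
    import random

    def markov_sequence(length, init, transitions, rng=random):
        """Generate a random DNA sequence from a first-order Markov background
        model: draw the first base from `init`, then each subsequent base from
        the transition distribution of the previous base."""
        bases = "ACGT"
        seq = [rng.choices(bases, weights=[init[b] for b in bases])[0]]
        for _ in range(length - 1):
            prev = seq[-1]
            seq.append(rng.choices(bases, weights=[transitions[prev][b] for b in bases])[0])
        return "".join(seq)

    init = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}   # illustrative base composition
    transitions = {
        "A": {"A": 0.35, "C": 0.15, "G": 0.25, "T": 0.25},
        "C": {"A": 0.30, "C": 0.25, "G": 0.05, "T": 0.40},   # CpG depletion, illustrative
        "G": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25},
        "T": {"A": 0.20, "C": 0.20, "G": 0.30, "T": 0.30},
    }
    print(markov_sequence(60, init, transitions))
    ```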

  20. Comparison of Address-based Sampling and Random-digit Dialing Methods for Recruiting Young Men as Controls in a Case-Control Study of Testicular Cancer Susceptibility

    PubMed Central

    Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.

    2013-01-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18–55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be comparably effective to landline RDD for recruiting young men, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS. PMID:24008901
