Sample records for large randomly selected

  1. Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?

    USGS Publications Warehouse

    Adkison, Milo D.

    1995-01-01

    Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.

  2. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Science Inventory

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  3. Sampling Large Graphs for Anticipatory Analytics

    DTIC Science & Technology

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas... Sampling Large Graphs for Anticipatory Analytics. Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges
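
    Since the snippet above only gestures at the method, here is a minimal sketch of random area ("snowball") sampling under stated assumptions: random seed vertices are chosen and a fixed-hop breadth-first area is grown around each. It uses networkx; the function and parameter names (random_area_sample, n_seeds, hops) are illustrative, not from the paper.

```python
# Random area ("snowball") sampling: choose random seed vertices, then take
# the union of fixed-radius breadth-first areas grown around each seed.
import random

import networkx as nx

def random_area_sample(G, n_seeds=10, hops=2, rng=None):
    """Subgraph induced by the `hops`-hop neighborhoods of random seeds."""
    rng = rng or random.Random()
    seeds = rng.sample(list(G.nodes), n_seeds)
    sampled = set()
    for s in seeds:
        sampled.update(nx.ego_graph(G, s, radius=hops).nodes)  # BFS "area"
    return G.subgraph(sampled).copy()

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(10_000, 3, seed=42)
    S = random_area_sample(G, n_seeds=10, hops=2, rng=random.Random(0))
    print(f"sampled {S.number_of_nodes()} of {G.number_of_nodes()} vertices")
```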

  4. Classroom management programs for deaf children in state residential and large public schools.

    PubMed

    Wenkus, M; Rittenhouse, B; Dancer, J

    1999-12-01

    Personnel in 4 randomly selected state residential schools for the deaf and 3 randomly selected large public schools with programs for the deaf were surveyed to assess the types of management or disciplinary programs and strategies currently in use with deaf students and the rated effectiveness of such programs. Several behavioral management programs were identified by respondents, with Assertive Discipline most often listed. Ratings of program effectiveness were generally above average on a number of qualitative criteria.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.

    Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
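
    As a rough illustration of the seed-based estimation this record describes, the sketch below estimates betweenness from a seed subset S using networkx's betweenness_centrality_subset. The record does not spell out the paper's alternate seed-selection strategies, so the degree-based option here is only a hypothetical stand-in.

```python
# Estimate betweenness centrality from a subset of seed (source) vertices,
# reducing cost from O(|V||E|) to roughly O(|S||E|). The "degree" strategy
# is an illustrative stand-in, not the paper's method.
import random

import networkx as nx

def estimate_betweenness(G, n_seeds=100, strategy="random", seed=0):
    rng = random.Random(seed)
    if strategy == "random":
        sources = rng.sample(list(G.nodes), n_seeds)
    elif strategy == "degree":  # hypothetical alternative: high-degree seeds
        sources = sorted(G.nodes, key=G.degree, reverse=True)[:n_seeds]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    # Accumulate shortest-path dependencies from the seed set only.
    return nx.betweenness_centrality_subset(G, sources, list(G.nodes))

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(2_000, 3, seed=1)
    exact = nx.betweenness_centrality(G)
    approx = estimate_betweenness(G, n_seeds=100, strategy="degree")
    top_exact = set(sorted(exact, key=exact.get, reverse=True)[:10])
    top_approx = set(sorted(approx, key=approx.get, reverse=True)[:10])
    print("top-10 overlap:", len(top_exact & top_approx))
```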

  6. School Happiness and School Success: An Investigation across Multiple Grade Levels.

    ERIC Educational Resources Information Center

    Parish, Joycelyn Gay; Parish, Thomas S.; Batt, Steve

    A total of 572 randomly selected sixth-grade students and 908 randomly selected ninth-grade students from a large metropolitan school district in the Midwest were asked to complete a series of survey questions designed to measure the extent to which they were happy while at school, as well as questions concerning the extent to which they treated…

  7. Sampling large random knots in a confined space

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
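
    A minimal sketch of the uniform random polygon model as described: n vertices drawn i.i.d. uniformly in the unit cube and joined cyclically, with crossings counted in the xy-projection by a naive pairwise test. It illustrates the O(n^2) growth of the average crossing number; it does not attempt the paper's knot-invariant computations.

```python
# Uniform random polygon (URP): n vertices i.i.d. uniform in the unit cube,
# joined cyclically. Crossings are counted in the xy-projection by testing
# every pair of non-adjacent edges; average counts grow roughly like n^2.
import numpy as np

def uniform_random_polygon(n, rng):
    return rng.random((n, 3))  # edge i runs from vertex i to vertex (i+1) mod n

def _orient(a, b, c):
    """Sign of the 2D cross product (b - a) x (c - a)."""
    return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def _cross(p, q, r, s):
    """True if 2D segments pq and rs intersect (generic position assumed)."""
    return _orient(p, q, r) != _orient(p, q, s) and _orient(r, s, p) != _orient(r, s, q)

def projected_crossings(poly):
    pts = poly[:, :2]  # project onto the xy-plane
    n = len(pts)
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # edges sharing a vertex cannot cross generically
            count += _cross(*edges[i], *edges[j])
    return count

rng = np.random.default_rng(0)
for n in (20, 40, 80):
    avg = np.mean([projected_crossings(uniform_random_polygon(n, rng))
                   for _ in range(20)])
    print(f"n = {n:3d}: average projected crossings = {avg:.1f}")
```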

  8. Balancing Participation across Students in Large College Classes via Randomized Participation Credit

    ERIC Educational Resources Information Center

    McCleary, Daniel F.; Aspiranti, Kathleen B.; Foster, Lisa N.; Blondin, Carolyn A.; Gaylon, Charles E.; Yaw, Jared S.; Forbes, Bethany N.; Williams, Robert L.

    2011-01-01

    The study examines the effects of randomized credit on the percentage of students participating at four predefined levels. Students recorded their comments on specially designed record cards, and days were randomly selected for participation credit. This arrangement balanced participation across students while cutting instructor time for recording…

  9. A large-scale cluster randomized trial to determine the effects of community-based dietary sodium reduction--the China Rural Health Initiative Sodium Reduction Study.

    PubMed

    Li, Nicole; Yan, Lijing L; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce

    2013-11-01

    Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread, and an effective sodium reduction program has potential to improve cardiovascular health. This study is a large-scale, cluster-randomized trial done in five Northern Chinese provinces. Two counties have been selected from each province and 12 townships in each county, making a total of 120 clusters. Within each township one village has been selected for participation, with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is dietary sodium intake level estimated from assays of 24-hour urine. The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year, with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide.

  10. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    NASA Astrophysics Data System (ADS)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the eff ect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the eff ect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the eff ect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater eff ects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.

  11. A large-scale cluster randomized trial to determine the effects of community-based dietary sodium reduction – the China Rural Health Initiative Sodium Reduction Study

    PubMed Central

    Li, Nicole; Yan, Lijing L.; Niu, Wenyi; Labarthe, Darwin; Feng, Xiangxian; Shi, Jingpu; Zhang, Jianxin; Zhang, Ruijuan; Zhang, Yuhong; Chu, Hongling; Neiman, Andrea; Engelgau, Michael; Elliott, Paul; Wu, Yangfeng; Neal, Bruce

    2013-01-01

    Background Cardiovascular diseases are the leading cause of death and disability in China. High blood pressure caused by excess intake of dietary sodium is widespread, and an effective sodium reduction program has potential to improve cardiovascular health. Design This study is a large-scale, cluster-randomized trial done in five Northern Chinese provinces. Two counties have been selected from each province and 12 townships in each county, making a total of 120 clusters. Within each township one village has been selected for participation, with 1:1 randomization stratified by county. The sodium reduction intervention comprises community health education and a food supply strategy based upon providing access to salt substitute. Subsidization of the price of salt substitute was done in 30 intervention villages selected at random. Control villages continued usual practices. The primary outcome for the study is dietary sodium intake level estimated from assays of 24-hour urine. Trial status The trial recruited and randomized 120 townships in April 2011. The sodium reduction program was commenced in the 60 intervention villages between May and June of that year, with outcome surveys scheduled for October to December 2012. Baseline data collection shows that randomisation achieved good balance across groups. Discussion The establishment of the China Rural Health Initiative has enabled the launch of this large-scale trial designed to identify a novel, scalable strategy for reduction of dietary sodium and control of blood pressure. If proved effective, the intervention could plausibly be implemented at low cost in large parts of China and other countries worldwide. PMID:24176436

  12. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
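
    The toy Monte-Carlo simulation below (not the INTERPHONE code) illustrates the headline finding: plausible random multiplicative recall error attenuates an assumed true odds ratio toward the null. Sample size, baseline risk, and error magnitudes are all invented for illustration.

```python
# Simulate a case-control study with a known odds ratio, then contaminate
# reported phone use with random multiplicative recall error and watch the
# crude estimated OR attenuate toward the null. All values are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
true_slope = np.log(1.5)  # assumed true OR of 1.5 per unit of log(1 + use)

exposure = rng.lognormal(0.0, 1.0, n)  # true amount of mobile phone use
logit = -2.0 + true_slope * np.log1p(exposure)
case = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

def crude_or(reported):
    """Odds ratio comparing above- vs below-median reported exposure."""
    high = reported > np.median(reported)
    a, b = np.sum(case & high), np.sum(case & ~high)
    c, d = np.sum(~case & high), np.sum(~case & ~high)
    return (a * d) / (b * c)

print(f"no recall error    : OR = {crude_or(exposure):.2f}")
for sigma in (0.5, 1.0, 2.0):  # widening random recall error
    noisy = exposure * rng.lognormal(0.0, sigma, n)
    print(f"recall error s={sigma}: OR = {crude_or(noisy):.2f}")
```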

  13. Genetic drift at expanding frontiers promotes gene segregation

    PubMed Central

    Hallatschek, Oskar; Hersen, Pascal; Ramanathan, Sharad; Nelson, David R.

    2007-01-01

    Competition between random genetic drift and natural selection plays a central role in evolution: Whereas nonbeneficial mutations often prevail in small populations by chance, mutations that sweep through large populations typically confer a selective advantage. Here, however, we observe chance effects during range expansions that dramatically alter the gene pool even in large microbial populations. Initially well-mixed populations of two fluorescently labeled strains of Escherichia coli develop well-defined, sector-like regions with fractal boundaries in expanding colonies. The formation of these regions is driven by random fluctuations that originate in a thin band of pioneers at the expanding frontier. A comparison of bacterial and yeast colonies (Saccharomyces cerevisiae) suggests that this large-scale genetic sectoring is a generic phenomenon that may provide a detectable footprint of past range expansions. PMID:18056799

  14. Key Aspects of Nucleic Acid Library Design for in Vitro Selection

    PubMed Central

    Vorobyeva, Maria A.; Davydova, Anna S.; Vorobjev, Pavel E.; Pyshnyi, Dmitrii V.; Venyaminova, Alya G.

    2018-01-01

    Nucleic acid aptamers capable of selectively recognizing their target molecules have nowadays been established as powerful and tunable tools for biospecific applications, be it therapeutics, drug delivery systems or biosensors. It is now generally acknowledged that in vitro selection enables one to generate aptamers to almost any target of interest. However, the success of selection and the affinity of the resulting aptamers depend to a large extent on the nature and design of an initial random nucleic acid library. In this review, we summarize and discuss the most important features of the design of nucleic acid libraries for in vitro selection such as the nature of the library (DNA, RNA or modified nucleotides), the length of a randomized region and the presence of fixed sequences. We also compare and contrast different randomization strategies and consider computer methods of library design and some other aspects. PMID:29401748

  15. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to the advancement in sensor technology, the growing large medical image data have the ability to visualize the anatomical changes in biological tissues. As a consequence, the medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. But in the meantime, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend the functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  16. How does epistasis influence the response to selection?

    PubMed Central

    Barton, N H

    2017-01-01

    Much of quantitative genetics is based on the 'infinitesimal model', under which selection has a negligible effect on the genetic variance. This is typically justified by assuming a very large number of loci with additive effects. However, it applies even when genes interact, provided that the number of loci is large enough that selection on each of them is weak relative to random drift. In the long term, directional selection will change allele frequencies, but even then, the effects of epistasis on the ultimate change in trait mean due to selection may be modest. Stabilising selection can maintain many traits close to their optima, even when the underlying alleles are weakly selected. However, the number of traits that can be optimised is apparently limited to ~4N_e by the 'drift load', and this is hard to reconcile with the apparent complexity of many organisms. Just as for the mutation load, this limit can be evaded by a particular form of negative epistasis. A more robust limit is set by the variance in reproductive success. This suggests that selection accumulates information most efficiently in the infinitesimal regime, when selection on individual alleles is weak, and comparable with random drift. A review of evidence on selection strength suggests that although most variance in fitness may be because of alleles with large N_e s, substantial amounts of adaptation may be because of alleles in the infinitesimal regime, in which epistasis has modest effects. PMID:27901509

  17. How does epistasis influence the response to selection?

    PubMed

    Barton, N H

    2017-01-01

    Much of quantitative genetics is based on the 'infinitesimal model', under which selection has a negligible effect on the genetic variance. This is typically justified by assuming a very large number of loci with additive effects. However, it applies even when genes interact, provided that the number of loci is large enough that selection on each of them is weak relative to random drift. In the long term, directional selection will change allele frequencies, but even then, the effects of epistasis on the ultimate change in trait mean due to selection may be modest. Stabilising selection can maintain many traits close to their optima, even when the underlying alleles are weakly selected. However, the number of traits that can be optimised is apparently limited to ~4N_e by the 'drift load', and this is hard to reconcile with the apparent complexity of many organisms. Just as for the mutation load, this limit can be evaded by a particular form of negative epistasis. A more robust limit is set by the variance in reproductive success. This suggests that selection accumulates information most efficiently in the infinitesimal regime, when selection on individual alleles is weak, and comparable with random drift. A review of evidence on selection strength suggests that although most variance in fitness may be because of alleles with large N_e s, substantial amounts of adaptation may be because of alleles in the infinitesimal regime, in which epistasis has modest effects.

  18. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
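
    A small synthetic sketch of the least squares solution (LSS) endmember step followed by unconstrained SMA, as outlined above: with sample abundances known, one lstsq call recovers the endmember spectra, and a second lstsq unmixes a new pixel. Band count, noise level, and variable names are assumptions.

```python
# LSS endmember derivation + unconstrained SMA on synthetic data. With
# training abundances A known, X = A @ E.T (+ noise) is linear in the
# endmember matrix E, so one lstsq call recovers it; a second unmixes a pixel.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_endmembers, n_train = 7, 3, 500  # e.g. a MODIS-like band count

E_true = rng.uniform(0.05, 0.9, (n_bands, n_endmembers))  # unknown spectra
A_train = rng.dirichlet(np.ones(n_endmembers), n_train)   # known abundances
X_train = A_train @ E_true.T + rng.normal(0.0, 0.01, (n_train, n_bands))

# LSS step: solve A_train @ E.T = X_train for the endmember spectra.
E_hat = np.linalg.lstsq(A_train, X_train, rcond=None)[0].T  # (bands, endmembers)

# Unconstrained SMA: solve E_hat @ a = x for one new mixed pixel.
a_true = rng.dirichlet(np.ones(n_endmembers))
x_new = E_true @ a_true + rng.normal(0.0, 0.01, n_bands)
a_hat = np.linalg.lstsq(E_hat, x_new, rcond=None)[0]
print("true abundances     :", np.round(a_true, 3))
print("estimated abundances:", np.round(a_hat, 3))
```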

  19. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-11-01

    Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.

  20. DEVELOPMENT OF MACROINVERTEBRATE INDICATORS FOR NONWADEABLE TRIBUTARIES TO THE OHIO AND MISSISSIPPI RIVERS

    EPA Science Inventory

    In 2004-2005, macroinvertebrates were sampled from selected large rivers of the upper Midwest to develop appropriate assessment indicators. Macroinvertebrates, habitat and water chemistry data were collected from 132 randomly selected sites across 6 rivers with varying land cove...

  1. Selection of stable scFv antibodies by phage display.

    PubMed

    Brockmann, Eeva-Christine

    2012-01-01

    ScFv fragments are popular recombinant antibody formats but often suffer from limited stability. Phage display is a powerful tool in antibody engineering and applicable also for stability selection. ScFv variants with improved stability can be selected from large randomly mutated phage displayed libraries with a specific antigen after the unstable variants have been inactivated by heat or GdmCl. Irreversible scFv denaturation, which is a prerequisite for efficient selection, is achieved by combining denaturation with reduction of the intradomain disulfide bonds. Repeated selection cycles of increasing stringency result in enrichment of stabilized scFv fragments. Procedures for constructing a randomly mutated scFv library by error-prone PCR and phage display selection for enrichment of stable scFv antibodies from the library are described here.

  2. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts often come at the expense of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition, the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.

  3. Development of Multiple Regression Equations To Predict Fourth Graders' Achievement in Reading and Selected Content Areas.

    ERIC Educational Resources Information Center

    Hafner, Lawrence E.

    A study developed a multiple regression prediction equation for each of six selected achievement variables in a popular standardized test of achievement. Subjects, 42 fourth-grade pupils randomly selected across several classes in a large elementary school in a north Florida city, were administered several standardized tests to determine predictor…

  4. Why the null matters: statistical tests, random walks and evolution.

    PubMed

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise-immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
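
    A minimal sketch of an excursion-based test in the spirit described above: compare the observed series' maximum excursion with the distribution of maxima from simulated random walks matched on step variance. This is a generic illustration, not any of the cited tests verbatim.

```python
# Compare the maximum excursion of an observed trait series against maxima
# of simulated random walks with matched step variance. A small p_more means
# more change than drift predicts (directional); a small p_less, less (stasis).
import numpy as np

rng = np.random.default_rng(2)

def random_walk_test(series, n_sim=5_000):
    steps = np.diff(series)
    sims = np.cumsum(rng.normal(0.0, steps.std(ddof=1), (n_sim, steps.size)), axis=1)
    sim_max = np.max(np.abs(sims), axis=1)
    obs = np.max(np.abs(series - series[0]))
    return obs, np.mean(sim_max >= obs), np.mean(sim_max <= obs)

trend = np.cumsum(rng.normal(0.3, 1.0, 60))   # directional selection: drifting mean
stasis = rng.normal(0.0, 1.0, 60)             # stabilizing selection: bounded wiggle
for name, s in (("trend", trend), ("stasis", stasis)):
    obs, p_more, p_less = random_walk_test(s)
    print(f"{name}: max excursion = {obs:.2f}, "
          f"P(walk >= obs) = {p_more:.3f}, P(walk <= obs) = {p_less:.3f}")
```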

  5. Using Propensity Scores in Quasi-Experimental Designs to Equate Groups

    ERIC Educational Resources Information Center

    Lane, Forrest C.; Henson, Robin K.

    2010-01-01

    Education research rarely lends itself to large scale experimental research and true randomization, leaving the researcher to quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…

  6. Applications of random forest feature selection for fine-scale genetic population assignment.

    PubMed

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with F_ST ranking for selection of single nucleotide polymorphisms (SNP) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90% using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than F_ST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using F_ST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
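
    The sketch below illustrates the general workflow on synthetic genotypes, with plain random forest importances standing in for the paper's regularized and guided variants: rank SNPs by importance, keep a top panel, and score self-assignment accuracy. A real study would rank within training folds to avoid selection bias; this toy version notes but skips that.

```python
# Rank SNPs by random forest importance, keep a fixed-size panel, and score
# self-assignment accuracy. Synthetic two-population genotypes: only the
# first 25 loci carry a real allele-frequency difference.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_pop, n_snps, n_informative = 200, 1_000, 25

p_base = rng.uniform(0.1, 0.9, n_snps)
p_shift = p_base.copy()
p_shift[:n_informative] = np.clip(p_base[:n_informative] + 0.3, 0.0, 1.0)

X = np.vstack([rng.binomial(2, p_base, (n_per_pop, n_snps)),    # population 0
               rng.binomial(2, p_shift, (n_per_pop, n_snps))])  # population 1
y = np.repeat([0, 1], n_per_pop)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
panel = np.argsort(rf.feature_importances_)[::-1][:50]  # top-50 SNP panel

# NOTE: ranking and evaluating on the same data leaks information; a real
# study would rank within training folds only.
acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      X[:, panel], y, cv=5).mean()
print(f"self-assignment accuracy with a 50-SNP panel: {acc:.2%}")
```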

  7. On the Choice of Adequate Randomization Ranges for Limiting the Use of Unwanted Cues in Same-Different, Dual-Pair, and Oddity Tasks

    PubMed Central

    Dai, Huanping; Micheyl, Christophe

    2010-01-01

    A major concern when designing a psychophysical experiment is that participants may use another stimulus feature (“cue”) than that intended by the experimenter. One way to avoid this involves applying random variations to the corresponding feature across stimulus presentations, to make the “unwanted” cue unreliable. An important question facing experimenters who use this randomization (“roving”) technique is: How large should the randomization range be to ensure that participants cannot achieve a certain proportion correct (PC) by using the unwanted cue, while at the same time avoiding unnecessary interference of the randomization with task performance? Previous publications have provided formulas for the selection of adequate randomization ranges in yes-no and multiple-alternative, forced-choice tasks. In this article, we provide figures and tables, which can be used to select randomization ranges that are better suited to experiments involving a same-different, dual-pair, or oddity task. PMID:20139466

  8. Combined cognitive-strategy and task-specific training improves transfer to untrained activities in sub-acute stroke: An exploratory randomized controlled trial

    PubMed Central

    McEwen, Sara; Polatajko, Helene; Baum, Carolyn; Rios, Jorge; Cirone, Dianne; Doherty, Meghan; Wolf, Timothy

    2014-01-01

    Purpose The purpose of this study was to estimate the effect of the Cognitive Orientation to daily Occupational Performance (CO-OP) approach compared to usual outpatient rehabilitation on activity and participation in people less than 3 months post stroke. Methods An exploratory, single blind, randomized controlled trial with a usual care control arm was conducted. Participants referred to 2 stroke rehabilitation outpatient programs were randomized to receive either Usual Care or CO-OP. The primary outcome was actual performance of trained and untrained self-selected activities, measured using the Performance Quality Rating Scale (PQRS). Additional outcomes included the Canadian Occupational Performance Measure (COPM), the Stroke Impact Scale Participation Domain, the Community Participation Index, and the Self Efficacy Gauge. Results Thirty-five (35) eligible participants were randomized; 26 completed the intervention. Post-intervention, PQRS change scores demonstrated CO-OP had a medium effect over Usual Care on trained self-selected activities (d=0.5) and a large effect on untrained (d=1.2). At a 3 month follow-up, PQRS change scores indicated a large effect of CO-OP on both trained (d=1.6) and untrained activities (d=1.1). CO-OP had a small effect on COPM and a medium effect on the Community Participation Index perceived control and the Self-Efficacy Gauge. Conclusion CO-OP was associated with a large treatment effect on follow up performances of self-selected activities, and demonstrated transfer to untrained activities. A larger trial is warranted. PMID:25416738

  9. Combined Cognitive-Strategy and Task-Specific Training Improve Transfer to Untrained Activities in Subacute Stroke: An Exploratory Randomized Controlled Trial.

    PubMed

    McEwen, Sara; Polatajko, Helene; Baum, Carolyn; Rios, Jorge; Cirone, Dianne; Doherty, Meghan; Wolf, Timothy

    2015-07-01

    The purpose of this study was to estimate the effect of the Cognitive Orientation to daily Occupational Performance (CO-OP) approach compared with usual outpatient rehabilitation on activity and participation in people <3 months poststroke. An exploratory, single-blind, randomized controlled trial, with a usual-care control arm, was conducted. Participants referred to 2 stroke rehabilitation outpatient programs were randomized to receive either usual care or CO-OP. The primary outcome was actual performance of trained and untrained self-selected activities, measured using the Performance Quality Rating Scale (PQRS). Additional outcomes included the Canadian Occupational Performance Measure (COPM), the Stroke Impact Scale Participation Domain, the Community Participation Index, and the Self-Efficacy Gauge. A total of 35 eligible participants were randomized; 26 completed the intervention. Post-intervention, PQRS change scores demonstrated that CO-OP had a medium effect over usual care on trained self-selected activities (d = 0.5) and a large effect on untrained activities (d = 1.2). At a 3-month follow-up, PQRS change scores indicated a large effect of CO-OP on both trained (d = 1.6) and untrained activities (d = 1.1). CO-OP had a small effect on COPM and a medium effect on the Community Participation Index perceived control and on the Self-Efficacy Gauge. CO-OP was associated with a large treatment effect on follow-up performances of self-selected activities and demonstrated transfer to untrained activities. A larger trial is warranted.

  10. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs; however, this is too time-consuming and not favorable in GWA for high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs while avoiding the very high computational cost of an exhaustive search for an optimal mtry, and it maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprised of 408 803 SNPs and Alzheimer case-control data comprised of 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and it can generate a better random forest with higher accuracy and lower error bound than those generated by Breiman's random forest method. For Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders for further biological investigations.
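
    A toy rendering of the stratified subspace idea under stated assumptions (chi-square as the informativeness score, five equal-width bins): each tree receives a feature subspace containing the same number of SNPs drawn from every informativeness group. Names and settings are illustrative, not the authors'.

```python
# Stratified feature-subspace random forest: score SNP informativeness,
# cut the scores into equal-width bins, and give each tree a subspace that
# draws the same number of SNPs from every bin.
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.tree import DecisionTreeClassifier

def stratified_subspace(scores, n_bins, per_bin, rng):
    edges = np.linspace(scores.min(), scores.max(), n_bins + 1)
    bins = np.digitize(scores, edges[1:-1])  # equal-width informativeness groups
    chosen = []
    for b in range(n_bins):
        idx = np.flatnonzero(bins == b)
        if idx.size:
            chosen.extend(rng.choice(idx, min(per_bin, idx.size), replace=False))
    return np.asarray(chosen)

def stratified_forest(X, y, n_trees=100, n_bins=5, per_bin=10, seed=0):
    rng = np.random.default_rng(seed)
    scores = chi2(X, y)[0]  # chi-square as the per-SNP informativeness score
    forest = []
    for _ in range(n_trees):
        feats = stratified_subspace(scores, n_bins, per_bin, rng)
        rows = rng.integers(0, len(y), len(y))  # bootstrap sample of individuals
        tree = DecisionTreeClassifier().fit(X[rows][:, feats], y[rows])
        forest.append((tree, feats))
    return forest

def forest_predict(forest, X):
    votes = np.mean([t.predict(X[:, f]) for t, f in forest], axis=0)
    return (votes > 0.5).astype(int)

# Toy usage with 0/1/2 genotype counts (chi2 needs non-negative features).
rng = np.random.default_rng(1)
X = rng.integers(0, 3, (300, 500))
y = (X[:, :5].sum(axis=1) + rng.integers(0, 4, 300) > 8).astype(int)
forest = stratified_forest(X, y)
print("training accuracy:", (forest_predict(forest, X) == y).mean())
```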

  11. Electrofishing Effort Required to Estimate Biotic Condition in Southern Idaho Rivers

    EPA Science Inventory

    An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in...

  12. Many Children Left Behind? Textbooks and Test Scores in Kenya. NBER Working Paper No. 13300

    ERIC Educational Resources Information Center

    Glewwe, Paul; Kremer, Michael; Moulin, Sylvie

    2007-01-01

    A randomized evaluation suggests that a program which provided official textbooks to randomly selected rural Kenyan primary schools did not increase test scores for the average student. In contrast, the previous literature suggests that textbook provision has a large impact on test scores. Disaggregating the results by students' initial academic…

  13. Educational Research with Real-World Data: Reducing Selection Bias with Propensity Scores

    ERIC Educational Resources Information Center

    Adelson, Jill L.

    2013-01-01

    Often it is infeasible or unethical to use random assignment in educational settings to study important constructs and questions. Hence, educational research often uses observational data, such as large-scale secondary data sets and state and school district data, and quasi-experimental designs. One method of reducing selection bias in estimations…

  14. Site Selection in Experiments: An Assessment of Site Recruitment and Generalizability in Two Scale-Up Studies

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castilla, Veronica

    2016-01-01

    Recently, statisticians have begun developing methods to improve the generalizability of results from large-scale experiments in education. This work has included the development of methods for improved site selection when random sampling is infeasible, including the use of stratification and targeted recruitment strategies. This article provides…

  15. The genealogy of samples in models with selection.

    PubMed

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  16. The Genealogy of Samples in Models with Selection

    PubMed Central

    Neuhauser, C.; Krone, S. M.

    1997-01-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604

  17. Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors. NBER Working Paper No. 14081

    ERIC Educational Resources Information Center

    Carrell, Scott E.; West, James E.

    2008-01-01

    It is difficult to measure teaching quality at the postsecondary level because students typically "self-select" their coursework and their professors. Despite this, student evaluations of professors are widely used in faculty promotion and tenure decisions. We exploit the random assignment of college students to professors in a large body of…

  18. Selective decontamination of the digestive tract in gastrointestinal surgery: useful in infection prevention? A systematic review.

    PubMed

    Abis, Gabor S A; Stockmann, Hein B A C; van Egmond, Marjolein; Bonjer, Hendrik J; Vandenbroucke-Grauls, Christina M J E; Oosterling, Steven J

    2013-12-01

    Gastrointestinal surgery is associated with a high incidence of infectious complications. Selective decontamination of the digestive tract is an antimicrobial prophylaxis regimen that aims to eradicate gastrointestinal carriage of potentially pathogenic microorganisms and represents an adjunct to regular prophylaxis in surgery. Relevant studies were identified using bibliographic searches of MEDLINE, EMBASE, and the Cochrane database (period from 1970 to November 1, 2012). Only studies investigating selective decontamination of the digestive tract in gastrointestinal surgery were included. Two randomized clinical trials and one retrospective case-control trial showed significant benefit in terms of infectious complications and anastomotic leakage in colorectal surgery. Two randomized controlled trials in esophageal surgery and two randomized clinical trials in gastric surgery reported lower levels of infectious complications. Selective decontamination of the digestive tract reduces infections following esophageal, gastric, and colorectal surgeries and also appears to have beneficial effects on anastomotic leakage in colorectal surgery. We believe these results provide the basis for a large multicenter prospective study to investigate the role of selective decontamination of the digestive tract in colorectal surgery.

  19. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
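
    The sketch below builds localized random sampling masks in the spirit of the description: each measurement picks a random center pixel and includes nearby pixels with a probability that decays with distance. The Gaussian fall-off and all parameter values are assumptions; the paper's exact kernel may differ.

```python
# Build localized random sampling masks: each measurement picks a random
# center pixel and keeps nearby pixels with a distance-decaying (Gaussian)
# probability, then averages the kept pixels into one measurement.
import numpy as np

def localized_masks(n_measurements, side, scale, rng):
    yy, xx = np.mgrid[0:side, 0:side]
    masks = np.zeros((n_measurements, side * side))
    for m in range(n_measurements):
        cy, cx = rng.integers(0, side, 2)  # randomly selected center pixel
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        keep = rng.random((side, side)) < np.exp(-d2 / (2.0 * scale**2))
        keep[cy, cx] = True  # the center pixel is always measured
        masks[m] = keep.ravel() / keep.sum()
    return masks

rng = np.random.default_rng(4)
Phi = localized_masks(n_measurements=256, side=32, scale=2.0, rng=rng)
image = rng.random((32, 32)).ravel()  # stand-in for a natural image
y = Phi @ image                       # compressed measurements for CS recovery
print(Phi.shape, y.shape)
```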

  20. Age-related Cataract in a Randomized Trial of Selenium and Vitamin E in Men: The SELECT Eye Endpoints (SEE) Study

    PubMed Central

    Christen, William G.; Glynn, Robert J.; Gaziano, J. Michael; Darke, Amy K.; Crowley, John J.; Goodman, Phyllis J.; Lippman, Scott M.; Lad, Thomas E.; Bearden, James D.; Goodman, Gary E.; Minasian, Lori M.; Thompson, Ian M.; Blanke, Charles D.; Klein, Eric A.

    2014-01-01

    Importance Observational studies suggest a role for dietary nutrients such as vitamin E and selenium in cataract prevention. However, the results of randomized trials of vitamin E supplements and cataract have been disappointing, and are not yet available for selenium. Objective To test whether long-term supplementation with selenium and vitamin E affects the incidence of cataract in a large cohort of men. Design, Setting, and Participants The SELECT Eye Endpoints (SEE) study was an ancillary study of the SWOG-coordinated Selenium and Vitamin E Cancer Prevention Trial (SELECT), a randomized, placebo-controlled, four-arm trial of selenium and vitamin E conducted among 35,533 men aged 50 years and older for African Americans and 55 and older for all other men, at 427 participating sites in the US, Canada, and Puerto Rico. A total of 11,267 SELECT participants from 128 SELECT sites participated in the SEE ancillary study. Intervention Individual supplements of selenium (200 µg/d from L-selenomethionine) and vitamin E (400 IU/d of all-rac-α-tocopheryl acetate). Main Outcome Measures Incident cataract, defined as a lens opacity, age-related in origin, responsible for a reduction in best-corrected visual acuity to 20/30 or worse based on self-report confirmed by medical record review, and cataract extraction, defined as the surgical removal of an incident cataract. Results During a mean (SD) of 5.6 (1.2) years of treatment and follow-up, 389 cases of cataract were documented. There were 185 cataracts in the selenium group and 204 in the no selenium group (hazard ratio [HR], 0.91; 95 percent confidence interval [CI], 0.75 to 1.11; P=.37). For vitamin E, there were 197 cases in the treated group and 192 in the placebo group (HR, 1.02; CI, 0.84 to 1.25; P=.81). Similar results were observed for cataract extraction. Conclusions and Relevance These randomized trial data from a large cohort of apparently healthy men indicate that long-term daily supplementation with selenium and/or vitamin E is unlikely to have a large beneficial effect on age-related cataract. PMID:25232809

  1. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.

  2. Student Sorting and Bias in Value Added Estimation: Selection on Observables and Unobservables. NBER Working Paper No. 14666

    ERIC Educational Resources Information Center

    Rothstein, Jesse

    2009-01-01

    Non-random assignment of students to teachers can bias value added estimates of teachers' causal effects. Rothstein (2008a, b) shows that typical value added models indicate large counterfactual effects of 5th grade teachers on students' 4th grade learning, indicating that classroom assignments are far from random. This paper quantifies the…

  3. Factors Associated with High Use of a Workplace Web-Based Stress Management Program in a Randomized Controlled Intervention Study

    ERIC Educational Resources Information Center

    Hasson, H.; Brown, C.; Hasson, D.

    2010-01-01

    In web-based health promotion programs, large variations in participant engagement are common. The aim was to investigate determinants of high use of a worksite self-help web-based program for stress management. Two versions of the program were offered to randomly selected departments in IT and media companies. A static version of the program…

  4. Prevailing practices in the use of antibiotics by dairy farmers in Eastern Haryana region of India

    PubMed Central

    Kumar, Vikash; Gupta, Jancy

    2018-01-01

    Aim: The aim of the study was to assess the antibiotic use in dairy animals and to trace its usage pattern among the small, medium, and large dairy farmers in Eastern Haryana region of India. Materials and Methods: Karnal and Kurukshetra districts from the Eastern region of Haryana state were purposively selected, and four villages from each district were selected randomly. From each village, 21 farmers were selected using stratified random sampling by categorizing into small, medium, and large farmers, constituting a total of 168 farmers as respondents. An antibiotic usage index (AUI) was developed to assess usage of antibiotics by dairy farmers. Results: Frequency of veterinary consultancy was high among large dairy farmers, and they mostly preferred veterinarians over para-veterinarians for treatment of dairy animals. Small farmers demanded low-cost antibiotics from veterinarians, whereas large farmers rarely went for it. Antibiotics were used maximally for therapeutic purposes by all categories of farmers. Completion of treatment schedules and follow-up were strictly practiced by the majority of large farmers. The AUI revealed that large farmers were more consistent in decision-making about prudent use of antibiotics. Routine use of antibiotics after parturition to prevent disease and sale of milk without adhering to the withdrawal period were responsible for aggravating antibiotic resistance. The extent of antibiotic use by small farmers depended on the severity of disease. The large farmers opted for the prophylactic use of antibiotics at the herd level. Conclusion: Antibiotic usage practices were judicious among large dairy farmers, moderately prudent among medium dairy farmers, and faulty among small farmers. The frequency of veterinary consultancy promoted a better veterinary-client relationship among large farmers. PMID:29657416

  5. Effects of multiple spreaders in community networks

    NASA Astrophysics Data System (ADS)

    Hu, Zhao-Long; Ren, Zhuo-Ming; Yang, Guang-Yong; Liu, Jian-Guo

    2014-12-01

    Human contact networks exhibit community structure. Understanding how such community structure affects epidemic spreading could provide insights for preventing the spread of epidemics between communities. In this paper, we explore the spreading of multiple spreaders in community networks. A network is evolved using a clustering-preferential mechanism, and its communities are detected by the Girvan-Newman (GN) algorithm. We investigate spreading effectiveness by selecting nodes as spreaders in the following ways: the nodes with the largest degree in each community (community hubs), the same number of nodes with the largest degree from the global network (global large-degree), and one randomly selected node within each community (community random). The experimental results on the SIR model show that the global large-degree and community hubs methods are equally effective in the early stage of the infection, while the community random method is the worst. However, when the infection rate exceeds the critical value, the global large-degree method becomes the least effective. Furthermore, the discrepancy in effectiveness among the three methods decreases as the infection rate increases. Therefore, we should immunize the hubs in each community rather than the hubs of the global network to prevent the outbreak of epidemics.
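    The comparison can be reproduced in miniature. The sketch below is a simplified stand-in for the paper's setup (assumptions: a networkx planted-partition graph instead of the clustering-preferential network, greedy modularity communities instead of Girvan-Newman, and a discrete-time SIR in which infected nodes recover after one step):

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def sir_outbreak_size(G, seeds, beta=0.1, rng=None):
    """Discrete-time SIR; infected nodes recover after one step."""
    rng = rng or random.Random(0)
    infected, recovered = set(seeds), set()
    while infected:
        new = {v for u in infected for v in G[u]
               if v not in infected and v not in recovered and rng.random() < beta}
        recovered |= infected
        infected = new
    return len(recovered)

G = nx.planted_partition_graph(4, 50, 0.2, 0.01, seed=1)   # 4 communities of 50
communities = greedy_modularity_communities(G)
deg = dict(G.degree())

strategies = {
    "community hubs":      [max(c, key=deg.get) for c in communities],
    "global large-degree": sorted(deg, key=deg.get, reverse=True)[:len(communities)],
    "community random":    [random.Random(2).choice(sorted(c)) for c in communities],
}
for name, seeds in strategies.items():
    print(f"{name:20s} outbreak size: {sir_outbreak_size(G, seeds)}")
```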

  6. A novel artificial fish swarm algorithm for solving large-scale reliability-redundancy application problem.

    PubMed

    He, Qiang; Hu, Xiangtao; Ren, Hong; Zhang, Hongqi

    2015-11-01

    A novel artificial fish swarm algorithm (NAFSA) is proposed for solving the large-scale reliability-redundancy allocation problem (RAP). In NAFSA, the social behaviors of the fish swarm are classified into three types: foraging behavior, reproductive behavior, and random behavior. The foraging behavior employs two position-updating strategies, and the selection and crossover operators define the reproductive ability of an artificial fish. For the random behavior, which is essentially a mutation strategy, the basic cloud generator is used as the mutation operator. Finally, numerical results for four benchmark problems and a large-scale RAP are reported and compared. NAFSA shows good performance in terms of computational accuracy and computational efficiency for the large-scale RAP.

  7. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    PubMed

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n_g ≈ 100-150 genes). In the present investigation, a large random European population sample (average n_g ≈ 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005) much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  8. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis, as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
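    The first strategy is straightforward to sketch. The example below is an illustrative toy under stated assumptions (synthetic data, LassoCV as the selector, five imputations, majority-vote combination), not the authors' procedure:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)   # only variables 0 and 1 matter
X[rng.random(size=X.shape) < 0.1] = np.nan       # knock out 10% of values (MCAR)

m, votes = 5, np.zeros(p)
for i in range(m):
    Xi = IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X)
    votes += LassoCV(cv=5, random_state=i).fit(Xi, y).coef_ != 0

print("selected by majority vote:", np.flatnonzero(votes > m / 2))
```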

  9. Roosting habitat use and selection by northern spotted owls during natal dispersal

    USGS Publications Warehouse

    Sovern, Stan G.; Forsman, Eric D.; Dugger, Catherine M.; Taylor, Margaret

    2015-01-01

    We studied habitat selection by northern spotted owls (Strix occidentalis caurina) during natal dispersal in Washington State, USA, at both the roost site and landscape scales. We used logistic regression to obtain parameters for an exponential resource selection function based on vegetation attributes in roost and random plots in 76 forest stands that were used for roosting. We used a similar analysis to evaluate selection of landscape habitat attributes based on 301 radio-telemetry relocations and random points within our study area. We found no evidence of within-stand selection for any of the variables examined, but 78% of roosts were in stands with at least some large (>50 cm dbh) trees. At the landscape scale, owls selected for stands with high canopy cover (>70%). Dispersing owls selected vegetation types that were more similar to habitat selected by adult owls than habitat that would result from following guidelines previously proposed to maintain dispersal habitat. Our analysis indicates that juvenile owls select stands for roosting that have greater canopy cover than is recommended in current agency guidelines.

  10. Modified Bat Algorithm for Feature Selection with the Wisconsin Diagnosis Breast Cancer (WDBC) Dataset

    PubMed

    Jeyasingh, Suganthi; Veluchamy, Malathi

    2017-05-01

    Early diagnosis of breast cancer is essential to save the lives of patients. Usually, medical datasets include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency. It requires the elimination of inappropriate and repeated data from the dataset before final diagnosis. This can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step to increase classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection to eliminate irrelevant features from an original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset. Ranking against the global best features identified the predominant features available in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnosis Breast Cancer (WDBC) dataset was used to evaluate the performance of the proposed MBA feature selection algorithm. The proposed algorithm achieved better performance in terms of the Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE), and Root Relative Squared Error (RRSE).
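    The evaluation half of such a pipeline is easy to reproduce; the bat-algorithm search itself is omitted here. In the sketch below, the feature subset is a made-up placeholder standing in for whatever the selector returns:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, matthews_corrcoef
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)     # the WDBC dataset
subset = [0, 3, 7, 20, 22, 27]                 # hypothetical selected features

X_tr, X_te, y_tr, y_te = train_test_split(X[:, subset], y,
                                          test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("kappa:", round(cohen_kappa_score(y_te, pred), 3))
print("MCC:  ", round(matthews_corrcoef(y_te, pred), 3))
```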

  11. Random forest feature selection approach for image segmentation

    NASA Astrophysics Data System (ADS)

    Lefkovits, László; Lefkovits, Szidónia; Emerich, Simina; Vaida, Mircea Florin

    2017-03-01

    In the field of image segmentation, discriminative models have shown promising performance. Generally, every such model begins with the extraction of numerous features from annotated images. Most authors create their discriminative model by using many features without applying any selection criteria. A more reliable model can be built by using a framework that selects the variables that are important from the point of view of classification and eliminates the unimportant ones. In this article we present a framework for feature selection and data dimensionality reduction. The methodology is built around the random forest (RF) algorithm and its variable importance evaluation. In order to deal with datasets so large as to be practically unmanageable, we propose an algorithm based on RF that reduces the dimension of the database by eliminating irrelevant features. Furthermore, this framework is applied to optimize our discriminative model for brain tumor segmentation.
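    A minimal version of importance-based reduction, on synthetic placeholder data rather than image features, can be written with scikit-learn's SelectFromModel (which by default keeps variables whose impurity importance exceeds the mean):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=500, n_features=60, n_informative=8,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

selector = SelectFromModel(rf, prefit=True)    # threshold = mean importance
X_reduced = selector.transform(X)
print("kept", selector.get_support().sum(), "of", X.shape[1], "features")
```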

  12. College Climate and Teacher-Trainee's Academic Work in Selected Colleges of Education in the Ashanti Region of Ghana

    ERIC Educational Resources Information Center

    Adjei, Augustine; Dontoh, Samuel; Baafi-Frimpong, Stephen

    2017-01-01

    The study aimed at investigating the extent to which College climate (Leadership roles/practices and Class size) impacts the academic work of Teacher-trainees. A survey research design was used for the study because it involved a relatively large population that was purposively and randomly selected. A sample size of 322 out of the…

  13. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    PubMed

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

    Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
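    The screening step then reduces to a table lookup plus a normal CDF. The sketch below illustrates the idea with a fabricated two-entry table (real use would interpolate over a dense pre-computed grid of compositions):

```python
from scipy.stats import norm

# hypothetical pre-computed (mean, sd) of randomized-sequence MFE,
# keyed by nucleotide composition bin (A%, C%, G%, U%)
mfe_table = {
    (25, 25, 25, 25): (-28.4, 4.1),
    (20, 30, 30, 20): (-33.0, 4.5),
}

def mfe_pvalue(composition, candidate_mfe):
    """P(random MFE <= candidate MFE) under the fitted normal distribution."""
    mu, sd = mfe_table[composition]     # production code would interpolate
    return norm.cdf(candidate_mfe, loc=mu, scale=sd)

print(f"P = {mfe_pvalue((25, 25, 25, 25), -42.0):.4f}")  # low P: miRNA-like fold
```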

  14. Impact of Probiotics on Necrotizing Enterocolitis

    PubMed Central

    Underwood, Mark A.

    2016-01-01

    A large number of randomized placebo-controlled clinical trials and cohort studies have demonstrated a decrease in the incidence of necrotizing enterocolitis with administration of probiotic microbes. These studies have prompted many neonatologists to adopt routine prophylactic administration of probiotics while others await more definitive studies and/or probiotic products with demonstrated purity and stable numbers of live organisms. Cross-contamination and inadequate sample size limit the value of further traditional placebo-controlled randomized controlled trials. Key areas for future research include mechanisms of protection, optimum probiotic species or strains (or combinations thereof) and duration of treatment, interactions between diet and the administered probiotic, and the influence of genetic polymorphisms in the mother and infant on probiotic response. Next generation probiotics selected based on bacterial genetics rather than ease of production and large cluster-randomized clinical trials hold great promise for NEC prevention. PMID:27836423

  15. Random sampling of elementary flux modes in large-scale metabolic networks.

    PubMed

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler.
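    The key filtering step — retaining an equal-probability random subset of the newly generated candidate modes at each iteration — is simple to sketch on its own (the candidate generation is mocked here; the real algorithm combines modes via the canonical basis approach):

```python
import random

def filter_candidates(candidates, sample_size, rng=random.Random(0)):
    """Keep a fixed-size uniform random subset; every candidate is equally likely."""
    if len(candidates) <= sample_size:
        return list(candidates)
    return rng.sample(list(candidates), sample_size)

new_modes = [f"mode_{i}" for i in range(10_000)]   # mock combination step
kept = filter_candidates(new_modes, 500)
print(len(kept), "modes carried into the next iteration")
```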

  16. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
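    A bare-bones backward elimination loop of the kind examined here looks as follows (synthetic data stand in for the StreamCat predictors; note the OOB estimate is reused for selection, which is exactly the source of the optimism the paper warns about):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=40, n_informative=6,
                           random_state=1)
cols = list(range(X.shape[1]))
while len(cols) > 5:
    rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                random_state=1).fit(X[:, cols], y)
    print(f"{len(cols):2d} predictors -> OOB accuracy {rf.oob_score_:.3f}")
    cols.pop(int(np.argmin(rf.feature_importances_)))  # drop the weakest variable
```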

  17. Random Bits Forest: a Strong Classifier/Regressor for Big Data

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Yi; Pu, Weilin; Wen, Kathryn; Shugart, Yin Yao; Xiong, Momiao; Jin, Li

    2016-07-01

    Efficiency, memory consumption, and robustness are common problems with many popular methods for data analysis. As a solution, we present Random Bits Forest (RBF), a classification and regression algorithm that integrates neural networks (for depth), boosting (for width), and random forests (for prediction accuracy). Through a gradient boosting scheme, it first generates and selects ~10,000 small, 3-layer random neural networks. These networks are then fed into a modified random forest algorithm to obtain predictions. Testing with datasets from the UCI (University of California, Irvine) Machine Learning Repository shows that RBF outperforms other popular methods in both accuracy and robustness, especially with large datasets (N > 1000). The algorithm also performed well in testing with an independent data set, a real psoriasis genome-wide association study (GWAS).
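    A loose sketch of the architecture's flavor follows: small random 3-layer networks used as feature generators feeding a forest. This is only an assumption-laden caricature; the real RBF generates and selects its ~10,000 networks through gradient boosting, which is omitted here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def make_random_nets(d, n_nets=200, hidden=8, rng=np.random.default_rng(0)):
    """Random weight pairs for small 3-layer (input-hidden-output) nets."""
    return [(rng.normal(size=(d, hidden)), rng.normal(size=(hidden, 1)))
            for _ in range(n_nets)]

def apply_nets(X, nets):
    """Each net contributes one engineered feature column."""
    return np.hstack([np.tanh(np.maximum(X @ W1, 0.0) @ W2) for W1, W2 in nets])

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nets = make_random_nets(X.shape[1])                  # same nets for train and test
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(apply_nets(X_tr, nets), y_tr)
print("accuracy on random-net features:", rf.score(apply_nets(X_te, nets), y_te))
```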

  18. Effect of damping on excitability of high-order normal modes. [for a large space telescope spacecraft

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Gates, R. M.; Straayer, J. W.

    1975-01-01

    The effect of localized structural damping on the excitability of higher-order large space telescope spacecraft modes is investigated. A preprocessor computer program is developed to incorporate Voigt structural joint damping models in a finite-element dynamic model. A postprocessor computer program is developed to select critical modes for low-frequency attitude control problems and for higher-frequency fine-stabilization problems. The selection is accomplished by ranking the flexible modes based on coefficients for rate gyro, position gyro, and optical sensor, and on image-plane motions due to sinusoidal or random PSD force and torque inputs.

  19. Trial Registration: Understanding and Preventing Reporting Bias in Social Work Research

    ERIC Educational Resources Information Center

    Harrison, Bronwyn A.; Mayo-Wilson, Evan

    2014-01-01

    Randomized controlled trials are considered the gold standard for evaluating social work interventions. However, published reports can systematically overestimate intervention effects when researchers selectively report large and significant findings. Publication bias and other types of reporting biases can be minimized through prospective trial…

  20. Optimizing event selection with the random grid search

    NASA Astrophysics Data System (ADS)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; Stewart, Chip

    2018-07-01

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
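    The core RGS idea — drawing candidate cut points from the signal events themselves rather than from a fixed lattice — fits in a short script. The sketch below is a toy illustration with two synthetic discriminating variables, scored by the usual s/√(s+b) figure of merit:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = rng.normal([2.0, 2.0], 1.0, size=(2000, 2))    # signal events
bkg = rng.normal([0.0, 0.0], 1.0, size=(20000, 2))   # background events

best_cut, best_z = None, -np.inf
for cut in sig[rng.choice(len(sig), size=500, replace=False)]:  # random grid
    s = np.sum((sig > cut).all(axis=1))              # one-sided cuts on both axes
    b = np.sum((bkg > cut).all(axis=1))
    z = s / np.sqrt(s + b) if s + b > 0 else 0.0
    if z > best_z:
        best_cut, best_z = cut, z

print("best cuts:", np.round(best_cut, 2), "significance:", round(float(best_z), 2))
```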

  1. Fluctuating Selection in the Moran

    PubMed Central

    Dean, Antony M.; Lehman, Clarence; Yi, Xiao

    2017-01-01

    Contrary to classical population genetics theory, experiments demonstrate that fluctuating selection can protect a haploid polymorphism in the absence of frequency dependent effects on fitness. Using forward simulations with the Moran model, we confirm our analytical results showing that a fluctuating selection regime, with a mean selection coefficient of zero, promotes polymorphism. We find that increases in heterozygosity over neutral expectations are especially pronounced when fluctuations are rapid, mutation is weak, the population size is large, and the variance in selection is big. Lowering the frequency of fluctuations makes selection more directional, and so heterozygosity declines. We also show that fluctuating selection raises dn/ds ratios for polymorphism, not only by sweeping selected alleles into the population, but also by purging the neutral variants of selected alleles as they undergo repeated bottlenecks. Our analysis shows that randomly fluctuating selection increases the rate of evolution by increasing the probability of fixation. The impact is especially noticeable when the selection is strong and mutation is weak. Simulations show the increase in the rate of evolution declines as the rate of new mutations entering the population increases, an effect attributable to clonal interference. Intriguingly, fluctuating selection increases the dn/ds ratios for divergence more than for polymorphism, a pattern commonly seen in comparative genomics. Our model, which extends the classical neutral model of molecular evolution by incorporating random fluctuations in selection, accommodates a wide variety of observations, both neutral and selected, with economy. PMID:28108586
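    A forward Moran simulation of this kind fits in a few lines. The sketch below is a minimal stand-in for the simulations described (assumptions: two haploid types, no mutation, and a selection coefficient that flips sign every `period` birth-death events so that its mean is zero):

```python
import random

def moran_fluctuating(N=200, s=0.1, period=50, rng=None):
    """Return the number of Moran birth-death events until one allele fixes."""
    rng = rng or random.Random(1)
    i, t = N // 2, 0                        # copies of allele A; event counter
    while 0 < i < N:
        sel = s if (t // period) % 2 == 0 else -s           # fluctuating coefficient
        pA = (1.0 + sel) * i / ((1.0 + sel) * i + (N - i))  # A chosen to reproduce
        i += (rng.random() < pA) - (rng.random() < i / N)   # birth minus death
        t += 1
    return t

times = [moran_fluctuating(rng=random.Random(k)) for k in range(20)]
print("mean events to fixation:", sum(times) // len(times))
```

    Comparing mean persistence times across `period` values reproduces the qualitative effect described above: rapid fluctuations keep both alleles segregating longer than neutral drift alone.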

  2. Comparative Evaluations of Randomly Selected Four Point-of-Care Glucometer Devices in Addis Ababa, Ethiopia.

    PubMed

    Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla

    2018-05-01

    Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of blood sugar levels, particularly in the follow-up of the therapeutic response to high blood sugar. The aim of this study was to evaluate blood glucose test results obtained with four randomly selected glucometers on diabetic and control subjects against standard wet chemistry (hexokinase) methods in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive correlation (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, the linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the measurements of the four PoCG devices correlated poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
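    As a rough guide to what such a check involves, the ISO 15197:2013 system-accuracy criterion is commonly summarized as requiring at least 95% of meter readings to fall within ±15 mg/dl of the reference below 100 mg/dl, and within ±15% at or above it (a paraphrase; consult the standard itself before relying on it). The readings below are fabricated placeholders:

```python
def iso15197_2013_pass(reference, meter):
    """True if >= 95% of paired readings meet the accuracy criterion."""
    within = sum(
        abs(m - r) <= 15 if r < 100 else abs(m - r) <= 0.15 * r
        for r, m in zip(reference, meter)
    )
    return within / len(reference) >= 0.95

reference = [80, 120, 250, 95, 180]   # hexokinase values, mg/dl
meter     = [88, 130, 280, 99, 200]   # paired glucometer values, mg/dl
print(iso15197_2013_pass(reference, meter))   # True: all five pairs within limits
```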

  3. Selecting Statistical Quality Control Procedures for Limiting the Impact of Increases in Analytical Random Error on Patient Safety.

    PubMed

    Yago, Martín

    2017-05-01

    QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error to the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision.
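    For the χ²-type rule, the error-detection probability that such nomograms tabulate is a one-line calculation. A hedged sketch (assuming k control results per QC event with a known stable mean and SD):

```python
from scipy.stats import chi2

def chi2_rule_power(k, f, alpha=0.01):
    """P(rule rejects) when the true SD is f times the stable SD.

    With k controls, the control statistic is chi-square with k degrees of
    freedom under stable operation; inflating the SD by f scales it by f**2.
    """
    limit = chi2.ppf(1 - alpha, df=k)   # control limit at false-rejection rate alpha
    return 1 - chi2.cdf(limit / f**2, df=k)

for f in (1.0, 1.5, 2.0, 3.0):
    print(f"SD inflated x{f}: detection probability {chi2_rule_power(4, f):.3f}")
```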

  4. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences

    PubMed Central

    2014-01-01

    Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292

  5. ALIEN SPECIES IMPORTANCE IN NATIVE VEGETATION ALONG WADEABLE STREAMS, JOHN DAY RIVER BASIN, OREGON, USA

    EPA Science Inventory

    We evaluated the importance of alien species in existing vegetation along wadeable streams of a large, topographically diverse river basin in eastern Oregon, USA; sampling 165 plots (30 × 30 m) across 29 randomly selected 1-km stream reaches. Plots represented eight streamside co...

  6. Molecular Characterization of Cultivated Pawpaw (Asimina triloba) Using RAPD Markers

    Treesearch

    Hongwen Huang; Desmond R. Layne; Thomas L. Kubisiak

    2003-01-01

    Thirty-four extant pawpaw [Asimina triloba (L.) Dunal] cultivars and advanced selections representing a large portion of the gene pool of cultivated pawpaws were investigated using 71 randomly amplified polymorphic DNA (RAPD) markers to establish genetic identities and evaluate genetic relatedness. All 34 cultivated pawpaws were uniquely...

  7. Delivery Systems: "Saber Tooth" Effect in Counseling.

    ERIC Educational Resources Information Center

    Traylor, Elwood B.

    This study reported the role of counselors as perceived by black students in a secondary school. Observational and interview methods were employed to obtain data from 24 black students selected at random from the junior and senior classes of a large metropolitan secondary school. Findings include: counselors were essentially concerned with…

  8. U.S. EPA/ORD LARGE BUILDINGS STUDY: RESULTS OF THE INITIAL SURVEY OF RANDOMLY SELECTED GSA BUILDINGS

    EPA Science Inventory

    The Atmospheric Research and Exposure Assessment Laboratory (AREAL), Office of Research and Development (ORD), U.S. Environmental Protection Agency (EPA), is initiating a research program to collect fundamental information on the key parameters and factors that influence indoor a...

  9. Escitalopram treatment of depression in human immunodeficiency virus/acquired immunodeficiency syndrome: a randomized, double-blind, placebo-controlled study.

    PubMed

    Hoare, Jacqueline; Carey, Paul; Joska, John A; Carrara, Henri; Sorsdahl, Katherine; Stein, Dan J

    2014-02-01

    Depression can be a chronic and impairing illness in people with human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome. Large randomized studies of newer selective serotonin reuptake inhibitors such as escitalopram, examining comparative treatment efficacy and safety in the treatment of depression, have yet to be done in HIV-positive patients. This was a fixed-dose, placebo-controlled, randomized, double-blind study to investigate the efficacy of escitalopram in HIV-seropositive subjects with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, major depressive disorder. One hundred two participants were randomly assigned to either 10 mg of escitalopram or placebo for 6 weeks. An analysis of covariance of the completers found no advantage for escitalopram over placebo on the Montgomery-Asberg Depression Rating Scale (p = 0.93). Sixty-two percent responded to escitalopram and 59% responded to placebo on the Clinical Global Impression Scale. Given the relatively high placebo response, future trials in this area need to be selective in participant recruitment and adequately powered.

  10. Multi-Conformer Ensemble Docking to Difficult Protein Targets

    DOE PAGES

    Ellingson, Sally R.; Miao, Yinglong; Baudry, Jerome; ...

    2014-09-08

    We investigate large-scale ensemble docking using five proteins from the Directory of Useful Decoys (DUD, dud.docking.org) for which docking to crystal structures has proven difficult. Molecular dynamics trajectories are produced for each protein and an ensemble of representative conformational structures extracted from the trajectories. Docking calculations are performed on these selected simulation structures and ensemble-based enrichment factors compared with those obtained using docking in crystal structures of the same protein targets or random selection of compounds. We also found simulation-derived snapshots with improved enrichment factors that increased the chemical diversity of docking hits for four of the five selected proteins. A combination of all the docking results obtained from molecular dynamics simulation followed by selection of top-ranking compounds appears to be an effective strategy for increasing the number and diversity of hits when using docking to screen large libraries of chemicals against difficult protein targets.

  11. The distribution of genetic variance across phenotypic space and the response to selection.

    PubMed

    Blows, Mark W; McGuigan, Katrina

    2015-05-01

    The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.

  12. Vast Portfolio Selection with Gross-exposure Constraints*

    PubMed Central

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that, with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
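    The constrained problem itself is small enough to sketch directly: minimize w'Σw subject to Σw_i = 1 and Σ|w_i| ≤ c, smoothed via the standard split w = u − v with u, v ≥ 0. Synthetic returns stand in for real data, and c is an illustrative choice (c = 1 recovers the no-short-sale portfolio):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = rng.normal(size=(500, 8))              # 500 days of returns for 8 assets
S = np.cov(R, rowvar=False)
n, c = S.shape[0], 1.6                     # gross-exposure bound c

def variance(z):                           # z = [u, v], w = u - v
    w = z[:n] - z[n:]
    return w @ S @ w

cons = [{"type": "eq",   "fun": lambda z: np.sum(z[:n] - z[n:]) - 1.0},
        {"type": "ineq", "fun": lambda z: c - np.sum(z)}]   # sum(u + v) <= c
x0 = np.concatenate([np.full(n, 1.0 / n), np.zeros(n)])     # start at 1/n portfolio
res = minimize(variance, x0, bounds=[(0, None)] * (2 * n),
               constraints=cons, method="SLSQP")
w = res.x[:n] - res.x[n:]
print("weights:", np.round(w, 3), "| gross exposure:", round(float(np.abs(w).sum()), 3))
```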

  13. Optimizing event selection with the random grid search

    DOE PAGES

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen; ...

    2018-02-27

    In this paper, the random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  14. Optimizing Event Selection with the Random Grid Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pushpalatha C.; Prosper, Harrison B.; Sekmen, Sezen

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.

  16. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state according to a probability that is proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection sampling on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
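    The selection step can be sketched compactly. The version below makes illustrative simplifications (groups keyed by power-of-two propensity ranges, group sums recomputed from scratch rather than maintained incrementally as an optimized implementation would) but does return reaction i with probability proportional to its propensity:

```python
import math
import random

def select_reaction(propensities, rng=random.Random(0)):
    """Composition-rejection choice of a reaction index, P(i) ∝ propensities[i]."""
    groups = {}                             # exponent e -> reactions with a in [2**(e-1), 2**e)
    for i, a in enumerate(propensities):
        if a > 0:
            groups.setdefault(math.frexp(a)[1], []).append(i)
    sums = {e: sum(propensities[i] for i in m) for e, m in groups.items()}

    r = rng.random() * sum(sums.values())   # composition: pick a group
    for e, group_sum in sums.items():
        if r >= group_sum:
            r -= group_sum
            continue
        while True:                         # rejection: 2**e bounds all members
            i = rng.choice(groups[e])
            if rng.random() * 2.0 ** e <= propensities[i]:
                return i                    # accepted with probability >= 1/2

print(select_reaction([0.3, 0.5, 1.2, 1.9, 3.5]))
```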

  17. Population size effects in evolutionary dynamics on neutral networks and toy landscapes

    NASA Astrophysics Data System (ADS)

    Sumedha; Martin, Olivier C.; Peliti, Luca

    2007-05-01

    We study the dynamics of a population subject to selective pressures, evolving either on RNA neutral networks or on toy fitness landscapes. We discuss the spread and the neutrality of the population in the steady state. Different limits arise depending on whether selection or random drift is dominant. In the presence of strong drift we show that the observables depend mainly on Mμ, M being the population size and μ the mutation rate, while corrections to this scaling go as 1/M: such corrections can be quite large in the presence of selection if there are barriers in the fitness landscape. Also we find that the convergence to the large-Mμ limit is linear in 1/Mμ. Finally we introduce a protocol that minimizes drift; then observables scale like 1/M rather than 1/(Mμ), allowing one to determine the large-M limit more quickly when μ is small; furthermore the genotypic diversity increases from O(lnM) to O(M).

  18. Academic Specialisation and Returns to Education: Evidence from India

    ERIC Educational Resources Information Center

    Saha, Bibhas; Sensarma, Rudra

    2011-01-01

    We study returns to academic specialisation for Indian corporate sector workers by analysing cross-sectional data on male employees randomly selected from six large firms. Our analysis shows that going to college pays off, as it brings significant incremental returns over and above school education. However, the increase in returns is more…

  19. Serving Bowl Selection Biases the Amount of Food Served

    ERIC Educational Resources Information Center

    van Kleef, Ellen; Shimizu, Mitsuru; Wansink, Brian

    2012-01-01

    Objective: To determine how common serving bowls containing food for multiple persons influence serving behavior and consumption and whether they do so independently of satiation and food evaluation. Methods: In this between-subjects experiment, 68 participants were randomly assigned to either a group serving pasta from a large-sized bowl (6.9-L…

  20. Who Gets Care? Mental Health Service Use Following a School-Based Suicide Prevention Program

    ERIC Educational Resources Information Center

    Kataoka, Sheryl; Stein, Bradley D.; Nadeem, Erum; Wong, Marleen

    2007-01-01

    Objective: To examine symptomatology and mental health service use following students' contact with a large urban school district's suicide prevention program. Method: In 2001 school district staff conducted telephone interviews with 95 randomly selected parents approximately 5 months following their child's contact with the district's suicide…

  1. Health Literacy in College Students

    ERIC Educational Resources Information Center

    Ickes, Melinda J.; Cottrell, Randall

    2010-01-01

    Objective: The purpose of this study was to assess the health literacy levels, and the potential importance of health literacy, of college students. Participants: Courses were randomly selected from all upper level undergraduate courses at a large Research I university to obtain a sample size of N = 399. Methods: During the 2007-2008 school year,…

  2. Fluctuating Selection in the Moran.

    PubMed

    Dean, Antony M; Lehman, Clarence; Yi, Xiao

    2017-03-01

    Contrary to classical population genetics theory, experiments demonstrate that fluctuating selection can protect a haploid polymorphism in the absence of frequency dependent effects on fitness. Using forward simulations with the Moran model, we confirm our analytical results showing that a fluctuating selection regime, with a mean selection coefficient of zero, promotes polymorphism. We find that increases in heterozygosity over neutral expectations are especially pronounced when fluctuations are rapid, mutation is weak, the population size is large, and the variance in selection is big. Lowering the frequency of fluctuations makes selection more directional, and so heterozygosity declines. We also show that fluctuating selection raises dn/ds ratios for polymorphism, not only by sweeping selected alleles into the population, but also by purging the neutral variants of selected alleles as they undergo repeated bottlenecks. Our analysis shows that randomly fluctuating selection increases the rate of evolution by increasing the probability of fixation. The impact is especially noticeable when the selection is strong and mutation is weak. Simulations show the increase in the rate of evolution declines as the rate of new mutations entering the population increases, an effect attributable to clonal interference. Intriguingly, fluctuating selection increases the dn/ds ratios for divergence more than for polymorphism, a pattern commonly seen in comparative genomics. Our model, which extends the classical neutral model of molecular evolution by incorporating random fluctuations in selection, accommodates a wide variety of observations, both neutral and selected, with economy.

  3. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    PubMed

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate the fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is repeated 100 times. Different functional forms of isodars can be investigated by relating animal abundance to differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large geographic extents.
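    The resampling scheme is easy to mock up. Everything in the sketch below is synthetic (random animal locations, a made-up habitat covariate, vertical splits only), but it shows the block-placement and paired-count mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
animals = rng.uniform(0, 100, size=(400, 2))                  # x, y locations
habitat = lambda x, y: (np.sin(x / 15) + np.cos(y / 20)) / 2  # mock covariate

def count_in(x0, y0, x1, y1):
    """Number of animal locations falling inside the rectangle."""
    inside = ((animals[:, 0] >= x0) & (animals[:, 0] < x1) &
              (animals[:, 1] >= y0) & (animals[:, 1] < y1))
    return int(inside.sum())

pairs = []
for _ in range(100):                       # 100 random block placements
    x, y, w, h = rng.uniform(0, 80), rng.uniform(0, 90), 20.0, 10.0
    n1 = count_in(x, y, x + w / 2, y + h)          # left sub-block
    n2 = count_in(x + w / 2, y, x + w, y + h)      # right sub-block
    d_hab = habitat(x + w / 4, y + h / 2) - habitat(x + 3 * w / 4, y + h / 2)
    pairs.append((n1 - n2, d_hab))

d_n, d_h = np.array(pairs).T
print("slope of abundance difference on habitat difference:",
      round(float(np.polyfit(d_h, d_n, 1)[0]), 2))
```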

  4. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR Part 1 (2010-10-01 edition), Telecommunication, Federal Communications Commission, General Practice and Procedure — Random Selection Procedures for Mass Media Services, General Procedures: § 1.1602 Designation for random selection.

  5. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    47 CFR Part 1 (2011-10-01 edition), Telecommunication, Federal Communications Commission, General Practice and Procedure — Random Selection Procedures for Mass Media Services, General Procedures: § 1.1602 Designation for random selection.

  6. [Subjective evaluation of the effect of meteorological factors on the psychophysical status of workers of a large industrial plant].

    PubMed

    Kocur, J; Gruszczyński, W

    1983-01-01

    The authors conducted an epidemiological inquiry in a randomly selected group of 171 workers of a large refining and petrochemical plant. The investigation demonstrated a high meteorotropic sensitivity that varied with length of employment and was higher in women and in those treated for various psychological and somatic disturbances. The high meteorotropic sensitivity of the refinery workers led the authors to hypothesize that chemical pollution of the working environment affects the physiological functions of the organism.

  7. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from among different portfolios containing large numbers of securities. The past records of each security alone do not guarantee the future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of returns of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of returns of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.

  8. Large-scale randomized clinical trials of bioactives and nutrients in relation to human health and disease prevention - Lessons from the VITAL and COSMOS trials.

    PubMed

    Rautiainen, Susanne; Sesso, Howard D; Manson, JoAnn E

    2017-12-29

    Several bioactive compounds and nutrients in foods have physiological properties that are beneficial for human health. While nutrients typically have clear definitions with established levels of recommended intakes, bioactive compounds often lack such a definition. Although a food-based approach is often the optimal approach to ensure adequate intake of bioactives and nutrients, these components are also often produced as dietary supplements. However, many of these supplements are not sufficiently studied and have an unclear role in chronic disease prevention. Randomized trials are considered the gold standard of study designs, but have not been fully applied to understand the effects of bioactives and nutrients. We review the specific role of large-scale trials to test whether bioactives and nutrients have an effect on health outcomes through several crucial components of trial design, including selection of intervention, recruitment, compliance, outcome selection, and interpretation and generalizability of study findings. We will discuss these components in the context of two randomized clinical trials, the VITamin D and OmegA-3 TriaL (VITAL) and the COcoa Supplement and Multivitamin Outcomes Study (COSMOS). We will mainly focus on dietary supplements of bioactives and nutrients while also emphasizing the need for translation and integration with food-based trials that are of vital importance within nutritional research.

  9. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 CFR Part 1 (2010-10-01 edition), Telecommunication, Federal Communications Commission, General Practice and Procedure — Random Selection Procedures for Mass Media Services, General Procedures: § 1.1603 Conduct of random selection.

  10. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    47 CFR Part 1 (2011-10-01 edition), Telecommunication, Federal Communications Commission, General Practice and Procedure — Random Selection Procedures for Mass Media Services, General Procedures: § 1.1603 Conduct of random selection.

  11. High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems

    NASA Astrophysics Data System (ADS)

    Bose, S. K.

    1980-12-01

    Demand-assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low-traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple-point communication system carrying fixed-length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand-assigned mode. Schemes B and C operate all their forward message channels in a demand assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.

  12. Exploring the repetition bias in voluntary task switching.

    PubMed

    Mittelstädt, Victor; Dignath, David; Schmidt-Ott, Magdalena; Kiesel, Andrea

    2018-01-01

    In the voluntary task-switching paradigm, participants are required to select tasks at random. We reasoned that the consistent finding of a repetition bias (i.e., participants repeat tasks more often than expected by chance) reflects reasonable adaptive task selection behavior that balances the goal of random task selection against the goals of minimizing the time and effort of task performance. We conducted two experiments in which participants were provided with a variable amount of preview of the non-chosen task stimuli (i.e., potential switch stimuli). We assumed that switch stimuli would initiate some pre-processing, resulting in improved performance in switch trials. Results showed that reduced switch costs due to extra preview in advance of each trial were accompanied by more task switches. This finding is in line with the characteristics of rational adaptive behavior. However, participants were not biased to switch tasks more often than chance despite large switch benefits. We suggest that participants might avoid the effortful additional control processes that modulate the effects of preview on task performance and task choice.

  13. Two-Way Selection for Growth Rate in the Common Carp (CYPRINUS CARPIO L.)

    PubMed Central

    Moav, R.; Wohlfarth, G.

    1976-01-01

    The domesticated European carp was subjected to a two-way selection for growth rate. Five generations of mass selection for faster growth rate did not yield any response, but subsequent selection between groups (families) resulted in considerable progress while maintaining a large genetic variance. Selection for slow growth rate yielded relatively strong response for the first three generations. Random-bred control lines suffered from strong inbreeding depression and when two lines were crossed, the F1 showed a high degree of heterosis. Selection was performed on pond-raised fish, but growth rate was also tested in cages. A strong pond-cage genetic interaction was found. A theoretical explanation was suggested involving overdominance for fast growth rate and amplification through competition of intra-group but not inter-group variation. PMID:1248737

  14. A Statistical Analysis of Data Used in Critical Decision Making by Secondary School Personnel.

    ERIC Educational Resources Information Center

    Dunn, Charleta J.; Kowitz, Gerald T.

    Guidance decisions depend on the validity of standardized tests and teacher judgment records as measures of student achievement. To test this validity, a sample of 400 high school juniors, randomly selected from two large Gulf Coast area schools, was administered the Iowa Tests of Educational Development. The nine subtest scores and each…

  15. Age structure and growth of California black oak (Quercus kelloggii) in the central Sierra Nevada, California

    Treesearch

    Barrett A. Garrison; Christopher D. Otahal; Matthew L. Triggs

    2002-01-01

    Age structure and growth of California black oak (Quercus kelloggii) were determined from tagged trees at four 26.1-acre study stands in Placer County, California. Stands were dominated by large-diameter (>20 inch dbh) California black oak and ponderosa pine (Pinus ponderosa). Randomly selected trees were tagged in June-August...

  16. The Influence of Age, Sex, and School Size Upon the Development of Formal Operational Thought.

    ERIC Educational Resources Information Center

    Lewis, William Roedolph

    School size, age and sex of students as related to scores on the six Piagetian Developmental Thought Processes Tasks were investigated. Five hundred seventy-four students from seventh through twelfth grades were randomly selected from 25 different schools classified as small, medium, or large. Data were treated through factorial analysis of…

  17. Using Extrinsic Motivation to Influence Student Attitude and Behavior toward State Assessments at an Urban High School

    ERIC Educational Resources Information Center

    Emmett, Joshua

    2013-01-01

    The purpose of this qualitative research study was to discover the influence of a student achievement program implemented at one large urban high school that employed extrinsic motivation to promote student achievement on state assessments. Using organismic integration theory as the theoretical framework, 19 randomly selected students participated…

  18. Self-Perceptions on Sex-Typed Attributes and the Occupational Aspirations and Expectations of High School Females.

    ERIC Educational Resources Information Center

    Davis, Patricia C.; And Others

    Relationships among high school females' self-perceptions on sex-stereotypic attributes and their occupational aspirations and expectations were investigated. Two measures were administered to, and data were collected from, 200 randomly selected females in grade 12 from a large urban school district. Occupational choice was measured by two…

  19. Selecting for Fast Protein-Protein Association As Demonstrated on a Random TEM1 Yeast Library Binding BLIP.

    PubMed

    Cohen-Khait, Ruth; Schreiber, Gideon

    2018-04-27

    Protein-protein interactions mediate the vast majority of cellular processes. Though protein interactions obey the same basic chemical principles within the cell, the in vivo physiological environment may not allow for equilibrium to be reached. Thus, thermodynamic affinity measured in vitro may not provide a complete picture of protein interactions in the biological context. Binding kinetics, composed of the association and dissociation rate constants, are relevant and important in the cell. Therefore, changes in protein-protein interaction kinetics have a significant impact on the in vivo activity of the proteins. The common protocol for the selection of tighter binders from a mutant library selects for protein complexes with slower dissociation rate constants. Here we describe a method to specifically select for variants with faster association rate constants by using pre-equilibrium selection, starting from a large random library. Toward this end, we refine the selection conditions of a TEM1-β-lactamase library against its natural nanomolar-affinity binder, β-lactamase inhibitor protein (BLIP). The optimal selection conditions depend on the ligand concentration and on the incubation time. In addition, we show that a second sort of the library helps to separate signal from noise, resulting in a higher percentage of faster binders in the selected library. Fast-associating protein variants are of particular interest for drug development and other biotechnological applications.
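
    The pre-equilibrium rationale can be made explicit with the standard two-state binding kinetics (our addition; the record itself does not give the equations). For pseudo-first-order association at free ligand concentration L, the fraction of complex formed after incubation time t is

```latex
f(t) = \frac{k_{\mathrm{on}} L}{k_{\mathrm{on}} L + k_{\mathrm{off}}}
       \left(1 - e^{-(k_{\mathrm{on}} L + k_{\mathrm{off}})\, t}\right)
```

    so for short incubations f(t) is approximately k_on L t, and sorting early at low ligand concentration enriches variants by their association rate constant rather than by equilibrium affinity.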

  20. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    PubMed

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken up the challenge of identifying genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for relating large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. They are therefore less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach for selecting and modeling important predictors, but its performance in selecting the important effects in the presence of large numbers of predictors remains to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses, we conclude that for genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy for finding the important genes and interaction patterns involved in complex diseases.
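
    A minimal sketch of the random forests approach the commentary describes, ranking SNPs by variable importance; the genotype data, effect sizes and SNP indices below are synthetic illustrations, not from the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 500, 100                      # 500 subjects, 100 SNPs (illustrative)
X = rng.integers(0, 3, size=(n, p))  # genotypes coded 0/1/2 copies of an allele
# Hypothetical signal: SNPs 3 and 7 raise disease risk additively.
logit = 0.8 * X[:, 3] + 0.8 * X[:, 7] - 2.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # case/control labels

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("top SNPs by importance:", top)  # SNPs 3 and 7 should rank highly
```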

  1. Responses to two-way selection on growth in mass-spawned F1 progeny of Argopecten irradians concentricus (Say)

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Liu, Jin; Li, Yanhong; Zhu, Xiaowen; Liu, Zhigang

    2014-03-01

    In the present study, the effect of one-generation divergent selection on the growth and survival of the bay scallop (Argopecten irradians concentricus) was examined to evaluate the efficacy of a selection program currently being carried out in Beibu Bay in the South China Sea. A total of 146 adult scallops were randomly selected from the same cultured population of A. i. concentricus and divided into two groups by shell length (anterior-posterior measurement): large (4.91-6.02 cm, n=74) and small (3.31-4.18 cm, n=72). At the same time, a control group was also randomly sampled (4.21-4.88 cm, n=80). Mass-spawned F1 progenies from the three size groups were obtained and reared under identical conditions at all growth phases. The effects of two-way (or upward-downward) selection on fertilization rate, hatching rate, survival rate, and daily growth in shell length and body weight were assessed in the three size groups. Results show that significant differences (P<0.01) were found in hatching rate, survival rate and daily growth of F1 progenies, but not in fertilization rate (P>0.05), among the three groups. The hatching rate, survival rate and daily growth of the progeny of large-sized parents were greater than those of the control group (P<0.05), which in turn were larger than those of the small-sized group (P<0.05). Responses to selection by shell length and body weight were 0.32 ± 0.04 cm and 2.18 ± 0.05 g, respectively, for the upward selection, and -0.14 ± 0.03 cm and -2.77 ± 0.06 g, respectively, for the downward selection. The realized heritability estimates for shell length and body weight were 0.38 ± 0.06 and 0.22 ± 0.07 for the upward selection, and 0.24 ± 0.06 and 0.37 ± 0.09 for the downward selection, respectively. The change in growth by bidirectional selection suggests that high genetic variation may be present in the cultured bay scallop population in China.
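
    Realized heritability here is presumably the usual ratio of the response to selection R to the selection differential S (the record does not spell this out):

```latex
h^{2}_{R} = \frac{R}{S}
```

    Under this definition, the reported upward shell-length response R = 0.32 cm together with h² = 0.38 would imply a selection differential of roughly S ≈ 0.32/0.38 ≈ 0.84 cm.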

  2. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a "favorable" model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in such a way that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a "favorable" model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  3. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in such a way that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
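
    The two-stage logic (model the treatment assignment from baseline covariates first, look at outcomes second) can be sketched as follows. This is a generic inverse-probability-weighting illustration with invented data, not the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=(n, 3))              # baseline covariates
t = rng.integers(0, 2, size=n)           # randomized 0/1 treatment
y = 1.0 * t + x @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

# Stage 1: fit the treatment-given-covariates model *without* using y.
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# Stage 2: inverse-probability-weighted difference in mean outcomes.
w = t / e + (1 - t) / (1 - e)
effect = (np.average(y[t == 1], weights=w[t == 1])
          - np.average(y[t == 0], weights=w[t == 0]))
print(f"IPW estimate of treatment effect: {effect:.3f}")  # truth is 1.0
```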

  4. A simple rule for the evolution of cooperation on graphs and social networks.

    PubMed

    Ohtsuki, Hisashi; Hauert, Christoph; Lieberman, Erez; Nowak, Martin A

    2006-05-25

    A fundamental aspect of all biological systems is cooperation. Cooperative interactions are required for many levels of biological organization ranging from single cells to groups of animals. Human society is based to a large extent on mechanisms that promote cooperation. It is well known that in unstructured populations, natural selection favours defectors over cooperators. There is much current interest, however, in studying evolutionary games in structured populations and on graphs. These efforts recognize the fact that who-meets-whom is not random, but determined by spatial relationships or social networks. Here we describe a surprisingly simple rule that is a good approximation for all graphs that we have analysed, including cycles, spatial lattices, random regular graphs, random graphs and scale-free networks: natural selection favours cooperation, if the benefit of the altruistic act, b, divided by the cost, c, exceeds the average number of neighbours, k, which means b/c > k. In this case, cooperation can evolve as a consequence of 'social viscosity' even in the absence of reputation effects or strategic complexity.
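
    The rule itself in display form, with b the benefit to the recipient, c the cost to the altruist, and k the average number of neighbours:

```latex
\frac{b}{c} > k
```

    On a lattice where every individual has k = 4 neighbours, for example, cooperation is favoured only when the benefit exceeds four times the cost.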

  5. Teacher Aides, Class Size and Academic Achievement: A Preliminary Evaluation of Indiana's Prime Time.

    ERIC Educational Resources Information Center

    Lapsley, Daniel K.; Daytner, Katrina M.; Kelly, Ken; Maxwell, Scott E.

    This large-scale evaluation of Indiana's Prime Time, a funding mechanism designed to reduce class size or pupil-teacher ratio (PTR) in grades K-3, examined the academic performance of nearly 11,000 randomly selected third graders on the state-mandated standardized achievement test as a function of class size, PTR, and presence of an instructional…

  6. Obeying the Rules or Gaming the System? Delegating Random Selection for Examinations to Head Teachers within an Accountability System

    ERIC Educational Resources Information Center

    Elstad, Eyvind; Turmo, Are

    2011-01-01

    As education systems around the world move towards increased accountability based on performance measures, it is important to investigate the unintended effects of accountability systems. This article seeks to explore the extent to which head teachers in a large Norwegian municipality may resort to gaming the incentive system to boost their…

  7. Survey Response in a Statewide Social Experiment: Differences in Being Located and Collaborating, by Race and Hispanic Origin

    ERIC Educational Resources Information Center

    Nam, Yunju; Mason, Lisa Reyes; Kim, Youngmi; Clancy, Margaret; Sherraden, Michael

    2013-01-01

    This study examined whether and how survey response differs by race and Hispanic origin, using data from birth certificates and survey administrative data for a large-scale statewide experiment. The sample consisted of mothers of infants selected from Oklahoma birth certificates using a stratified random sampling method (N = 7,111). This study…

  8. Selective modulation of cell response on engineered fractal silicon substrates

    PubMed Central

    Gentile, Francesco; Medda, Rebecca; Cheng, Ling; Battista, Edmondo; Scopelliti, Pasquale E.; Milani, Paolo; Cavalcanti-Adam, Elisabetta A.; Decuzzi, Paolo

    2013-01-01

    A plethora of work has been dedicated to the analysis of cell behavior on substrates with ordered topographical features. However, the natural cell microenvironment is characterized by biomechanical cues organized over multiple scales. Here, randomly rough, self-affine fractal surfaces are generated out of silicon, where roughness Ra and fractal dimension Df are independently controlled. The proliferation rates, the formation of adhesion structures, and the morphology of 3T3 murine fibroblasts are monitored over six different substrates. The proliferation rate is maximized on surfaces with moderate roughness (Ra ~ 40 nm) and large fractal dimension (Df ~ 2.4); whereas adhesion structures are wider and more stable on substrates with higher roughness (Ra ~ 50 nm) and lower fractal dimension (Df ~ 2.2). Higher proliferation occurs on substrates exhibiting densely packed and sharp peaks, whereas more regular ridges favor adhesion. These results suggest that randomly rough topographies can selectively modulate cell behavior. PMID:23492898

  9. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    NASA Astrophysics Data System (ADS)

    Durner, Maximilian; Márton, Zoltán; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  10. Engineering dihydropteroate synthase (DHPS) for efficient expression on M13 phage.

    PubMed

    Brockmann, Eeva-Christine; Lamminmäki, Urpo; Saviranta, Petri

    2005-06-20

    Phage display is a commonly used selection technique in protein engineering, but not all proteins can be expressed on phage. Here, we describe the expression of the cytoplasmic homodimeric enzyme dihydropteroate synthase (DHPS) on M13 phage, established by protein engineering of DHPS. The strategy included replacement of cysteine residues and screening for periplasmic expression, followed by random mutagenesis and phage display selection with a conformation-specific anti-DHPS antibody. Cysteine replacement alone resulted in a 12-fold improvement in phage display of DHPS, but after random mutagenesis and three rounds of phage display selection, the phage display efficiency of the library had improved 280-fold. Most of the selected clones had a common Asp96Asn mutation that was largely responsible for the efficient phage display of DHPS. Asp96Asn acted synergistically with the cysteine-replacing mutations, which were needed to remove the denaturing effect of potential wrong disulfide bridging in phage display. Asp96Asn alone resulted in a 1.8-fold improvement in phage display efficiency, but in combination with the cysteine-replacing mutations, a total 130-fold improvement in phage display efficiency of DHPS was achieved.

  11. Sample size determination for bibliographic retrieval studies

    PubMed Central

    Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian

    2008-01-01

    Background: Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods: The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results: For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion: The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
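
    The sample-size calculation is presumably the usual Wald confidence interval for a proportion (a sketch under that assumption; the paper may use an exact interval). For a target full CI width W around a sensitivity Se,

```latex
W = 2\, z_{1-\alpha/2}\sqrt{\frac{S_e\,(1-S_e)}{n}}
\quad\Longrightarrow\quad
n = \frac{4\, z_{1-\alpha/2}^{2}\, S_e\,(1-S_e)}{W^{2}}
```

    For illustration, Se = 0.95 and W = 0.10 at 95% confidence give n ≈ 4(1.96)²(0.95)(0.05)/0.01 ≈ 73 high-quality articles.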

  12. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
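
    In generic notation (not necessarily the paper's), the class of weighted estimating equations referred to has the form below, where x_i is the observed 0/1 response, P_i(θ) the model probability, r_i the residual measure, and w the weight function; w ≡ 1 recovers the ML score equation:

```latex
\sum_{i=1}^{n} w(r_i)\,
\frac{P_i'(\theta)\,\bigl[x_i - P_i(\theta)\bigr]}{P_i(\theta)\,\bigl[1-P_i(\theta)\bigr]} \;=\; 0
```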

  13. Detecting evolutionary forces in language change.

    PubMed

    Newberry, Mitchell G; Ahern, Christopher A; Clark, Robin; Plotkin, Joshua B

    2017-11-09

    Both language and genes evolve by transmission over generations with opportunity for differential replication of forms. The understanding that gene frequencies change at random by genetic drift, even in the absence of natural selection, was a seminal advance in evolutionary biology. Stochastic drift must also occur in language as a result of randomness in how linguistic forms are copied between speakers. Here we quantify the strength of selection relative to stochastic drift in language evolution. We use time series derived from large corpora of annotated texts dating from the 12th to 21st centuries to analyse three well-known grammatical changes in English: the regularization of past-tense verbs, the introduction of the periphrastic 'do', and variation in verbal negation. We reject stochastic drift in favour of selection in some cases but not in others. In particular, we infer selection towards the irregular forms of some past-tense verbs, which is likely driven by changing frequencies of rhyming patterns over time. We show that stochastic drift is stronger for rare words, which may explain why rare forms are more prone to replacement than common ones. This work provides a method for testing selective theories of language change against a null model and reveals an underappreciated role for stochasticity in language evolution.
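
    The drift null model being tested against can be illustrated with a toy Wright-Fisher simulation; the population size, selection coefficient and generation count below are arbitrary and are not taken from the paper:

```python
import numpy as np

def wright_fisher(n_pop: int, p0: float, s: float, gens: int, rng) -> float:
    """Final frequency of a variant with selection coefficient s.
    s = 0 gives pure stochastic drift (the null model)."""
    p = p0
    for _ in range(gens):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
        p = rng.binomial(n_pop, p_sel) / n_pop          # drift (resampling) step
    return p

rng = np.random.default_rng(42)
drift = [wright_fisher(1000, 0.5, 0.00, 200, rng) for _ in range(5)]
selected = [wright_fisher(1000, 0.5, 0.05, 200, rng) for _ in range(5)]
print("drift only:    ", np.round(drift, 2))      # scattered around 0.5
print("with selection:", np.round(selected, 2))   # pushed toward 1.0
```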

  14. Red-shouldered hawk nesting habitat preference in south Texas

    USGS Publications Warehouse

    Strobel, Bradley N.; Boal, Clint W.

    2010-01-01

    We examined nesting habitat preference by red-shouldered hawks Buteo lineatus using conditional logistic regression on characteristics measured at 27 occupied nest sites and 68 unused sites in 2005–2009 in south Texas. We measured vegetation characteristics of individual trees (nest trees and unused trees) and corresponding 0.04-ha plots. We evaluated the importance of tree and plot characteristics to nesting habitat selection by comparing a priori tree-specific and plot-specific models using Akaike's information criterion. Models with only plot variables carried 14% more weight than models with only center tree variables. The model-averaged odds ratios indicated red-shouldered hawks selected to nest in taller trees and in areas with higher average diameter at breast height than randomly available within the forest stand. Relative to randomly selected areas, each 1-m increase in nest tree height and 1-cm increase in the plot average diameter at breast height increased the probability of selection by 85% and 10%, respectively. Our results indicate that red-shouldered hawks select nesting habitat based on vegetation characteristics of individual trees as well as the 0.04-ha area surrounding the tree. Our results indicate forest management practices resulting in tall forest stands with large average diameter at breast height would benefit red-shouldered hawks in south Texas.
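
    In a conditional logistic model the reported percentage increases correspond to odds ratios of exp(β); strictly speaking they scale the odds of selection rather than the probability. Back-solving the implied coefficients (our arithmetic, not the paper's table):

```latex
\mathrm{OR} = e^{\beta}, \qquad
e^{\beta_{\text{height}}} \approx 1.85 \Rightarrow \beta_{\text{height}} \approx \ln 1.85 \approx 0.62,
\qquad
e^{\beta_{\text{dbh}}} \approx 1.10 \Rightarrow \beta_{\text{dbh}} \approx 0.10
```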

  15. What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?

    PubMed

    Imamura, Teruhiko; Kinugawa, Koichiro

    2018-05-23

    Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with or without sleep-disordered breathing, whereas multicenter randomized controlled trials could not demonstrate such advantages. Given this discrepancy, optimal patient selection and device setting may be key to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. Optimal device settings, with appropriate pressure support applied for an appropriate duration, may also be key. A large-scale prospective trial with optimal patient selection and optimal device setting is warranted.

  16. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    NASA Astrophysics Data System (ADS)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of triggers and the LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses, similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for model performance.
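
    A simplified stand-in for the PRS idea (drawing a training set with a controlled share of occurrence cells) might look like this; the exact weighting used in the paper is not given in the record, so the function and its parameters are illustrative:

```python
import numpy as np

def proportional_random_sample(y: np.ndarray, n_total: int,
                               frac_ones: float, rng) -> np.ndarray:
    """Indices of a training set with a chosen share of landslide (1)
    and non-landslide (0) cells; a simplified stand-in for PRS."""
    ones = np.flatnonzero(y == 1)
    zeros = np.flatnonzero(y == 0)
    n1 = min(int(round(n_total * frac_ones)), ones.size)  # guard small classes
    pick1 = rng.choice(ones, size=n1, replace=False)
    pick0 = rng.choice(zeros, size=n_total - n1, replace=False)
    return np.concatenate([pick1, pick0])

rng = np.random.default_rng(7)
y = (rng.random(10_000) < 0.05).astype(int)   # sparse landslide occurrences
idx = proportional_random_sample(y, 1000, 0.3, rng)
print("share of 1s in training set:", y[idx].mean())
```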

  17. Practical Aspects of Access Network Indoor Extensions Using Multimode Glass and Plastic Optical Fibers

    NASA Astrophysics Data System (ADS)

    Keiser, Gerd; Liu, Hao-Yu; Lu, Shao-Hsi; Devi Pukhrambam, Puspa

    2012-07-01

    Low-cost multimode glass and plastic optical fibers are attractive for high-capacity indoor telecom networks. Many existing buildings already have glass multimode fibers installed for local area network applications. Future indoor applications will use combinations of glass multimode fibers with plastic optical fibers that have low losses in the 850-1310 nm range. This article examines real-world link losses when randomly interconnecting glass and plastic fiber segments having factory-installed connectors. Potential interconnection issues include large variations in connector losses among randomly selected fiber segments, asymmetric link losses in bidirectional links, and variations in bandwidths among different types of fibers.

  18. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
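
    The paper's spherically-random-rotation construction differs in detail, but the core random-projection step can be sketched with a plain Gaussian projection; dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
D, d, n = 10_000, 200, 50                  # ambient dim, target dim, samples
X = rng.normal(size=(n, D))                # high-dimensional feature vectors
R = rng.normal(size=(D, d)) / np.sqrt(d)   # Gaussian random projection matrix
Y = X @ R                                  # features in the random subspace

# Pairwise distances are approximately preserved (Johnson-Lindenstrauss),
# which is what lets policies be computed in the low-dimensional subspace.
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))
```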

  19. Determinants of Habitat Selection by Hatchling Australian Freshwater Crocodiles

    PubMed Central

    Somaweera, Ruchira; Webb, Jonathan K.; Shine, Richard

    2011-01-01

    Animals almost always use habitats non-randomly, but the costs and benefits of using specific habitat types remain unknown for many types of organisms. In a large lake in northwestern Australia (Lake Argyle), most hatchling (<12-month-old) freshwater crocodiles (Crocodylus johnstoni) are found in floating vegetation mats or grassy banks rather than the more widely available open banks. Mean body sizes of young crocodiles did not differ among the three habitat types. We tested four potential explanations for non-random habitat selection: proximity to nesting sites, thermal conditions, food availability, and exposure to predation. The three alternative habitat types did not differ in proximity to nesting sites, or in thermal conditions. Habitats with higher food availability harboured more hatchlings, and feeding rates (obtained by stomach-flushing of recently-captured crocodiles) were highest in such areas. Predation risk may also differ among habitats: we were twice as likely to capture a crocodile after seeing it in open-bank sites than in the other two habitat types. Thus, habitat selection of hatchling crocodiles in this system may be driven both by prey availability and by predation risk. PMID:22163308

  20. The Genetic Architecture of Response to Long-Term Artificial Selection for Oil Concentration in the Maize Kernel

    PubMed Central

    Laurie, Cathy C.; Chasalow, Scott D.; LeDeaux, John R.; McCarroll, Robert; Bush, David; Hauge, Brian; Lai, Chaoqiang; Clark, Darryl; Rocheford, Torbert R.; Dudley, John W.

    2004-01-01

    In one of the longest-running experiments in biology, researchers at the University of Illinois have selected for altered composition of the maize kernel since 1896. Here we use an association study to infer the genetic basis of dramatic changes that occurred in response to selection for changes in oil concentration. The study population was produced by a cross between the high- and low-selection lines at generation 70, followed by 10 generations of random mating and the derivation of 500 lines by selfing. These lines were genotyped for 488 genetic markers and the oil concentration was evaluated in replicated field trials. Three methods of analysis were tested in simulations for ability to detect quantitative trait loci (QTL). The most effective method was model selection in multiple regression. This method detected ∼50 QTL accounting for ∼50% of the genetic variance, suggesting that >50 QTL are involved. The QTL effect estimates are small and largely additive. About 20% of the QTL have negative effects (i.e., not predicted by the parental difference), which is consistent with hitchhiking and small population size during selection. The large number of QTL detected accounts for the smooth and sustained response to selection throughout the twentieth century. PMID:15611182

  1. Engineering Education and Students' Challenges: Strategies toward Enhancing the Educational Environment in Engineering Colleges

    ERIC Educational Resources Information Center

    Alkandari, Nabila Y.

    2014-01-01

    The main goal of this research is to gain an understanding of the challenges which have to be confronted by the engineering students at the College of Engineering and Petroleum at Kuwait University. The college has a large number of students, of which 385 were selected on a random basis for study purposes. The results…

  2. Moderate Deviations for Recursive Stochastic Algorithms

    DTIC Science & Technology

    2014-08-02

    …to (2.14), \( \frac{1}{n}\sum_{i=0}^{n-1} E\bigl[R(\eta^n_i \,\|\, X^n_i)\bigr] \le K E\, a^2(n)\, n \). Because of this the (random) Radon-Nikodym derivatives \( f^n_i(y) = \frac{d\eta^n_i}{dX^n_i}(y) \) are well defined and can be selected in a measurable way. We will control the magnitude of the noise when the Radon-Nikodym derivative is large by bounding \( \frac{1}{n}\sum\ldots \)

  3. The Effectiveness of Alternative Cancer Education Programs in Promoting Knowledge, Attitudes, and Self-Examination Behavior in a Population of College-Aged Men.

    ERIC Educational Resources Information Center

    Marty, Phillip J.; McDermott, Robert J.

    A study determined whether changes in knowledge, selected attitudes, and self-examination behavior occurred among college-aged men after exposure to alternative cancer education programs. College-aged men (n=128) from two large health education classes at a mid-western university were randomly assigned to two treatment groups. The first group…

  4. A "Politically Robust" Experimental Design for Public Policy Evaluation, with Application to the Mexican Universal Health Insurance Program

    ERIC Educational Resources Information Center

    King, Gary; Gakidou, Emmanuela; Ravishankar, Nirmala; Moore, Ryan T.; Lakin, Jason; Vargas, Manett; Tellez-Rojo, Martha Maria; Avila, Juan Eugenio Hernandez; Avila, Mauricio Hernandez; Llamas, Hector Hernandez

    2007-01-01

    We develop an approach to conducting large-scale randomized public policy experiments intended to be more robust to the political interventions that have ruined some or all parts of many similar previous efforts. Our proposed design is insulated from selection bias in some circumstances even if we lose observations; our inferences can still be…

  5. Socio-Cognitive and Nutritional Factors Associated with Body Mass Index in Children and Adolescents: Possibilities for Childhood Obesity Prevention

    ERIC Educational Resources Information Center

    O'Dea, Jennifer A.; Wilson, Rachel

    2006-01-01

    A large national study of schoolchildren aged 6-18 years was conducted to assess nutritional and socio-cognitive factors associated with body mass index (BMI). A questionnaire was used to assess nutritional quality of breakfast, importance of physical activity and food variety score, among 4441 students from randomly selected schools in all states…

  6. Hemicraniectomy for Ischemic and Hemorrhagic Stroke: Facts and Controversies.

    PubMed

    Gupta, Aman; Sattur, Mithun G; Aoun, Rami James N; Krishna, Chandan; Bolton, Patrick B; Chong, Brian W; Demaerschalk, Bart M; Lyons, Mark K; McClendon, Jamal; Patel, Naresh; Sen, Ayan; Swanson, Kristin; Zimmerman, Richard S; Bendok, Bernard R

    2017-07-01

    Malignant large artery stroke is associated with high mortality of 70% to 80% with best medical management. Decompressive craniectomy (DC) is a highly effective tool in reducing mortality. Convincing evidence has accumulated from several randomized trials, in addition to multiple retrospective studies, that demonstrate not only survival benefit but also improved functional outcome with DC in appropriately selected patients. This article explores in detail the evidence for DC, nuances regarding patient selection, and applicability of DC for supratentorial intracerebral hemorrhage and posterior fossa ischemic and hemorrhagic stroke. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.

  8. Reliable Refuge: Two Sky Island Scorpion Species Select Larger, Thermally Stable Retreat Sites.

    PubMed

    Becker, Jamie E; Brown, Christopher A

    2016-01-01

    Sky island scorpions shelter under rocks and other surface debris, but, as with other scorpions, it is unclear whether these species select retreat sites randomly. Furthermore, little is known about the thermal preferences of scorpions, and no research has been done to identify whether reproductive condition might influence retreat site selection. The objectives were to (1) identify physical or thermal characteristics for retreat sites occupied by two sky island scorpions (Vaejovis cashi Graham 2007 and V. electrum Hughes 2011) and those not occupied; (2) determine whether retreat site selection differs between the two study species; and (3) identify whether thermal selection differs between species and between gravid and non-gravid females of the same species. Within each scorpion's habitat, maximum dimensions of rocks along a transect line were measured and compared to occupied rocks to determine whether retreat site selection occurred randomly. Temperature loggers were placed under a subset of occupied and unoccupied rocks for 48 hours to compare the thermal characteristics of these rocks. Thermal gradient trials were conducted before parturition and after dispersal of young in order to identify whether gravidity influences thermal preference. Vaejovis cashi and V. electrum both selected larger retreat sites that had more stable thermal profiles. Neither species appeared to have thermal preferences influenced by reproductive condition. However, while thermal selection did not differ among non-gravid individuals, gravid V. electrum selected warmer temperatures than its gravid congener. Sky island scorpions appear to select large retreat sites to maintain thermal stability, although biotic factors (e.g., competition) could also be involved in this choice. Future studies should focus on identifying the various biotic or abiotic factors that could influence retreat site selection in scorpions, as well as determining whether reproductive condition affects thermal selection in other arachnids.

  9. Frequency of RNA–RNA interaction in a model of the RNA World

    PubMed Central

    STRIGGLES, JOHN C.; MARTIN, MATTHEW B.; SCHMIDT, FRANCIS J.

    2006-01-01

    The RNA World model for prebiotic evolution posits the selection of catalytic/template RNAs from random populations. The mechanisms by which these random populations could be generated de novo are unclear. Non-enzymatic and RNA-catalyzed nucleic acid polymerizations are poorly processive, which means that the resulting short-chain RNA population could contain only limited diversity. Nonreciprocal recombination of smaller RNAs provides an alternative mechanism for the assembly of larger species with concomitantly greater structural diversity; however, the frequency of any specific recombination event in a random RNA population is limited by the low probability of an encounter between any two given molecules. This low probability could be overcome if the molecules capable of productive recombination were redundant, with many nonhomologous but functionally equivalent RNAs being present in a random population. Here we report fluctuation experiments to estimate the redundancy of the set of RNAs in a population of random sequences that are capable of non-Watson-Crick interaction with another RNA. Parallel SELEX experiments showed that at least one in 10^6 random 20-mers binds to the P5.1 stem-loop of Bacillus subtilis RNase P RNA with affinities equal to that of its naturally occurring partner. This high frequency predicts that a single RNA in an RNA World would encounter multiple interacting RNAs within its lifetime, supporting recombination as a plausible mechanism for prebiotic RNA evolution. The large number of equivalent species implies that the selection of any single interacting species in the RNA World would be a contingent event, i.e., one resulting from historical accident. PMID:16495233
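
    The implied redundancy can be made explicit. Assuming the observed molecular frequency translates directly to sequence space (our arithmetic, not the paper's):

```latex
4^{20} \approx 1.1\times10^{12}\ \text{possible 20-mers},\qquad
10^{-6}\times 4^{20} \approx 1.1\times10^{6}\ \text{distinct P5.1-binding sequences}
```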

  10. Genetic association of marbling score with intragenic nucleotide variants at selection signals of the bovine genome.

    PubMed

    Ryu, J; Lee, C

    2016-04-01

    Selection signals of Korean cattle might be attributed largely to artificial selection for meat quality. Rapidly increased intragenic markers of newly annotated genes in the bovine genome would help overcome limited findings of genetic markers associated with meat quality at the selection signals in a previous study. The present study examined genetic associations of marbling score (MS) with intragenic nucleotide variants at selection signals of Korean cattle. A total of 39 092 nucleotide variants of 407 Korean cattle were utilized in the association analysis. A total of 129 variants were selected within newly annotated genes in the bovine genome. Their genetic associations were analyzed using the mixed model with random polygenic effects based on identical-by-state genetic relationships among animals in order to control for spurious associations produced by population structure. Genetic associations of MS were found (P < 3.88 × 10^-4) with six intragenic nucleotide variants on bovine autosomes 3 (cache domain containing 1, CACHD1), 5 (like-glycosyltransferase, LARGE), 16 (cell division cycle 42 binding protein kinase alpha, CDC42BPA) and 21 (snurportin 1, SNUPN; protein tyrosine phosphatase, non-receptor type 9, PTPN9; chondroitin sulfate proteoglycan 4, CSPG4). In particular, the genetic associations with CDC42BPA and LARGE were confirmed using an independent data set of Korean cattle. The results implied that allele frequencies of functional variants and their proximity variants have been augmented by directional selection for greater MS and remain selection signals in the bovine genome. Further studies of fine mapping would be useful to incorporate favorable alleles in marker-assisted selection for MS of Korean cattle.

  11. Clustering of galaxies near damped Lyman-alpha systems with ⟨z⟩ = 2.6

    NASA Technical Reports Server (NTRS)

    Wolfe, A. M.

    1993-01-01

    The galaxy two-point correlation function, ξ, at ⟨z⟩ = 2.6 is determined by comparing the number of Ly-alpha-emitting galaxies in narrowband CCD fields selected for the presence of damped Ly-alpha absorption to their number in randomly selected control fields. Comparisons between the presented determination of ⟨ξ⟩, a density-weighted volume average of ξ, and model predictions for ⟨ξ⟩ at large redshifts show that models in which the clustering pattern is fixed in proper coordinates are highly unlikely, while better agreement is obtained if the clustering pattern is fixed in comoving coordinates. Therefore, clustering of Ly-alpha-emitting galaxies around damped Ly-alpha systems at large redshifts is strong. It is concluded that the faint blue galaxies are drawn from a parent population different from normal galaxies, the presumed offspring of damped Ly-alpha systems.

  12. Prediction of large negative shaded-side spacecraft potentials

    NASA Technical Reports Server (NTRS)

    Prokopenko, S. M. L.; Laframboise, J. G.

    1977-01-01

    A calculation by Knott, for the floating potential of a spherically symmetric synchronous-altitude satellite in eclipse, was adapted to provide simple calculations of upper bounds on negative potentials which may be achieved by electrically isolated shaded surfaces on spacecraft in sunlight. Large (approximately 60 percent) increases in predicted negative shaded-side potentials are obtained. To investigate effective potential barrier or angular momentum selection effects due to the presence of less negative sunlit-side or adjacent surface potentials, these expressions were replaced by the ion random current, which is a lower bound for convex surfaces when such effects become very severe. Further large increases in predicted negative potentials were obtained, amounting to a doubling in some cases.

  13. Lifestyle and Metformin Treatment Favorably Influence Lipoprotein Subfraction Distribution in the Diabetes Prevention Program

    PubMed Central

    Temprosa, M.; Otvos, J.; Brunzell, J.; Marcovina, S.; Mather, K.; Arakaki, R.; Watson, K.; Horton, E.; Barrett-Connor, E.

    2013-01-01

    Context: Although intensive lifestyle change (ILS) and metformin reduce diabetes incidence in subjects with impaired glucose tolerance (IGT), their effects on lipoprotein subfractions have not been studied. Objective: The objective of the study was to characterize the effects of ILS and metformin vs placebo interventions on lipoprotein subfractions in the Diabetes Prevention Program. Design: This was a randomized clinical trial, testing the effects of ILS, metformin, and placebo on diabetes development in subjects with IGT. Participants: Selected individuals with IGT randomized in the Diabetes Prevention Program participated in the study. Interventions: Interventions included randomization to metformin 850 mg or placebo twice daily or ILS aimed at a 7% weight loss using a low-fat diet with increased physical activity. Main Outcome Measures: Lipoprotein subfraction size, density, and concentration measured by magnetic resonance and density gradient ultracentrifugation at baseline and 1 year were measured. Results: ILS decreased large and buoyant very low-density lipoprotein, small and dense low-density lipoprotein (LDL), and small high-density lipoprotein (HDL) and raised large HDL. Metformin modestly reduced small and dense LDL and raised small and large HDL. Change in insulin resistance largely accounted for the intervention-associated decreases in large very low-density lipoprotein, whereas changes in body mass index (BMI) and adiponectin were strongly associated with changes in LDL. Baseline and a change in adiponectin were related to change in large HDL, and BMI change associated with small HDL change. The effect of metformin to increase small HDL was independent of adiponectin, BMI, and insulin resistance. Conclusion: ILS and metformin treatment have favorable effects on lipoprotein subfractions that are primarily mediated by intervention-related changes in insulin resistance, BMI, and adiponectin. Interventions that slow the development of diabetes may also retard the progression of atherosclerosis. PMID:23979954

  14. COMPARISON OF RANDOM AND SYSTEMATIC SITE SELECTION FOR ASSESSING ATTAINMENT OF AQUATIC LIFE USES IN SEGMENTS OF THE OHIO RIVER

    EPA Science Inventory

    This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...

  15. Question 3: The Worlds of the Prebiotic and Never Born Proteins

    NASA Astrophysics Data System (ADS)

    Chiarabelli, Cristiano; de Lucrezia, Davide

    2007-10-01

    Starting from the statement that no reliable methods are known to produce high-molecular-weight polypeptides under prebiotic conditions, a possible approach, at least to understand the differences between extant proteins and the large number of possible never born proteins, could be biological. Using the phage display method, a large library of totally random amino acid sequences was obtained. Subsequently, different experiments directly assessing the frequency of stable folds were performed, and the interesting results obtained from this new approach are discussed in terms of contingency, contributing to the discussion of the selection mechanism of extant proteins.

  16. Eighty routes to a ribonucleotide world; dispersion and stringency in the decisive selection.

    PubMed

    Yarus, Michael

    2018-05-21

    We examine the initial emergence of genetics; that is, of an inherited chemical capability. The crucial actors are ribonucleotides, occasionally meeting in a prebiotic landscape. Previous work identified six influential variables during such random ribonucleotide pooling. Geochemical pools can be in periodic danger (e.g., from tides) or constant danger (e.g., from unfavorable weather). Such pools receive Gaussian nucleotide amounts sporadically, at random times, or get varying substrates simultaneously. Pools use cross-templated RNA synthesis (5'-5' product from 5'-3' template) or para-templated (5'-5' product from 5'-5' template) synthesis. Pools can undergo mild or strong selection, and be recently initiated (early) or late in age. Considering > 80 combinations of these variables, selection calculations identify a superior route. Most likely, an early, sporadically fed, cross-templating pool in constant danger, receiving ≥ 1 mM nucleotides while under strong selection for a coenzyme-like product will host selection of the first encoded biochemical functions. Predominantly templated products emerge from a critical event, the starting bloc selection, which exploits inevitable differences among early pools. Favorable selection has a simple rationale; it is increased by product dispersion (sd/mean), by selection intensity (mild or strong), or by combining these factors as stringency, reciprocal fraction of pools selected (1/sfsel). To summarize: chance utility, acting via a preference for disperse, templated coenzyme-like dinucleotides, uses stringent starting bloc selection to quickly establish majority encoded/genetic expression. Despite its computational origin, starting bloc selection is largely independent of specialized assumptions. This ribodinucleotide route to inheritance may also have facilitated 5'-3' chemical RNA replication. Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  17. On the role of heat and mass transfer into laser processability during selective laser melting AlSi12 alloy based on a randomly packed powder-bed

    NASA Astrophysics Data System (ADS)

    Wang, Lianfeng; Yan, Biao; Guo, Lijie; Gu, Dongdong

    2018-04-01

    A new transient mesoscopic model with a randomly packed powder bed has been proposed to investigate heat and mass transfer and laser process quality between neighboring tracks during selective laser melting (SLM) of AlSi12 alloy by the finite volume method (FVM), considering the solid/liquid phase transition, variable temperature-dependent properties and interfacial forces. The results revealed that both the operating temperature and the resultant cooling rate were markedly elevated by increasing the laser power. Accordingly, the viscosity of the liquid was significantly reduced at large laser power, and the melt was characterized by a large velocity, which tended to produce more intensive convection within the pool. In this case, sufficient heat and mass transfer occurred at the interface between the previously fabricated tracks and the track currently being built, producing strong spreading between the neighboring tracks and a resultant high-quality surface without obvious porosity. By contrast, the surface quality of SLM-processed components fabricated at relatively low laser power was notably worse, owing to the limited and insufficient heat and mass transfer at the interface of neighboring tracks. Furthermore, the experimental surface morphologies of the top surface were acquired and were in full accordance with the calculated results from the simulation.

  18. Unraveling the non-senescence phenomenon in Hydra.

    PubMed

    Dańko, Maciej J; Kozłowski, Jan; Schaible, Ralf

    2015-10-07

    Unlike other metazoans, Hydra does not experience the distinctive rise in mortality with age known as senescence, which results from an increasing imbalance between cell damage and cell repair. We propose that the Hydra controls damage accumulation mainly through damage-dependent cell selection and cell sloughing. We examine our hypothesis with a model that combines cellular damage with stem cell renewal, differentiation, and elimination. The Hydra individual can be seen as a large single pool of three types of stem cells with some features of differentiated cells. This large stem cell community prevents "cellular damage drift," which is inevitable in complex conglomerate (differentiated) metazoans with numerous and generally isolated pools of stem cells. The process of cellular damage drift is based on changes in the distribution of damage among cells due to random events, and is thus similar to Muller's ratchet in asexual populations. Events in the model that are sources of randomness include budding, cellular death, and cellular damage and repair. Our results suggest that non-senescence is possible only in simple Hydra-like organisms which have a high proportion and number of stem cells, continuous cell divisions, an effective cell selection mechanism, and stem cells with the ability to undertake some roles of differentiated cells. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  20. Voluntary strategy suppresses the positive impact of preferential selection in prisoner’s dilemma

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Lin, Pei-jie; Chen, Ya-shan

    2014-11-01

    Impact of aspiration is ubiquitous in social and biological disciplines. In this work, we explore the impact of such a trait on the voluntary prisoner's dilemma game via a selection parameter w. Setting w=0 recovers the traditional version with random selection. For positive w, the opponent with the higher payoff is selected, while negative w means that the partner with the lower payoff is chosen. We find that for positive w, cooperation is greatly promoted for small b, whereas it is inhibited for large b. For negative w, cooperation is fully restrained, irrespective of the value of b. We conclude that the positive impact of preferential selection is suppressed by the voluntary strategy in the prisoner's dilemma. These observations are supported by the spatial patterns. Our work may shed light on the emergence and persistence of cooperation with voluntary participation in social dilemmas.
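
    The selection rule can be sketched in a few lines. The exponential payoff weighting below is our assumption; the abstract only fixes the limiting behaviors (w=0 uniform, w>0 favoring high-payoff partners, w<0 favoring low-payoff partners).

```python
import numpy as np

rng = np.random.default_rng(2)

def choose_partner(payoffs, w):
    # Probability of picking neighbor i is proportional to exp(w * payoff_i):
    # w = 0 is uniform random selection; w > 0 favors high payoffs; w < 0
    # favors low payoffs. (The exponential weighting is our assumption.)
    weights = np.exp(w * np.asarray(payoffs, dtype=float))
    return rng.choice(len(payoffs), p=weights / weights.sum())

neighbor_payoffs = [0.5, 1.0, 3.0, 0.2]
for w in (-2.0, 0.0, 2.0):
    picks = np.bincount([choose_partner(neighbor_payoffs, w)
                         for _ in range(10_000)], minlength=4)
    print(f"w={w:+.0f}: pick frequencies = {picks / 10_000}")
```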

  1. Random covering of the circle: the configuration-space of the free deposition process

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2003-12-01

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
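
    The configuration probabilities are straightforward to estimate by direct Monte Carlo, which also shows why these are rare (large-deviation) events at fixed density ρ = ns. A minimal sketch with invented densities:

```python
import numpy as np

rng = np.random.default_rng(3)

def config_probs(n, s, trials=20_000):
    # n points uniform on a circle of circumference 1, each carrying a
    # clockwise arc of length s. Covering <=> every gap between adjacent
    # points is <= s; hard-rod (no overlap) <=> every gap is >= s.
    covering = hard_rod = 0
    for _ in range(trials):
        pts = np.sort(rng.random(n))
        gaps = np.diff(np.append(pts, pts[0] + 1.0))  # includes wrap-around
        covering += np.all(gaps <= s)
        hard_rod += np.all(gaps >= s)
    return covering / trials, hard_rod / trials

# Covering is feasible only for rho = n*s >= 1 and hard-rod only for
# rho <= 1; both grow exponentially rare in n at fixed rho.
for n in (8, 16, 32):
    cov, _ = config_probs(n, s=1.5 / n)
    _, hard = config_probs(n, s=0.5 / n)
    print(f"n={n:2d}: P(covering | rho=1.5)={cov:.4f}  "
          f"P(hard-rod | rho=0.5)={hard:.4f}")
```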

  2. Models of Protocellular Structure, Function and Evolution

    NASA Technical Reports Server (NTRS)

    New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.

    2001-01-01

    In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and University of California. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction. A mutagenic approach, in which the sequences of selected molecules are randomly altered, can yield further improvements in performance or alterations of specificities. Unfortunately, the catalytic potential of nucleic acids is rather limited. Proteins are more catalytically capable but cannot be directly amplified. In the new technique, this problem is circumvented by covalently linking each protein of the initial, diverse, pool to the RNA sequence that codes for it. Then, selection is performed on the proteins, but the nucleic acids are replicated. Additional information is contained in the original extended abstract.

  3. Selection of DNA aptamers against Human Cardiac Troponin I for colorimetric sensor based dot blot application.

    PubMed

    Dorraj, Ghamar Soltan; Rassaee, Mohammad Javad; Latifi, Ali Mohammad; Pishgoo, Bahram; Tavallaei, Mahmood

    2015-08-20

    Troponin T and I are ideal markers which are highly sensitive and specific for myocardial injury and have shown better efficacy than earlier markers. Since aptamers are ssDNA or RNA molecules that bind to a wide variety of target molecules, the purpose of this research was to select an aptamer against Human Cardiac Troponin I from a 79-bp single-stranded DNA (ssDNA) random library by systematic evolution of ligands by exponential enrichment (SELEX), based on several rounds of selection and amplification. Human Cardiac Troponin I protein was coated onto the surface of streptavidin magnetic beads to extract specific aptamers from a large and diverse random ssDNA initial oligonucleotide library. As a result, several aptamers were selected and further examined for binding affinity and specificity. Finally, TnIApt 23 showed the best affinity, in the nanomolar range (2.69 nM), toward the target protein. A simple and rapid colorimetric detection assay for Human Cardiac Troponin I using the novel and specific aptamer-AuNPs conjugates, based on a dot blot assay, was developed. The detection limit for this protein using the aptamer-AuNPs-based assay was found to be 5 ng/ml. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Using trading strategies to detect phase transitions in financial markets.

    PubMed

    Forró, Z; Woodard, R; Sornette, D

    2015-04-01

    We show that the log-periodic power law singularity model (LPPLS), a mathematical embodiment of positive feedbacks between agents and of their hierarchical dynamical organization, has a significant predictive power in financial markets. We find that LPPLS-based strategies significantly outperform the randomized ones and that they are robust with respect to a large selection of assets and time periods. The dynamics of prices thus markedly deviate from randomness in certain pockets of predictability that can be associated with bubble market regimes. Our hybrid approach, marrying finance with the trading strategies, and critical phenomena with LPPLS, demonstrates that targeting information related to phase transitions enables the forecast of financial bubbles and crashes punctuating the dynamics of prices.
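
    The abstract does not restate the model, but the standard LPPLS specification from this literature is ln E[p(t)] = A + B*(tc - t)^m + C*(tc - t)^m * cos(omega*ln(tc - t) - phi) for t < tc. A sketch with invented parameter values:

```python
import numpy as np

def lppls_log_price(t, tc, m, omega, A, B, C, phi):
    # Expected log-price under LPPLS; in a bubble regime typically
    # 0 < m < 1 and B < 0 (faster-than-exponential growth toward the
    # critical time tc, decorated by accelerating log-periodic oscillations).
    dt = tc - np.asarray(t, dtype=float)
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

t = np.linspace(0.0, 0.99, 100)  # times before the critical time tc = 1
y = lppls_log_price(t, tc=1.0, m=0.5, omega=8.0, A=1.0, B=-0.6, C=0.05, phi=0.0)
print(f"log-price rises from {y[0]:.3f} to {y[-1]:.3f} approaching tc")
```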

  5. Comparative effectiveness research in cancer with observational data.

    PubMed

    Giordano, Sharon H

    2015-01-01

    Observational studies are increasingly being used for comparative effectiveness research. These studies can have the greatest impact when randomized trials are not feasible or when randomized studies have not included the population or outcomes of interest. However, careful attention must be paid to study design to minimize the likelihood of selection biases. Analytic techniques, such as multivariable regression modeling, propensity score analysis, and instrumental variable analysis, can also be used to help address confounding. Oncology has many existing large and clinically rich observational databases that can be used for comparative effectiveness research. With careful study design, observational studies can produce valid results to assess the benefits and harms of a treatment or intervention in representative real-world populations.
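
    Of the techniques listed, propensity score weighting is compact enough to demonstrate end to end. The sketch below is a generic illustration on simulated data, not any particular study: a confounder drives both treatment and outcome, so the naive comparison is biased, while inverse-probability weighting approximately recovers the true effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Simulated observational data: sicker patients (higher x) receive treatment
# more often AND have worse outcomes, confounding the naive comparison.
n = 20_000
x = rng.normal(size=n)
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-1.5 * x))
y = 1.0 * treated - 0.8 * x + rng.normal(size=n)  # true effect = 1.0
print("naive difference:", y[treated].mean() - y[~treated].mean())

# Inverse-probability-of-treatment weighting via an estimated propensity score.
ps = (LogisticRegression().fit(x.reshape(-1, 1), treated)
      .predict_proba(x.reshape(-1, 1))[:, 1])
w = np.where(treated, 1.0 / ps, 1.0 / (1.0 - ps))
ipw = (np.average(y[treated], weights=w[treated])
       - np.average(y[~treated], weights=w[~treated]))
print("IPW-adjusted difference:", ipw)  # close to 1.0
```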

  6. Using trading strategies to detect phase transitions in financial markets

    NASA Astrophysics Data System (ADS)

    Forró, Z.; Woodard, R.; Sornette, D.

    2015-04-01

    We show that the log-periodic power law singularity model (LPPLS), a mathematical embodiment of positive feedbacks between agents and of their hierarchical dynamical organization, has a significant predictive power in financial markets. We find that LPPLS-based strategies significantly outperform the randomized ones and that they are robust with respect to a large selection of assets and time periods. The dynamics of prices thus markedly deviate from randomness in certain pockets of predictability that can be associated with bubble market regimes. Our hybrid approach, marrying finance with the trading strategies, and critical phenomena with LPPLS, demonstrates that targeting information related to phase transitions enables the forecast of financial bubbles and crashes punctuating the dynamics of prices.

  7. Selective Cannabinoids for Chronic Neuropathic Pain: A Systematic Review and Meta-analysis.

    PubMed

    Meng, Howard; Johnston, Bradley; Englesakis, Marina; Moulin, Dwight E; Bhatia, Anuj

    2017-11-01

    There is a lack of consensus on the role of selective cannabinoids for the treatment of neuropathic pain (NP). Guidelines from national and international pain societies have provided contradictory recommendations. The primary objective of this systematic review and meta-analysis (SR-MA) was to determine the analgesic efficacy and safety of selective cannabinoids compared to conventional management or placebo for chronic NP. We reviewed randomized controlled trials that compared selective cannabinoids (dronabinol, nabilone, nabiximols) with conventional treatments (eg, pharmacotherapy, physical therapy, or a combination of these) or placebo in patients with chronic NP, because patients with NP may be on any of these therapies or none if all standard treatments have failed to provide analgesia and/or if these treatments have been associated with adverse effects. MEDLINE, EMBASE, and other major databases up to March 11, 2016, were searched. Data on scores of numerical rating scale for NP and its subtypes, central and peripheral, were meta-analyzed. The certainty of evidence was classified using the Grade of Recommendations Assessment, Development, and Evaluation approach. Eleven randomized controlled trials including 1219 patients (614 in selective cannabinoid and 605 in comparator groups) were included in this SR-MA. There was variability among the studies in quality of reporting, etiology of NP, and type and dose of selective cannabinoids. Patients who received selective cannabinoids reported a significant, but clinically small, reduction in mean numerical rating scale pain scores (0-10 scale) compared with comparator groups (-0.65 points; 95% confidence interval, -1.06 to -0.23 points; P = .002, I² = 60%; Grade of Recommendations Assessment, Development, and Evaluation: weak recommendation and moderate-quality evidence). Use of selective cannabinoids was also associated with improvements in quality of life and sleep with no major adverse effects. Selective cannabinoids provide a small analgesic benefit in patients with chronic NP. There was a high degree of heterogeneity among publications included in this SR-MA. Well-designed, large, randomized studies are required to better evaluate specific dosage, duration of intervention, and the effect of this intervention on physical and psychologic function.

  8. Use of simulation to compare the performance of minimization with stratified blocked randomization.

    PubMed

    Toorawa, Robert; Adena, Michael; Donovan, Mark; Jones, Steve; Conlon, John

    2009-01-01

    Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before the balance advantage is lost. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright (c) 2008 John Wiley & Sons, Ltd.
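
    A minimal sketch of a Pocock-Simon-style minimization with an assignment probability, in the spirit of the simulations recommended above (the imbalance metric and the p_follow value are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)

def minimization_assign(counts, strata, p_follow=0.8):
    # counts[f][level] holds per-arm tallies for level `level` of factor f.
    # Assign the new patient to the arm minimizing total imbalance over the
    # patient's own factor levels, but follow that choice only with
    # probability p_follow (the injected randomness discussed above).
    imbalance = np.zeros(2)
    for f, level in enumerate(strata):
        for arm in (0, 1):
            trial = counts[f][level].copy()
            trial[arm] += 1
            imbalance[arm] += abs(trial[0] - trial[1])
    best = int(np.argmin(imbalance))
    if imbalance[0] == imbalance[1] or rng.random() > p_follow:
        best = int(rng.random() < 0.5)
    for f, level in enumerate(strata):
        counts[f][level][best] += 1
    return best

# two stratification factors: site (3 levels) and sex (2 levels)
counts = [{lvl: [0, 0] for lvl in range(3)}, {lvl: [0, 0] for lvl in range(2)}]
arms = [minimization_assign(counts, (int(rng.integers(3)), int(rng.integers(2))))
        for _ in range(200)]
print("arm totals:", np.bincount(arms))
```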

  9. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence.2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability.3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed.4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects.5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection.6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  10. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
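
    A quick simulation shows why pooling across individuals can mislead. The sketch below is not a true mixed-model fit; it merely contrasts a pooled logistic regression with per-individual fits when each animal carries its own selection coefficient (a random coefficient). All numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

# Each of 20 animals gets its own selection coefficient:
# population mean 1.0, between-individual sd 1.5 (so signs can even flip).
betas = rng.normal(1.0, 1.5, size=20)
X, y, ids = [], [], []
for i, b in enumerate(betas):
    xi = rng.normal(size=500)                    # resource covariate
    pi = 1.0 / (1.0 + np.exp(-(b * xi - 0.5)))   # P(use | available)
    X.append(xi)
    y.append(rng.random(500) < pi)
    ids.append(np.full(500, i))
X, y, ids = np.concatenate(X), np.concatenate(y), np.concatenate(ids)

pooled = LogisticRegression().fit(X.reshape(-1, 1), y).coef_[0, 0]
per_animal = [LogisticRegression()
              .fit(X[ids == i].reshape(-1, 1), y[ids == i]).coef_[0, 0]
              for i in range(20)]
print(f"pooled slope = {pooled:.2f}; per-animal slopes: "
      f"mean = {np.mean(per_animal):.2f}, sd = {np.std(per_animal):.2f}")
```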

  11. A sampling design framework for monitoring secretive marshbirds

    USGS Publications Warehouse

    Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.

    2009-01-01

    A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e., wetlands) potentially used by marshbirds. Selection protocols would be implemented within large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized, spatially balanced procedure. For analysis, the use of a variety of methods is encouraged as a means of increasing confidence in the conclusions reached. Additional effort will be required to work out details and implement the plan.
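
    The two-stage design can be sketched directly. The plan specifies a randomized, spatially balanced procedure (such as GRTS) at the second stage; the simple random sampling below is only a placeholder, and all counts are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stage 1: systematic sample of primary sampling units (townships).
n_psu, step = 400, 20
start = int(rng.integers(step))
psu_sample = np.arange(start, n_psu, step)  # every 20th township

# Stage 2: sample of secondary units (wetlands) within each selected PSU.
wetlands_per_psu = rng.poisson(12, size=n_psu)  # invented wetland counts
sample = {int(p): sorted(rng.choice(int(wetlands_per_psu[p]),
                                    size=min(3, int(wetlands_per_psu[p])),
                                    replace=False).tolist())
          for p in psu_sample if wetlands_per_psu[p] > 0}
print(f"{len(sample)} townships selected; first entries:",
      dict(list(sample.items())[:3]))
```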

  12. Generalizing Evidence From Randomized Clinical Trials to Target Populations

    PubMed Central

    Cole, Stephen R.; Stuart, Elizabeth A.

    2010-01-01

    Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results. PMID:20547574
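
    Direct standardization, the core of the method, amounts to reweighting stratum-specific effects by the target population's covariate mix. A toy example with invented numbers (the actual method models the determinants of selection rather than using a single stratifier):

```python
# Stratum-specific risk differences observed in the trial (hypothetical),
# for one binary effect modifier:
effect = {"modifier+": 0.10, "modifier-": 0.25}
trial_mix = {"modifier+": 0.40, "modifier-": 0.60}   # prevalence in the trial
target_mix = {"modifier+": 0.70, "modifier-": 0.30}  # prevalence in the target

trial_ate = sum(effect[s] * trial_mix[s] for s in effect)
standardized = sum(effect[s] * target_mix[s] for s in effect)
print(f"trial effect = {trial_ate:.3f}; "
      f"standardized to target population = {standardized:.3f}")
```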

  13. Overlooked Threats to Respondent Driven Sampling Estimators: Peer Recruitment Reality, Degree Measures, and Random Selection Assumption.

    PubMed

    Li, Jianghong; Valente, Thomas W; Shin, Hee-Sung; Weeks, Margaret; Zelenev, Alexei; Moothi, Gayatri; Mosher, Heather; Heimer, Robert; Robles, Eduardo; Palmer, Greg; Obidoa, Chinekwu

    2017-06-28

    Intensive sociometric network data were collected from a typical respondent driven sample (RDS) of 528 people who inject drugs residing in Hartford, Connecticut in 2012-2013. This rich dataset enabled us to analyze a large number of unobserved network nodes and ties for the purpose of assessing common assumptions underlying RDS estimators. Results show that several assumptions central to RDS estimators, such as random selection, enrollment probability proportional to degree, and recruitment occurring over recruiter's network ties, were violated. These problems stem from an overly simplistic conceptualization of peer recruitment processes and dynamics. We found nearly half of participants were recruited via coupon redistribution on the street. Non-uniform patterns occurred in multiple recruitment stages related to both recruiter behavior (choosing and reaching alters, passing coupons, etc.) and recruit behavior (accepting/rejecting coupons, failing to enter study, passing coupons to others). Some factors associated with these patterns were also associated with HIV risk.

  14. Chain Pooling model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2 to the 4th power experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent; (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less; and (3) a security regret strategy to be used in the absence of such prior knowledge.

  15. Controllability of social networks and the strategic use of random information.

    PubMed

    Cremonini, Marco; Casamassima, Francesca

    2017-01-01

    This work is aimed at studying realistic social control strategies for social networks based on the introduction of random information into the state of selected driver agents. Deliberately exposing selected agents to random information is a technique already experimented with in recommender systems and search engines, and represents one of the few options for influencing the behavior of a social context that could be accepted as ethical, could be fully disclosed to members, and does not involve the use of force or deception. Our research is based on a model of knowledge diffusion applied to a time-varying adaptive network and considers two well-known strategies for influencing social contexts: one is the selection of a few influencers whose actions are manipulated in order to drive the whole network to a certain behavior; the other drives the network behavior by acting on the state of a large subset of ordinary, scarcely influential users. The two approaches have been studied in terms of network and diffusion effects. The network effect is analyzed through the changes induced on network average degree and clustering coefficient, while the diffusion effect is based on two ad hoc metrics which are defined to measure the degree of knowledge diffusion and skill level, as well as the polarization of agent interests. The results, obtained through simulations on synthetic networks, show rich dynamics and strong effects on the communication structure and on the distribution of knowledge and skills. These findings support our hypothesis that the strategic use of random information could represent a realistic approach to social network controllability, and that with both strategies, in principle, the control effect could be remarkable.

  16. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    PubMed

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
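
    The instability is easy to reproduce. The sketch below uses the plain Lasso (scikit-learn's LassoCV) rather than SCAD or the Adaptive Lasso (the abstract notes that similar remarks apply to the Lasso), and reruns 10-fold cross-validation with different random splits to show how the number of selected variables varies:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

# Sparse truth with weak signals, loosely in the spirit of SNP studies:
# 10 informative predictors out of 200, low signal-to-noise.
X, y = make_regression(n_samples=300, n_features=200, n_informative=10,
                       noise=25.0, random_state=0)

n_selected = []
for rep in range(20):  # rerun 10-fold CV with a different split each time
    cv = KFold(n_splits=10, shuffle=True, random_state=rep)
    model = LassoCV(cv=cv).fit(X, y)
    n_selected.append(int(np.sum(model.coef_ != 0)))
print("selected-variable counts across 20 CV runs:", sorted(n_selected))
```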

  17. Genetic variation maintained in multilocus models of additive quantitative traits under stabilizing selection.

    PubMed Central

    Bürger, R; Gimelfarb, A

    1999-01-01

    Stabilizing selection for an intermediate optimum is generally considered to deplete genetic variation in quantitative traits. However, conflicting results from various types of models have been obtained. While classical analyses assuming a large number of independent additive loci with individually small effects indicated that no genetic variation is preserved under stabilizing selection, several analyses of two-locus models showed the contrary. We perform a complete analysis of a generalization of Wright's two-locus quadratic-optimum model and investigate numerically the ability of quadratic stabilizing selection to maintain genetic variation in additive quantitative traits controlled by up to five loci. A statistical approach is employed by choosing randomly 4000 parameter sets (allelic effects, recombination rates, and strength of selection) for a given number of loci. For each parameter set we iterate the recursion equations that describe the dynamics of gamete frequencies starting from 20 randomly chosen initial conditions until an equilibrium is reached, record the quantities of interest, and calculate their corresponding mean values. As the number of loci increases from two to five, the fraction of the genome expected to be polymorphic declines surprisingly rapidly, and the loci that are polymorphic increasingly are those with small effects on the trait. As a result, the genetic variance expected to be maintained under stabilizing selection decreases very rapidly with increased number of loci. The equilibrium structure expected under stabilizing selection on an additive trait differs markedly from that expected under selection with no constraints on genotypic fitness values. The expected genetic variance, the expected polymorphic fraction of the genome, as well as other quantities of interest, are only weakly dependent on the selection intensity and the level of recombination. PMID:10353920

  18. Differentiation in Access to, and the Use and Sharing of (Open) Educational Resources among Students and Lecturers at Kenyan Universities

    ERIC Educational Resources Information Center

    Pete, Judith; Mulder, Fred; Neto, Jose Dutra Oliveira

    2017-01-01

    In order to obtain a fair "OER picture" for the Global South a large-scale study has been carried out for a series of countries, including Kenya. In this paper we report on the Kenya study, run at four universities that have been selected with randomly sampled students and lecturers. Empirical data have been generated by the use of a…

  19. The effect of newly induced mutations on the fitness of genotypes and populations of yeast (Saccharomyces cerevisiae).

    PubMed

    Orthen, E; Lange, P; Wöhrmann, K

    1984-12-01

    This paper analyses the fate of artificially induced mutations and their importance to the fitness of populations of the yeast, Saccharomyces cerevisiae, an increasingly important model organism in population genetics. Diploid strains, treated with UV and EMS, were cultured asexually for approximately 540 generations and under conditions where the asexual growth was interrupted by a sexual phase. Growth rates of 100 randomly sampled diploid clones were estimated at the beginning and at the end of the experiment. After the induction of sporulation the growth rates of 100 randomly sampled spores were measured. UV and EMS treatment decreases the average growth rate of the clones significantly but increases the variability in comparison to the untreated control. After selection over approximately 540 generations, variability in growth rates was reduced to that of the untreated control. No increase in mean population fitness was observed. However, the results show that after selection there still exists a large amount of hidden genetic variability in the populations which is revealed when the clones are cultivated in environments other than those in which selection took place. A sexual phase increased the reduction of the induced variability.

  20. Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network

    NASA Astrophysics Data System (ADS)

    Wang, Zhen-yu; Zhang, Li-jie

    2017-10-01

    Measurement error of a sensor can be effectively compensated with prediction. To address the large random drift error of MEMS (Micro Electro Mechanical System) gyroscopes, an improved learning algorithm for Radial Basis Function (RBF) Neural Networks (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then finds candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-7 °/s and a prediction time of 2.4169e-6 s.
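
    The network structure can be sketched compactly: K-means supplies the RBF centers and a least-squares fit supplies the output weights. The sketch below substitutes a single least-squares solve for the paper's OLS refinement of centers, and uses a synthetic drift series in place of measured gyroscope data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)

# Synthetic stand-in for MEMS gyroscope drift: slow sine plus a random walk.
t = np.arange(2000)
x = 0.002 * np.sin(0.01 * t) + 0.0005 * rng.standard_normal(2000).cumsum()

# Predict x[t] from the previous `lag` samples.
lag = 3
X = np.column_stack([x[i:len(x) - lag + i] for i in range(lag)])
y = x[lag:]

# Hidden layer: RBF centers from K-means; width from mean center spacing.
centers = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean([np.linalg.norm(a - b) for a in centers for b in centers]) / 2
Phi = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * sigma**2))

# Output layer: least-squares weights (the paper refines centers with OLS).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rms = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"RMS one-step prediction error: {rms:.2e}")
```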

  1. Types, frequencies, and burden of nonspecific adverse events of drugs: analysis of randomized placebo-controlled clinical trials.

    PubMed

    Mahr, Alfred; Golmard, Clara; Pham, Emilie; Iordache, Laura; Deville, Laure; Faure, Pierre

    2017-07-01

    The few studies analyzing adverse event (AE) data from randomized placebo-controlled clinical trials (RPCCTs) of selected illnesses suggested that a substantial proportion of collected AEs are unrelated to the drug taken. This study analyzed the nonspecific AEs occurring with active-drug exposure in RPCCTs for a large range of medical conditions. Randomized placebo-controlled clinical trials published in five prominent medical journals during 2006-2012 were searched. Only trials that evaluated orally or parenterally administered active drugs versus placebo in a head-to-head setting were selected. For AEs reported from ≥10 RPCCTs, Pearson's correlation coefficients (r) were calculated to determine the relationship between AE rates in placebo and active-drug recipients. Random-effects meta-analyses were used to compute proportions of nonspecific AEs, truncated at a maximum of 100%, in active-drug recipients. We included 231 trials addressing various medical domains or healthy participants. For the 88 analyzed AE variables, AE rates for placebo and active-drug recipients were in general strongly correlated (r > 0.50) or very strongly correlated (r > 0.80). The pooled proportions of nonspecific AEs for the active-drug recipients were 96.8% (95%CI: 95.5-98.1) for any AEs, 100% (97.9-100) for serious AEs, and 77.7% (72.7-83.2) for drug-related AEs. Results were similar for individual medical domains and healthy participants. The pooled proportion of nonspecificity of 82 system organ class and individual AE types ranged from 38% to 100%. The large proportion of nonspecific AEs reported in active-drug recipients of RPCCTs, including serious and drug-related AEs, highlights the limitations of clinical trial data to determine the tolerability of drugs. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Monoamine oxidase inhibitors: current and emerging agents for Parkinson disease.

    PubMed

    Fernandez, Hubert H; Chen, Jack J

    2007-01-01

    Monoamine oxidase type B (MAO-B) is the predominant isoform responsible for the metabolic breakdown of dopamine in the brain. Selective inhibition of brain MAO-B results in elevation of synaptosomal dopamine concentrations. Data have been reported regarding the selective MAO-B inhibitors, rasagiline and selegiline, for the symptomatic treatment of Parkinson disease (PD). Selegiline has demonstrated efficacy as monotherapy in patients with early PD (Deprenyl and Tocopherol Antioxidative Therapy of Parkinsonism study), but evidence of selegiline efficacy as adjunctive treatment in levodopa-treated PD patients with motor fluctuations is equivocal. A new formulation of selegiline (Zydis selegiline) has been evaluated in 2 small, placebo-controlled studies as adjunctive therapy to levodopa. The Zydis formulation allows pregastric absorption of selegiline, minimizing first-pass metabolism, and thereby increasing selegiline bioavailability and reducing the concentration of amphetamine metabolites. Rasagiline is a selective, second-generation, irreversible MAO-B inhibitor, with at least 5 times the potency of selegiline in vitro and in animal models. Rasagiline has demonstrated efficacy in 1 large, randomized, double-blind, placebo-controlled trial (TVP-1012 in Early Monotherapy for Parkinson's Disease Outpatients) as initial monotherapy in patients with early PD, and in 2 large, controlled trials (Parkinson's Rasagiline: Efficacy and Safety in the Treatment of "Off," Lasting Effect in Adjunct Therapy With Rasagiline Given Once Daily) as adjunctive treatment in levodopa-treated PD patients with motor fluctuations. Unlike selegiline, rasagiline is an aminoindan derivative with no amphetamine metabolites. A randomized clinical trial is underway to confirm preclinical and preliminary clinical data suggesting rasagiline has disease-modifying effects.

  3. Temporal variation and scale in movement-based resource selection functions

    USGS Publications Warehouse

    Hooten, M.B.; Hanks, E.M.; Johnson, D.S.; Alldredge, M.W.

    2013-01-01

    A common population characteristic of interest in animal ecology studies pertains to the selection of resources. That is, given the resources available to animals, what do they ultimately choose to use? A variety of statistical approaches have been employed to examine this question and each has advantages and disadvantages with respect to the form of available data and the properties of estimators given model assumptions. A wealth of high resolution telemetry data are now being collected to study animal population movement and space use and these data present both challenges and opportunities for statistical inference. We summarize traditional methods for resource selection and then describe several extensions to deal with measurement uncertainty and an explicit movement process that exists in studies involving high-resolution telemetry data. Our approach uses a correlated random walk movement model to obtain temporally varying use and availability distributions that are employed in a weighted distribution context to estimate selection coefficients. The temporally varying coefficients are then weighted by their contribution to selection and combined to provide inference at the population level. The result is an intuitive and accessible statistical procedure that uses readily available software and is computationally feasible for large datasets. These methods are demonstrated using data collected as part of a large-scale mountain lion monitoring study in Colorado, USA.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ben-Naim, Eli; Krapivsky, Paul

    Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
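
    The merging rule takes only a few lines to simulate. A minimal sketch (system size and event count invented):

```python
import numpy as np

rng = np.random.default_rng(9)

def aggregate_with_choice(n=2000, events=1800):
    # Start from n monomers. Each event picks a target cluster i and two
    # candidates j, k, then merges the target with the LARGER candidate.
    # (Ordinary aggregation would instead merge two random clusters.)
    sizes = [1] * n
    for _ in range(events):
        i, j, k = rng.choice(len(sizes), size=3, replace=False)
        winner = j if sizes[j] >= sizes[k] else k
        sizes[i] += sizes[winner]
        sizes.pop(winner)
    return np.array(sizes)

sizes = aggregate_with_choice()
print(f"{sizes.size} clusters: mean={sizes.mean():.1f}, "
      f"min={sizes.min()}, max={sizes.max()}")
```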

  5. Improvement of Automated POST Case Success Rate Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Zwack, Matthew R.; Dees, Patrick D.

    2017-01-01

    During early conceptual design of complex systems, concept down selection can have a large impact upon program life-cycle cost. Therefore, any concepts selected during early design will inherently commit program costs and affect the overall probability of program success. For this reason it is important to consider as large a design space as possible in order to better inform the down selection process. For conceptual design of launch vehicles, trajectory analysis and optimization often presents the largest obstacle to evaluating large trade spaces. This is due to the sensitivity of the trajectory discipline to changes in all other aspects of the vehicle design. Small deltas in the performance of other subsystems can result in relatively large fluctuations in the ascent trajectory because the solution space is non-linear and multi-modal [1]. In order to help capture large design spaces for new launch vehicles, the authors have performed previous work seeking to automate the execution of the industry-standard tool, Program to Optimize Simulated Trajectories (POST). This work initially focused on implementation of analyst heuristics to enable closure of cases in an automated fashion, with the goal of applying the concepts of design of experiments (DOE) and surrogate modeling to enable near-instantaneous throughput of vehicle cases [2]. Additional work was then completed to improve the DOE process by utilizing a graph-theory-based approach to connect similar design points [3]. The conclusion of the previous work illustrated the utility of the graph theory approach for completing a DOE through POST. However, this approach was still dependent upon the use of random repetitions to generate seed points for the graph. As noted in [3], only 8% of these random repetitions resulted in converged trajectories. This ultimately affects the ability of the random-repetitions method to confidently approach the global optimum for a given vehicle case in a reasonable amount of time. With only an 8% pass rate, tens or hundreds of thousands of repetitions may be needed to be confident that the best repetition is at least close to the global optimum. However, typical design study time constraints require that fewer repetitions be attempted, sometimes resulting in seed points that have only a handful of successful completions. If a small number of successful repetitions are used to generate a seed point, the graph method may inherit some inaccuracies as it chains DOE cases from non-global-optimal seed points. This creates inherent noise in the graph data, which can limit the accuracy of the resulting surrogate models. For this reason, the goal of this work is to improve the seed point generation method and ultimately the accuracy of the resulting POST surrogate model. The work focuses on increasing the case pass rate for seed point generation.

  6. Factors Associated With Time to Site Activation, Randomization, and Enrollment Performance in a Stroke Prevention Trial.

    PubMed

    Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George

    2017-09-01

    Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In the CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time between site selection and authorization to randomize, the time between authorization to randomize and the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, specialty of the principal investigator, and site type. For 147 sites, the median time between site selection to authorization to randomize was 9.9 months (interquartile range, 7.7, 12.4), and factors associated with early site activation were not identified. The median time between authorization to randomize and a randomization was 4.6 months (interquartile range, 2.6, 10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and other factors examined were not significantly associated with time-to-randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with the non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected to the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217. © 2017 American Heart Association, Inc.

  7. Population genetics and molecular evolution of DNA sequences in transposable elements. I. A simulation framework.

    PubMed

    Kijima, T E; Innan, Hideki

    2013-11-01

    A population genetic simulation framework is developed to understand the behavior and molecular evolution of DNA sequences of transposable elements. Our model incorporates random transposition and excision of transposable element (TE) copies, two modes of selection against TEs, and degeneration of transpositional activity by point mutations. We first investigated the relationships between the behavior of the copy number of TEs and these parameters. Our results show that when selection is weak, the genome can maintain a relatively large number of TEs, but most of them are less active. In contrast, with strong selection, the genome can maintain only a limited number of TEs but the proportion of active copies is large. In such a case, there could be substantial fluctuations of the copy number over generations. We also explored how DNA sequences of TEs evolve through the simulations. In general, active copies form clusters around the original sequence, while less active copies have long branches specific to themselves, exhibiting a star-shaped phylogeny. It is demonstrated that the phylogeny of TE sequences could be informative to understand the dynamics of TE evolution.
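
    A single-genome sketch of the copy-number dynamics follows. To keep copy number finite it assumes a copy-number-regulated transposition rate u/(1 + k*n), a standard device but not necessarily the paper's mechanism, with per-copy excision probability v and selection s; stronger selection then maintains fewer copies, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(10)

def te_copy_number(generations=2000, u=0.12, v=0.01, s=0.02, k=0.05):
    # Each copy transposes with probability u/(1 + k*n) and is lost with
    # probability v (excision) + s (selection against insertions).
    # Expected equilibrium: u/(1 + k*n) = v + s, i.e. n* = (u/(v+s) - 1)/k.
    n, history = 1, []
    for _ in range(generations):
        births = rng.binomial(n, u / (1.0 + k * n))
        losses = rng.binomial(n, v + s)
        n = max(n + births - losses, 1)  # keep the lineage seeded
        history.append(n)
    return history

for s in (0.02, 0.05):
    h = te_copy_number(s=s)
    print(f"s={s}: mean copy number over the last 500 generations "
          f"= {np.mean(h[-500:]):.0f}")  # roughly 60 vs 20
```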

  8. Assessing the accuracy and stability of variable selection ...

    EPA Pesticide Factsheets

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
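
    The backwards elimination approach can be sketched generically with scikit-learn (the general recipe, not the paper's exact protocol; data are simulated): repeatedly drop the least important predictor and track out-of-bag accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Simulated good/poor condition data: 8 informative predictors out of 60.
X, y = make_classification(n_samples=1000, n_features=60, n_informative=8,
                           random_state=0)

keep = list(range(X.shape[1]))
trace = []
while len(keep) > 2:
    rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                random_state=0, n_jobs=-1).fit(X[:, keep], y)
    trace.append((len(keep), rf.oob_score_))
    # drop the least important of the currently kept predictors
    keep.pop(int(np.argmin(rf.feature_importances_)))

best_p, best_oob = max(trace, key=lambda t: t[1])
print(f"best OOB accuracy {best_oob:.3f} with {best_p} predictors")
```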

  9. Data-driven confounder selection via Markov and Bayesian networks.

    PubMed

    Häggström, Jenny

    2018-06-01

    To unbiasedly estimate a causal effect on an outcome unconfoundedness is often assumed. If there is sufficient knowledge on the underlying causal structure then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of outcome yields smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.

  10. Levosimendan for Perioperative Cardioprotection: Myth or Reality?

    PubMed

    Santillo, Elpidio; Migale, Monica; Massini, Carlo; Incalzi, Raffaele Antonelli

    2018-03-21

    Levosimendan is a calcium sensitizer drug causing increased contractility in the myocardium and vasodilation in the vascular system. It is mainly used for the therapy of acute decompensated heart failure. Several studies on animals and humans provided evidence of the cardioprotective properties of levosimendan including preconditioning and anti-apoptotic. In view of these favorable effects, levosimendan has been tested in patients undergoing cardiac surgery for the prevention or treatment of low cardiac output syndrome. However, initial positive results from small studies have not been confirmed in three recent large trials. To summarize levosimendan mechanisms of action and clinical use and to review available evidence on its perioperative use in cardiac surgery setting. We searched two electronic medical databases for randomized controlled trials studying levosimendan in cardiac surgery patients, ranging from January 2000 to August 2017. Meta-analyses, consensus documents and retrospective studies were also reviewed. In the selected interval of time, 54 studies on the use of levosimendan in heart surgery have been performed. Early small size studies and meta-analyses have suggested that perioperative levosimendan infusion could diminish mortality and other adverse outcomes (i.e. intensive care unit stay and need for inotropic support). Instead, three recent large randomized controlled trials (LEVO-CTS, CHEETAH and LICORN) showed no significant survival benefits from levosimendan. However, in LEVO-CTS trial, prophylactic levosimendan administration significantly reduced the incidence of low cardiac output syndrome. Based on most recent randomized controlled trials, levosimendan, although effective for the treatment of acute heart failure, can't be recommended as standard therapy for the management of heart surgery patients. Further studies are needed to clarify whether selected subgroups of heart surgery patients may benefit from perioperative levosimendan infusion. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  11. Rationale and design of the Patient Related OuTcomes with Endeavor versus Cypher stenting Trial (PROTECT): randomized controlled trial comparing the incidence of stent thrombosis and clinical events after sirolimus or zotarolimus drug-eluting stent implantation.

    PubMed

    Camenzind, Edoardo; Wijns, William; Mauri, Laura; Boersma, Eric; Parikh, Keyur; Kurowski, Volkhard; Gao, Runlin; Bode, Christoph; Greenwood, John P; Gershlick, Anthony; O'Neill, William; Serruys, Patrick W; Jorissen, Brenda; Steg, P Gabriel

    2009-12-01

    Drug-eluting stents (DES) reduce restenosis rates compared to bare-metal stents. Most trials using DES enrolled selected patient and lesion subtypes, and primary endpoint focused on angiographic metrics or relatively short-term outcomes. When DES are used in broader types of lesions and patients, important differences may emerge in long-term outcomes between stent types, particularly the incidence of late stent thrombosis. PROTECT is a randomized, open-label trial comparing the long-term safety of the zotarolimus-eluting stent and the sirolimus-eluting stent. The trial has enrolled 8,800 patients representative of those seen in routine clinical practice, undergoing elective, unplanned, or emergency procedures in native coronary arteries in 196 centers in 36 countries. Indications for the procedure and selection of target vessel and lesion characteristics were at the operator's discretion. Procedures could be staged, but no more than 4 target lesions could be treated per patient. Duration of dual antiplatelet therapy was prespecified to achieve similar lengths of treatment in both study arms. The shortest predefined duration was 3 months, as per the manufacturer's instructions. The primary outcome measure is the composite rate of definite and probable stent thrombosis at 3 years, centrally adjudicated using Academic Research Consortium definitions. The main secondary end points are 3-year all-cause mortality, cardiac death, large nonfatal myocardial infarction, and all myocardial infarctions. This large, international, randomized, controlled trial will provide important information on comparative rates of stent thrombosis between 2 different DES systems and safety as assessed by patient-relevant long-term clinical outcomes.

  12. ASSORTATIVE MATING CAN IMPEDE OR FACILITATE FIXATION OF UNDERDOMINANT ALLELES

    PubMed Central

    NEWBERRY, MITCHELL G; MCCANDLISH, DAVID M; PLOTKIN, JOSHUA B

    2017-01-01

    Underdominant mutations have fixed between divergent species, yet classical models suggest that rare underdominant alleles are purged quickly except in small or subdivided populations. We predict that underdominant alleles that also influence mate choice, such as those affecting coloration patterns visible to mates and predators alike, can fix more readily. We analyze a mechanistic model of positive assortative mating in which individuals have n chances to sample compatible mates. This one-parameter model naturally spans random mating (n =1) and complete assortment (n → ∞), yet it produces sexual selection whose strength depends non-monotonically on n. This sexual selection interacts with viability selection to either inhibit or facilitate fixation. As mating opportunities increase, underdominant alleles fix as frequently as neutral mutations, even though sexual selection and underdominance independently each suppress rare alleles. This mechanism allows underdominant alleles to fix in large populations and illustrates how life history can affect evolutionary change. PMID:27497738
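
    The mate-sampling mechanism is simple to simulate. In the sketch below a focal individual draws up to n random candidates and accepts the first phenotype match; the fallback to the final draw when none match is our reading of "n chances", and the frequencies are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_mate(focal, pool, n):
    # Up to n draws; take the first candidate sharing the focal phenotype,
    # otherwise settle for the last one drawn. n=1 is random mating;
    # n -> infinity approaches complete assortment.
    for _ in range(n):
        mate = pool[int(rng.integers(len(pool)))]
        if mate == focal:
            return mate
    return mate

pool = ["A"] * 200 + ["a"] * 800  # the "A" phenotype is the rarer type
for n in (1, 3, 10):
    matched = np.mean([sample_mate("A", pool, n) == "A"
                       for _ in range(20_000)])
    print(f"n={n:2d}: P(assortative pairing for the rare type) = {matched:.3f}")
```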

  13. [Therapeutic impact of screening for myocardial ischemia among asymptomatic type 2 diabetic subjects].

    PubMed

    Wallemacq, Caroline M; Scheen, André J

    2008-08-27

    Coronary artery disease is the major cause of mortality in type 2 diabetic subjects. Its early diagnosis, to prevent progression and clinical events, has intuitive appeal. However, the rationale for screening has not been clearly established. Screening should not modify medical therapy, because diabetic subjects already have to be treated according to a secondary prevention strategy. We have no data from randomized trials showing a better outcome after revascularization in this specific population. The question of how to select the high-risk population to be screened remains unanswered. SPECT and stress echocardiography seem valuable for screening but not for risk stratification. A large randomized clinical trial is required to confirm the cost-utility ratio of such screening.

  14. Long-term Use of Opioids for Complex Chronic Pain

    PubMed Central

    Von Korff, Michael R.

    2014-01-01

    Increased opioid prescribing for back pain and other chronic musculoskeletal pain conditions has been accompanied by dramatic increases in prescription opioid addiction and fatal overdose. Opioid-related risks appear to increase with dose. While short-term randomized trials of opioids for chronic pain have found modest analgesic benefits (a one-third reduction in pain intensity on average), the long-term safety and effectiveness of opioids for chronic musculoskeletal pain is unknown. Given the lack of large, long-term randomized trials, recent epidemiologic data suggests the need for caution when considering long-term use of opioids to manage chronic musculoskeletal pain, particularly at higher dosage levels. Principles for achieving more selective and cautious use of opioids for chronic musculoskeletal pain are proposed. PMID:24315147

  15. A Review of Clinical Trials in Spinal Cord Injury including Biomarkers.

    PubMed

    Badhiwala, Jetan H; Wilson, Jefferson R; Kwon, Brian K; Casha, Steve; Fehlings, Michael G

    2018-06-11

    Acute traumatic spinal cord injury (SCI) entered the arena of prospective randomized clinical trials almost 40 years ago, with the undertaking of the National Acute Spinal Cord Study (NASCIS) I trial. Since then, a number of clinical trials have been conducted in the field, spurred by the devastating physical, social, and economic consequences of acute SCI for patients, families, and society at large. Many of these have been controversial and attracted criticism. The current review provides a critical summary of select past and current clinical trials in SCI, focusing in particular on the findings of prospective randomized controlled trials (RCTs), the challenges and barriers encountered, and the valuable lessons learned that can be applied to future trials.

  16. Chlorophyll a and inorganic suspended solids in backwaters of the upper Mississippi River system: Backwater lake effects and their associations with selected environmental predictors

    USGS Publications Warehouse

    Rogala, James T.; Gray, Brian R.

    2006-01-01

    The Long Term Resource Monitoring Program (LTRMP) uses a stratified random sampling design to obtain water quality statistics within selected study reaches of the Upper Mississippi River System (UMRS). LTRMP sampling strata are based on aquatic area types generally found in large rivers (e.g., main channel, side channel, backwater, and impounded areas). For hydrologically well-mixed strata (i.e., main channel), variance associated with spatial scales smaller than the strata scale is a relatively minor issue for many water quality parameters. However, analysis of LTRMP water quality data has shown that within-strata variability at the strata scale is high in off-channel areas (i.e., backwaters). A portion of that variability may be associated with differences among individual backwater lakes (i.e., small and large backwater regions separated by channels) that cumulatively make up the backwater stratum. The objective of the statistical modeling presented here is to determine if differences among backwater lakes account for a large portion of the variance observed in the backwater stratum for selected parameters. If variance associated with backwater lakes is high, then inclusion of backwater lake effects within statistical models is warranted. Further, lakes themselves may represent natural experimental units where associations of interest to management may be estimated.

  17. Large motor units are selectively affected following a stroke.

    PubMed

    Lukács, M; Vécsei, L; Beniczky, S

    2008-11-01

    Previous studies have revealed a loss of functioning motor units in stroke patients. However, it remained unclear whether the motor units are affected randomly or in some specific pattern. We assessed whether there is a selective loss of the large (high recruitment threshold) or the small (low recruitment threshold) motor units following a stroke. Forty-five stroke patients and 40 healthy controls participated in the study. Macro-EMG was recorded from the abductor digiti minimi muscle at two levels of force output (low and high). The median macro motor unit potential (macro-MUP) amplitude on the paretic side was compared with those on the unaffected side and in the controls. In the control group and on the unaffected side, the macro-MUPs were significantly larger at the high force output than at the low one. However, on the paretic side the macro-MUPs at the high force output had the same amplitude as those recorded at the low force output. These changes correlated with the severity of the paresis. Following a stroke, there is a selective functional loss of the large, high-threshold motor units. These changes are related to the severity of the symptoms. Our findings furnish further insight into the pathophysiology of the motor deficit following a stroke.

  18. America Goes to War: Managing the Force During Times of Stress and Uncertainty

    DTIC Science & Technology

    2007-01-01

    among our young people. They recognize the draft as an infringement on their liberty, which it is. To them, it represents a government...attested to its unpopularity. In the most perverse way, the draft was effective in the North, not because it brought in large numbers of people, but...When the cause did not enjoy the full support of the people, as in Vietnam, or the selection appeared to be random or biased with

  19. Selection of core animals in the Algorithm for Proven and Young using a simulation model.

    PubMed

    Bradford, H L; Pocrnić, I; Fragomeni, B O; Lourenco, D A L; Misztal, I

    2017-12-01

    The Algorithm for Proven and Young (APY) enables the implementation of single-step genomic BLUP (ssGBLUP) in large, genotyped populations by separating genotyped animals into core and non-core subsets and creating a computationally efficient inverse for the genomic relationship matrix (G). As APY became the choice for large-scale genomic evaluations in BLUP-based methods, a common question is how to choose the animals in the core subset. We compared several core definitions to answer this question. Simulations comprised a moderately heritable trait for 95,010 animals and 50,000 genotypes for animals across five generations. Genotypes consisted of 25,500 SNP distributed across 15 chromosomes. Genotyping errors and missing pedigree were also mimicked. Core animals were defined based on individual generations, equal representation across generations, and at random. For a sufficiently large core size, core definitions had the same accuracies and biases, even if the core animals had imperfect genotypes. When genotyped animals had unknown parents, accuracy and bias were significantly better (p ≤ .05) for random and across generation core definitions. © 2017 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
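
    The APY construction itself is compact enough to sketch. The toy example below applies the published APY inverse formula with a randomly chosen core; the simulated G, the animal and segment counts, and the ridge term are all assumptions made for illustration. It shows why core choice is flexible once the core spans the effective rank of G.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy genomic relationship matrix with low effective rank, as APY assumes:
# 200 animals whose breeding values span ~60 independent chromosome segments.
n_animals, n_segments = 200, 60
Z = rng.standard_normal((n_animals, n_segments))
G = Z @ Z.T / n_segments + 0.01 * np.eye(n_animals)  # ridge keeps G invertible

core = rng.choice(n_animals, size=80, replace=False)  # random core definition
noncore = np.setdiff1d(np.arange(n_animals), core)

Gcc_inv = np.linalg.inv(G[np.ix_(core, core)])
Gnc = G[np.ix_(noncore, core)]                        # non-core x core block
P = Gnc @ Gcc_inv                                     # regression on the core
# diagonal M: residual variance of each non-core animal given the core
m = np.diag(G)[noncore] - np.einsum('ij,ij->i', P, Gnc)
Minv = np.diag(1.0 / m)

apy_inv = np.block([[Gcc_inv + P.T @ Minv @ P, -P.T @ Minv],
                    [-Minv @ P,                 Minv]])

order = np.concatenate([core, noncore])               # core-first ordering
dense_inv = np.linalg.inv(G[np.ix_(order, order)])
err = np.linalg.norm(apy_inv - dense_inv) / np.linalg.norm(dense_inv)
print(f"relative error of the APY inverse: {err:.3e}")
```

    Because the 80-animal random core exceeds the 60 simulated segments, the approximation error stays small; shrinking the core below the effective rank degrades it regardless of how core animals are picked.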

  20. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  1. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  2. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods, and in particular random forests, are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
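
    As a usage illustration, here is a minimal all-relevant selection run with the BorutaPy implementation (the `Boruta` pip package, assumed to be installed) wrapped around a random forest. The simulated 100 x 500 matrix and effect sizes are assumptions, not the paper's omics data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy  # pip install Boruta

rng = np.random.default_rng(0)
# toy "omics" matrix: 100 samples x 500 features, the first 10 informative
X = rng.standard_normal((100, 500))
beta = np.zeros(500)
beta[:10] = 1.5
y = (X @ beta + rng.standard_normal(100) > 0).astype(int)

rf = RandomForestClassifier(max_depth=5, n_jobs=-1)
selector = BorutaPy(rf, n_estimators='auto', random_state=0)
selector.fit(X, y)  # BorutaPy expects numpy arrays, not DataFrames
print("confirmed features:", np.where(selector.support_)[0])
print("tentative features:", np.where(selector.support_weak_)[0])
```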

  3. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
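
    The paper's method couples progressive sampling with Bayesian optimization over algorithms and hyper-parameters. As a simplified stand-in that shows only the progressive-sampling idea, the sketch below screens random configurations of a single algorithm on successively larger subsamples, discarding the worse half each round; this is a successive-halving scheme, not the authors' Bayesian search, and all sizes are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)

# 16 random hyper-parameter configurations (algorithm fixed to RF for brevity)
configs = [{"n_estimators": int(rng.integers(20, 200)),
            "max_depth": int(rng.integers(2, 12))} for _ in range(16)]

sample_size = 250
while len(configs) > 1 and sample_size <= len(y):
    idx = rng.choice(len(y), sample_size, replace=False)  # progressive sample
    scores = [cross_val_score(RandomForestClassifier(**c, random_state=0),
                              X[idx], y[idx], cv=3).mean() for c in configs]
    keep = np.argsort(scores)[::-1][:len(configs) // 2]   # drop the worse half
    configs = [configs[i] for i in keep]
    sample_size *= 2  # progressively larger samples for the survivors
print("selected configuration:", configs[0])
```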

  4. Chromatic energy filter and characterization of laser-accelerated proton beams for particle therapy

    NASA Astrophysics Data System (ADS)

    Hofmann, Ingo; Meyer-ter-Vehn, Jürgen; Yan, Xueqing; Al-Omari, Husam

    2012-07-01

    The application of laser-accelerated protons or ions for particle therapy has to cope with relatively large energy and angular spreads as well as possibly significant random fluctuations. We suggest a method for combined focusing and energy selection, which is an effective alternative to the commonly considered dispersive energy selection by magnetic dipoles. Our method is based on the chromatic effect of a magnetic solenoid (or any other energy dependent focusing device) in combination with an aperture to select a certain energy width defined by the aperture radius. It is applied to an initial 6D phase space distribution of protons following the simulation output from a Radiation Pressure Acceleration model. Analytical formulas for the selection aperture and chromatic emittance are confirmed by simulation results using the TRACEWIN code. The energy selection is supported by properly placed scattering targets to remove the imprint of the chromatic effect on the beam and to enable well-controlled and shot-to-shot reproducible energy and transverse density profiles.

  5. Comparing Standard and Selective Degradation DNA Extraction Methods: Results from a Field Experiment with Sexual Assault Kits.

    PubMed

    Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina

    2017-01-01

    A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits and maintaining current case work will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual case work from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods could differ by more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing. © 2016 American Academy of Forensic Sciences.

  6. How to select among available options for the treatment of multiple myeloma.

    PubMed

    Harousseau, J L

    2012-09-01

    The introduction of novel agents (thalidomide, bortezomib and lenalidomide) in the frontline therapy of multiple myeloma has markedly improved the outcome both in younger patients who are candidates for high-dose therapy plus autologous stem-cell transplantation (HDT/ASCT) and in elderly patients. In the HDT/ASCT paradigm, novel agents may be used as induction therapy or after HDT/ASCT as consolidation and/or maintenance therapy. It is now possible to achieve up to 70% complete plus very good partial remission after HDT/ASCT and 70% 3-year progression-free survival (PFS). However, long-term non-intensive therapy may also yield high response rates and prolonged PFS. Randomized trials comparing these two strategies are underway. In elderly patients, six randomized studies show the benefit of adding thalidomide to melphalan-prednisone (MP). A large randomized trial has also shown that the combination of bortezomib-MP is superior to MP for all parameters measuring the response and outcome. Finally, the role of maintenance is currently being evaluated, and a randomized trial shows that low-dose lenalidomide maintenance prolongs PFS.

  7. Robustness of optimal random searches in fragmented environments

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Santos, M. C.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-05-01

    The random search problem is a challenging and interdisciplinary topic of research in statistical physics. Realistic searches usually take place in nonuniform heterogeneous distributions of targets, e.g., patchy environments and fragmented habitats in ecological systems. Here we present a comprehensive numerical study of search efficiency in arbitrarily fragmented landscapes with unlimited visits to targets that can only be found within patches. We assume a random walker selecting uniformly distributed turning angles and step lengths from an inverse power-law tailed distribution with exponent μ. Our main finding is that for a large class of fragmented environments the optimal strategy corresponds approximately to the same value μ_opt ≈ 2. Moreover, this exponent is indistinguishable from the well-known exact optimal value μ_opt = 2 for the low-density limit of homogeneously distributed revisitable targets. Surprisingly, the best search strategies do not depend (or depend only weakly) on the specific details of the fragmentation. Finally, we discuss the mechanisms behind this observed robustness and comment on the relevance of our results to both the random search theory in general, as well as specifically to the foraging problem in the biological context.
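
    The step-length distribution is easy to sample by inverse-transform. A minimal sketch (the cutoff l0 and the trajectory length are arbitrary choices) draws power-law steps with exponent μ and uniform turning angles, as in the simulated searches; μ = 2 is the reported optimum.

```python
import numpy as np

def levy_steps(n, mu=2.0, l0=1.0, rng=None):
    """Power-law step lengths p(l) ~ l**(-mu) for l >= l0 (1 < mu <= 3),
    drawn by inverse-transform sampling of the Pareto CDF."""
    rng = rng or np.random.default_rng()
    return l0 * (1.0 - rng.random(n)) ** (-1.0 / (mu - 1.0))

rng = np.random.default_rng(42)
steps = levy_steps(10_000, mu=2.0, rng=rng)          # mu_opt ~ 2 from the study
angles = rng.uniform(0.0, 2.0 * np.pi, steps.size)   # uniform turning angles
xy = np.cumsum(np.column_stack([steps * np.cos(angles),
                                steps * np.sin(angles)]), axis=0)
print("median step:", np.median(steps).round(2), " max step:", steps.max().round(0))
```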

  8. Effects of topology on network evolution

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panos; Cluzel, Philippe

    2006-08-01

    The ubiquity of scale-free topology in nature raises the question of whether this particular network design confers an evolutionary advantage. A series of studies has identified key principles controlling the growth and the dynamics of scale-free networks. Here, we use neuron-based networks of boolean components as a framework for modelling a large class of dynamical behaviours in both natural and artificial systems. Applying a training algorithm, we characterize how networks with distinct topologies evolve towards a pre-established target function through a process of random mutations and selection. We find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. Whereas homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously. Remarkably, this latter property is robust to variations of the degree exponent. In contrast, homogeneous random networks require a specific tuning of their connectivity to optimize their ability to evolve. These results highlight an organizing principle that governs the evolution of complex networks and that can improve the design of engineered systems.

  9. Comparing spatial regression to random forests for large ...

    EPA Pesticide Factsheets

    Environmental data may be “large” due to number of records, number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes into account spatial autocorrelation. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify advantages for each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and poor).
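
    For the first of the two new methods, here is a minimal sketch of a Box-Cox step with scipy. Note that `scipy.stats.boxcox` picks the exponent by maximum likelihood on the covariate alone, a common proxy for the paper's goal of linearizing covariate-response relationships; the simulated covariate and response below are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# toy covariate with a skewed, nonlinear relation to the response
x = rng.lognormal(mean=0.0, sigma=0.8, size=500)
y = 2.0 * np.log(x) + rng.normal(scale=0.5, size=500)

x_t, lam = stats.boxcox(x)  # ML estimate of the Box-Cox exponent
r_raw = np.corrcoef(x, y)[0, 1]
r_t = np.corrcoef(x_t, y)[0, 1]
print(f"lambda = {lam:.2f}; |r| before {abs(r_raw):.2f} vs after {abs(r_t):.2f}")
```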

  10. Composition bias and the origin of ORFan genes

    PubMed Central

    Yomtovian, Inbal; Teerakulkittipong, Nuttinee; Lee, Byungkook; Moult, John; Unger, Ron

    2010-01-01

    Motivation: Intriguingly, sequence analysis of genomes reveals that a large number of genes are unique to each organism. The origin of these genes, termed ORFans, is not known. Here, we explore the origin of ORFan genes by defining a simple measure called ‘composition bias’, based on the deviation of the amino acid composition of a given sequence from the average composition of all proteins of a given genome. Results: For a set of 47 prokaryotic genomes, we show that the amino acid composition bias of real proteins, random ‘proteins’ (created by using the nucleotide frequencies of each genome) and ‘proteins’ translated from intergenic regions are distinct. For ORFans, we observed a correlation between their composition bias and their relative evolutionary age. Recent ORFan proteins have compositions more similar to those of random ‘proteins’, while the compositions of more ancient ORFan proteins are more similar to those of the set of all proteins of the organism. This observation is consistent with an evolutionary scenario wherein ORFan genes emerged and underwent a large number of random mutations and selection, eventually adapting to the composition preference of their organism over time. Contact: ron@biocoml.ls.biu.ac.il Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20231229

  11. Learning From Past Failures of Oral Insulin Trials.

    PubMed

    Michels, Aaron W; Gottlieb, Peter A

    2018-07-01

    Very recently one of the largest type 1 diabetes prevention trials using daily administration of oral insulin or placebo was completed. After 9 years of study enrollment and follow-up, the randomized controlled trial failed to delay the onset of clinical type 1 diabetes, which was the primary end point. The unfortunate outcome follows the previous large-scale trial, the Diabetes Prevention Trial-Type 1 (DPT-1), which again failed to delay diabetes onset with oral insulin or low-dose subcutaneous insulin injections in a randomized controlled trial with relatives at risk for type 1 diabetes. These sobering results raise the important question, "Where does the type 1 diabetes prevention field move next?" In this Perspective, we advocate for a paradigm shift in which smaller mechanistic trials are conducted to define immune mechanisms and potentially identify treatment responders. The stage is set for these interventions in individuals at risk for type 1 diabetes as Type 1 Diabetes TrialNet has identified thousands of relatives with islet autoantibodies and general population screening for type 1 diabetes risk is under way. Mechanistic trials will allow for better trial design and patient selection based upon molecular markers prior to large randomized controlled trials, moving toward a personalized medicine approach for the prevention of type 1 diabetes. © 2018 by the American Diabetes Association.

  12. Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies

    PubMed Central

    Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne

    2014-01-01

    Background: In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings: We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions: These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
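
    The evolvability comparison can be sketched in a few lines. Below, a hypothetical G-matrix is built so that a size axis carries roughly 60% of the genetic variance, as reported, and the genetic variance along a selection gradient orthogonal to that axis is compared with the average over random directions (in the Hansen-Houle sense); all numbers are illustrative, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(2014)

# Toy G-matrix for four morphological traits: a dominant "size" axis
# carries 6 of the 10 total variance units (~60%), as in the abstract.
size_axis = np.ones(4) / 2.0  # unit vector (1,1,1,1)/2
G = 6.0 * np.outer(size_axis, size_axis) + np.diag([1.0, 0.8, 1.2, 1.0])

def evolvability(G, beta):
    """Genetic variance along a unit-length selection gradient."""
    b = beta / np.linalg.norm(beta)
    return b @ G @ b

beta = np.array([1.0, -1.0, 0.0, 0.0])  # assumed selection orthogonal to size
e_sel = evolvability(G, beta)
e_rand = np.mean([evolvability(G, rng.standard_normal(4)) for _ in range(20_000)])
print(f"evolvability along selection: {e_sel:.2f}")
print(f"mean evolvability, random directions: {e_rand:.2f}")  # ~ trace(G)/4
```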

  13. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    PubMed

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy and bias. This modification may be used to speed up the computation of genome-assisted evaluation in large data sets such as those obtained from consortia. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
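
    A stripped-down version of the idea, not the authors' implementation: L2 boosting where each round fits a shallow regression tree to the current residuals using a freshly drawn random subset of markers, with shrinkage on each learner. Data dimensions, tuning values, and the simulated genotypes are toy assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n, p = 1000, 2000  # animals x SNP markers (toy scale)
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
causal = rng.choice(p, 30, replace=False)
y = X[:, causal] @ rng.normal(1.0, 0.3, 30) + rng.normal(0.0, 3.0, n)

def random_boost(X, y, rounds=200, shrink=0.1, m_sub=200):
    """L2 boosting on residuals; each weak learner sees a random marker subset."""
    pred = np.full(y.shape, y.mean())
    ensemble = []
    for _ in range(rounds):
        cols = rng.choice(X.shape[1], m_sub, replace=False)  # random markers
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X[:, cols], y - pred)                       # fit current residuals
        pred += shrink * tree.predict(X[:, cols])
        ensemble.append((cols, tree))
    return ensemble

def boost_predict(ensemble, X, base, shrink=0.1):
    pred = np.full(X.shape[0], base)
    for cols, tree in ensemble:
        pred += shrink * tree.predict(X[:, cols])
    return pred

ens = random_boost(X[:800], y[:800])
test = boost_predict(ens, X[800:], y[:800].mean())
print("test-set correlation:", np.corrcoef(test, y[800:])[0, 1].round(2))
```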

  14. Topology in two dimensions. II - The Abell and ACO cluster catalogues

    NASA Astrophysics Data System (ADS)

    Plionis, Manolis; Valdarnini, Riccardo; Coles, Peter

    1992-09-01

    We apply a method for quantifying the topology of projected galaxy clustering to the Abell and ACO catalogues of rich clusters. We use numerical simulations to quantify the statistical bias involved in using high peaks to define the large-scale structure, and we use the results obtained to correct our observational determinations for this known selection effect and also for possible errors introduced by boundary effects. We find that the Abell cluster sample is consistent with clusters being identified with high peaks of a Gaussian random field, but that the ACO shows a slight meatball shift away from the Gaussian behavior over and above that expected purely from the high-peak selection. The most conservative explanation of this effect is that it is caused by some artefact of the procedure used to select the clusters in the two samples.

  15. Pediatric Academic Productivity: Pediatric Benchmarks for the h- and g-Indices.

    PubMed

    Tschudy, Megan M; Rowe, Tashi L; Dover, George J; Cheng, Tina L

    2016-02-01

    To describe h- and g-index benchmarks in pediatric subspecialties and general academic pediatrics. Academic productivity is measured increasingly through bibliometrics that derive a statistical enumeration of academic output and impact. The h- and g-indices incorporate the number of publications and citations. Benchmarks for pediatrics have not been reported. Thirty programs were selected randomly from pediatric residency programs accredited by the Accreditation Council for Graduate Medical Education. The h- and g-indices of department chairs were calculated. For general academic pediatrics, pediatric gastroenterology, and pediatric nephrology, a random sample of 30 programs with fellowships was selected. Within each program, an MD faculty member from each academic rank was selected randomly. Google Scholar via Harzing's Publish or Perish was used to calculate the h-index, g-index, and total manuscripts. Only peer-reviewed and English language publications were included. For Chairs, calculations from Google Scholar were compared with Scopus. For all specialties, the mean h- and g-indices significantly increased with academic rank (all P < .05), with the greatest h-indices among Chairs. The h- and g-indices were not statistically different between specialty groups of the same rank; however, mean rank h-indices had large SDs. The h-index calculation using different bibliographic databases differed by only ±1. Mean h-indices increased with academic rank and were not significantly different across the pediatric specialties. Benchmarks for h- and g-indices in pediatrics are provided and may be one measure of academic productivity and impact. Copyright © 2016 Elsevier Inc. All rights reserved.
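
    For reference, here are plain-Python versions of the two indices used in the study; the citation counts in the example are made up.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    c = sorted(citations, reverse=True)
    return sum(1 for i, ci in enumerate(c, start=1) if ci >= i)

def g_index(citations):
    """Largest g such that the g most-cited papers total at least g**2 citations."""
    c = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, ci in enumerate(c, start=1):
        total += ci
        if total >= i * i:
            g = i
    return g

cites = [42, 18, 12, 9, 7, 6, 4, 4, 2, 1, 0, 0]
print(h_index(cites), g_index(cites))  # h = 6, g = 10
```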

  16. Color- and motion-specific units in the tectum opticum of goldfish.

    PubMed

    Gruber, Morna; Behrend, Konstantin; Neumeyer, Christa

    2016-01-05

    Extracellular recordings were performed from 69 units at different depths between 50 and [Formula: see text]m below the surface of tectum opticum in goldfish. Using large field stimuli (86° visual angle) of 21 colored HKS-papers we were able to record from 54 color-sensitive units. The colored papers were presented for 5 s each. They were arranged in the sequence of the color circle in humans separated by gray of medium brightness. We found 22 units with best responses between orange, red and pink. About 12 of these red-sensitive units were of the opponent "red-ON/blue-green-OFF" type as found in retinal bipolar- and ganglion cells as well. Most of them were also activated or inhibited by black and/or white. Some units responded specifically to red either with activation or inhibition. 18 units were sensitive to blue and/or green, 10 of them to both colors and most of them to black as well. They were inhibited by red, and belonged to the opponent "blue-green-ON/red-OFF" type. Other units responded more selectively either to blue, to green or to purple. Two units were selectively sensitive to yellow. A total of 15 units were sensitive to motion, stimulated by an eccentrically rotating black and white random dot pattern. Activity of these units was also large when a red-green random dot pattern of high L-cone contrast was used. Activity dropped to zero when the red-green pattern did not modulate the L-cones. None of these motion-selective units responded to any color. The results directly show color-blindness of motion vision, and confirm the hypothesis of separate and parallel processing of "color" and "motion".

  17. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.

  18. Randomization in clinical trials in orthodontics: its significance in research design and methods to achieve it.

    PubMed

    Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore

    2011-12-01

    Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology and presents the main randomization schemes applicable to clinical trials in orthodontics.
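
    A minimal generator for one popular restricted scheme, randomization in permuted blocks, optionally stratified by centre; the centre names, block size, and seeds below are hypothetical.

```python
import random

def permuted_blocks(n_subjects, arms=("A", "B"), block_size=4, seed=0):
    """Allocation list built from randomly permuted balanced blocks."""
    assert block_size % len(arms) == 0, "block size must balance the arms"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # random order within each block
        allocations.extend(block)
    return allocations[:n_subjects]

# stratified randomization: an independent list per stratum (hypothetical centres)
lists = {centre: permuted_blocks(20, seed=i)
         for i, centre in enumerate(("centre-1", "centre-2"))}
print(lists["centre-1"][:8])
```

    In practice such a list would be held by a central service for allocation concealment rather than generated at the trial site.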

  19. Mobile access to virtual randomization for investigator-initiated trials.

    PubMed

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims: Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods: The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results: Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion: Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. Covering 88% of all randomization models that are used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.

  20. Toward Large-Graph Comparison Measures to Understand Internet Topology Dynamics

    DTIC Science & Technology

    2013-09-01

    continuously from randomly selected vantage points in these monitors to destination IP addresses. From each IPv4 /24 prefix on the Internet, a destination is...expected to be more similar. This was verified when the esd and vsd measures applied to this dataset gave a low reading. [Footnotes: An IPv4 address is a 32-bit integer value; /24 is the prefix of the IPv4 network starting at a given address, having 24 bits allocated for the network prefix.] This utility

  1. Regional flood-frequency relations for streams with many years of no flow

    USGS Publications Warehouse

    Hjalmarson, Hjalmar W.; Thomas, Blakemore E.

    1990-01-01

    In the southwestern United States, flood-frequency relations for streams that drain small arid basins are difficult to estimate, largely because of the extreme temporal and spatial variability of floods and the many years of no flow. A method is proposed that is based on the station-year method. The new method produces regional flood-frequency relations using all available annual peak-discharge data. The prediction errors for the relations are directly assessed using randomly selected subsamples of the annual peak discharges.

  2. Complex network structure of musical compositions: Algorithmic generation of appealing music

    NASA Astrophysics Data System (ADS)

    Liu, Xiao Fan; Tse, Chi K.; Small, Michael

    2010-01-01

    In this paper we construct networks for music and attempt to compose music artificially. Networks are constructed with nodes and edges corresponding to musical notes and their co-occurring connections. We analyze classical music from Bach, Mozart, Chopin, as well as other types of music such as Chinese pop music. We observe remarkably similar properties in all networks constructed from the selected compositions. We conjecture that preserving the universal network properties is a necessary step in artificial composition of music. Power-law exponents of node degree, node strength and/or edge weight distributions, mean degrees, clustering coefficients, mean geodesic distances, etc. are reported. With the network constructed, music can be composed artificially using a controlled random walk algorithm, which begins with a randomly chosen note and selects the subsequent notes according to a simple set of rules that compares the weights of the edges, weights of the nodes, and/or the degrees of nodes. By generating a large number of compositions, we find that this algorithm generates music which has the necessary qualities to be subjectively judged as appealing.
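
    A miniature version of the composition step: a controlled random walk that picks the next note with probability proportional to edge weight. The toy note network below is invented, and the paper's fuller rule set (which also compares node weights and degrees) is omitted for brevity.

```python
import random

# toy note-transition network: edge weights are co-occurrence counts
graph = {
    "C4": {"E4": 5, "G4": 3, "D4": 2},
    "D4": {"C4": 3, "F4": 2},
    "E4": {"G4": 4, "C4": 2},
    "F4": {"E4": 3, "D4": 1},
    "G4": {"C4": 6, "E4": 1},
}

def compose(graph, length=16, seed=8):
    """Controlled random walk: start at a random note, then choose each
    subsequent note with probability proportional to edge weight."""
    rng = random.Random(seed)
    note = rng.choice(list(graph))
    melody = [note]
    for _ in range(length - 1):
        nbrs, weights = zip(*graph[note].items())
        note = rng.choices(nbrs, weights=weights, k=1)[0]
        melody.append(note)
    return melody

print(" ".join(compose(graph)))
```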

  3. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    PubMed

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.

  4. Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.

    1976-01-01

    The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.
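
    The Monte Carlo workflow itself is simple to illustrate, though the sketch below replaces the comprehensive internal-ballistics model with a deliberately crude stand-in: relative thrust is taken as burn rate times burning surface area, with assumed 1% and 0.5% variabilities that are not the Titan IIIC tolerances.

```python
import numpy as np

rng = np.random.default_rng(1976)

def motor_thrust(rng):
    """Toy internal-ballistics stand-in: thrust ~ burn rate x burning area."""
    burn_rate = rng.normal(1.00, 0.01)   # relative burn rate, 1% sigma (assumed)
    burn_area = rng.normal(1.00, 0.005)  # relative burning surface, 0.5% sigma
    return burn_rate * burn_area         # relative thrust

pairs = 10_000  # randomly sampled motor pairs
imbalance = np.array([abs(motor_thrust(rng) - motor_thrust(rng))
                      for _ in range(pairs)])
print(f"mean |imbalance| = {imbalance.mean():.4f}  "
      f"sd = {imbalance.std():.4f}  max = {imbalance.max():.4f}")
```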

  5. Prevalence and risk factors for Maedi-Visna in sheep farms in Mecklenburg-Western-Pomerania.

    PubMed

    Hüttner, Klim; Seelmann, Matthias; Feldhusen, Frerk

    2010-01-01

    Despite indications of a considerable spread of Maedi-Visna among sheep flocks in Germany, prevalence studies of this important infection are scarce. Prior to any health schemes and guidelines, knowledge about the regional disease distribution is essential. Depending upon herd size, 70 farms were randomly selected, of which 41 cooperated. A total of 2229 blood samples were taken at random and serologically examined. For assessment of selected farm characteristics, a questionnaire exercise was conducted at all farms involved. The average herd prevalence is 51.2%; the within-herd prevalence is 28.8%. In the univariate analysis of risk factors, small (10-100 sheep) and large (> 250 sheep) farms are more often MVV-affected than medium-sized farms. The average stable and pasture space per sheep is larger at non-infected than at infected farms. Owners' judgement of general herd health turns out to be better at non-infected than at infected farms. Taking infected farms only, the risk of a within-herd prevalence above 20% is significantly higher in crossbred than in purebred flocks.

  6. Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids

    PubMed Central

    Ilardo, Melissa; Meringer, Markus; Freeland, Stephen; Rasulev, Bakhtiyor; Cleaves II, H. James

    2015-01-01

    Using novel advances in computational chemistry, we demonstrate that the set of 20 genetically encoded amino acids, used nearly universally to construct all coded terrestrial proteins, has been highly influenced by natural selection. We defined an adaptive set of amino acids as one whose members thoroughly cover relevant physico-chemical properties, or “chemistry space.” Using this metric, we compared the encoded amino acid alphabet to random sets of amino acids. These random sets were drawn from a computationally generated compound library containing 1913 alternative amino acids that lie within the molecular weight range of the encoded amino acids. Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly. Further analysis of more adaptive sets reveals common features and anomalies, and we explore their implications for synthetic biology. We present these computations as evidence that the set of 20 amino acids found within the standard genetic code is the result of considerable natural selection. The amino acids used for constructing coded proteins may represent a largely global optimum, such that any aqueous biochemistry would use a very similar set. PMID:25802223

  7. Model selection and averaging in the assessment of the drivers of household food waste to reduce the probability of false positives.

    PubMed

    Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce

    2018-01-01

    Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions.

  8. Model selection and averaging in the assessment of the drivers of household food waste to reduce the probability of false positives

    PubMed Central

    Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce

    2018-01-01

    Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions. PMID:29389949

  9. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
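
    The images-versus-points trade-off can be explored with a two-stage simulation in the spirit of the study: image-level cover varies around the true value (patchiness), then points are scored binomially within images. All parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def survey(n_images, n_points, true_cover=0.10, between_image_sd=0.05):
    """Two-stage percent-cover estimate: image-level cover varies around the
    true value, then points are scored binomially within each image."""
    cover = np.clip(rng.normal(true_cover, between_image_sd, n_images), 0.0, 1.0)
    hits = rng.binomial(n_points, cover)
    return (hits / n_points).mean()

reps = 2000
more_images = [survey(50, 20) for _ in range(reps)]  # 1,000 points per survey
more_points = [survey(20, 50) for _ in range(reps)]  # same total point budget
print("SE, 50 images x 20 points:", round(float(np.std(more_images)), 4))
print("SE, 20 images x 50 points:", round(float(np.std(more_points)), 4))
```

    With between-image variance present, spending a fixed budget of 1,000 points as 50 images x 20 points yields a smaller standard error than 20 images x 50 points, matching the paper's conclusion that precision is best gained by scoring more images rather than more points per image.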

  10. Evaluation of rubella screening in pregnant women

    PubMed Central

    Gyorkos, T W; Tannenbaum, T N; Abrahamowicz, M; Delage, G; Carsley, J; Marchand, S

    1998-01-01

    BACKGROUND: The rationale for rubella vaccination in the general population and for screening for rubella in pregnant women is the prevention of congenital rubella syndrome. The objective of this study was to evaluate the effectiveness of the prenatal rubella screening program in Quebec. METHODS: A historical cross-sectional study was designed. Sixteen hospitals with obstetric services were randomly selected, 8 from among the 35 "large" hospitals in the province (500 or more live births/year) and 8 from among the 50 "small" hospitals (fewer than 500 live births/year). A total of 2551 women were randomly selected from all mothers of infants born between Apr. 1, 1993, and Mar. 31, 1994, by means of stratified 2-stage sampling. The proportions of women screened and vaccinated were ascertained from information obtained from the hospital chart, the physician's office and the patient. RESULTS: The overall (adjusted) screening rate was 94.0%. The rates were significantly different between large and small hospitals (94.4% v. 89.6%). Five large hospitals and one small hospital had rates above 95.0%. The likelihood of not having been screened was statistically significantly higher for women who had been pregnant previously than for women pregnant for the first time (4.8% v. 1.4%; p < 0.001). Of the 200 women who were seronegative at the time of screening (8.4%), 79 had been vaccinated postpartum, had a positive serological result on subsequent testing or did not require vaccination, and 59 had not been vaccinated postpartum; for 62, subsequent vaccination status was unknown. INTERPRETATION: Continued improvement in screening practices is needed, especially in small hospitals. Because vaccination rates are unacceptably low, it is crucial that steps be taken to address this issue. PMID:9835876

  11. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    PubMed

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, a requirement that is often impossible to meet in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean ± 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
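
    Two of the compared estimators are easy to reproduce on synthetic skewed data (the lognormal "creatinine" distribution below is an assumption, not the canine data): a nonparametric percentile interval on the full sample versus mean ± 2 SD after Box-Cox transformation on a 27-value subsample.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(120)
pop = rng.lognormal(mean=4.4, sigma=0.25, size=1439)  # skewed "creatinine" pool

# full-sample nonparametric reference interval (2.5th / 97.5th percentiles)
full_ri = np.percentile(pop, [2.5, 97.5])

# small-sample estimate: Box-Cox transform, mean +/- 2 SD, back-transform
small = rng.choice(pop, size=27, replace=False)
t, lam = stats.boxcox(small)
bounds = t.mean() + np.array([-2.0, 2.0]) * t.std(ddof=1)
small_ri = inv_boxcox(bounds, lam)

print("full-sample RI:    ", full_ri.round(1))
print("27-sample estimate:", small_ri.round(1))
```

    Rerunning the subsample draw shows the high variability the authors report: some 27-value estimates land close to the full-sample interval and others miss badly.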

  12. GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi

    2012-04-01

    Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using a Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide positive or negative though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis. Group 3 with 5000 randomly selected points on the landslide polygons, and 5000 randomly selected points along stable slopes gave the best results with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples, and the negative training samples of 3147 randomly selected points in regions of stable slope (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.
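
    The four-kernel comparison can be sketched with scikit-learn; the six covariates here are random stand-ins for the controlling parameters (elevation, slope angle, and so on), not GIS layers, and the labels come from an invented rule rather than the Wenchuan inventory.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(12)
n = 1000
X = rng.standard_normal((n, 6))  # stand-ins for the six controlling parameters
# invented rule: instability driven by "slope angle" and "distance from faults"
y = (X[:, 1] + 0.8 * X[:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)

for kernel in ("linear", "poly", "rbf", "sigmoid"):  # the four tested kernels
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{kernel:8s} cross-validated AUC = {auc:.3f}")
```

    For an actual susceptibility map, SVC would be fit with probability=True so that predict_proba can assign a landslide probability to every mapping unit.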

  13. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context

    PubMed Central

    Martinez, Josue G.; Carroll, Raymond J.; Müller, Samuel; Sampson, Joshua N.; Chatterjee, Nilanjan

    2012-01-01

    When employing model selection methods with oracle properties, such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals are large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNPs), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of variables selected by SCAD and the Adaptive Lasso with 10-fold cross-validation is a random variable with considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, not just SCAD and the Adaptive Lasso. PMID:22347720
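
    A small sketch of the phenomenon using the (non-oracle) Lasso, which the abstract says behaves similarly; SCAD and the Adaptive Lasso are not available in scikit-learn, so this is an analogy, and the simulated SNP-like data and effect sizes are invented:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(2)
    n, p = 200, 500
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # SNP-like 0/1/2 codes
    beta = np.zeros(p)
    beta[:10] = 0.15                                     # sparse and weak signals
    y = X @ beta + rng.normal(size=n)

    counts = []
    for seed in range(20):                               # 20 different fold splits
        cv = KFold(n_splits=10, shuffle=True, random_state=seed)
        fit = LassoCV(cv=cv).fit(X, y)
        counts.append(int(np.sum(fit.coef_ != 0)))
    print("selected-variable counts over 20 CV runs:", sorted(counts))
    ```

    The spread in the selected-variable counts across runs, driven only by the random fold assignment, is the variation the authors warn about.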

  14. Interacting particle systems on graphs

    NASA Astrophysics Data System (ADS)

    Sood, Vishal

    In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems drawn from physics, mathematics, and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation, it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, is insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, the emergence of fads, social cooperation, and the invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate. This selective advantage leads to a dynamical competition in which selection dominates for large populations, while for small populations the dynamics are similar to the neutral case. The likelihood for the fitter mutants to drive the resident genotype to extinction is calculated.

  15. Habitat manipulation influences northern bobwhite resource selection on a reclaimed surface mine

    USGS Publications Warehouse

    Brooke, Jarred M.; Peters, David C.; Unger, Ashley M.; Tanner, Evan P.; Harper, Craig A.; Keyser, Patrick D.; Clark, Joseph D.; Morgan, John J.

    2015-01-01

    More than 600,000 ha of mine land have been reclaimed in the eastern United States, providing large contiguous tracts of early successional vegetation that can be managed for northern bobwhite (Colinus virginianus). However, habitat quality on reclaimed mine land can be limited by extensive coverage of non-native invasive species, which are commonly planted during reclamation. We used discrete-choice analysis to investigate bobwhite resource selection throughout the year on Peabody Wildlife Management Area, a 3,330-ha reclaimed surface mine in western Kentucky. We used a treatment-control design to study resource selection at 2 spatial scales to identify important aspects of mine land vegetation and whether resource selection differed between areas with habitat management (i.e., burning, disking, herbicide; treatment) and unmanaged units (control). Our objectives were to estimate bobwhite resource selection on reclaimed mine land and to estimate the influence of habitat management practices on resource selection. We used locations from 283 individuals during the breeding season (1 Apr–30 Sep) and 136 coveys during the non-breeding season (1 Oct–31 Mar) from August 2009 to March 2014. Individuals were located closer to shrub cover than would be expected at random throughout the year. During the breeding season, individuals on treatment units used areas with smaller contagion index values (i.e., greater interspersion) compared with individuals on control units. During the non-breeding season, birds selected areas with greater shrub-open edge density compared with random. At the microhabitat scale, individuals selected areas with increased visual obstruction >1 m aboveground. During the breeding season, birds were closer to disked areas (linear and non-linear) than would be expected at random. Individuals selected non-linear disked areas during winter but did not select linear disked areas (firebreaks) because they were planted to winter wheat each fall and lacked cover during the non-breeding season. Individuals also selected areas treated with herbicide to control sericea lespedeza (Lespedeza cuneata) throughout the year. During the breeding season, bobwhites avoided areas burned during the previous dormant season. Habitat quality of reclaimed mine lands may be limited by a lack of shrub cover and extensive coverage of non-native herbaceous vegetation. Managers aiming to increase bobwhite abundance should focus on increasing interspersion of shrub cover, with no area >100 m from shrub cover. We suggest disking and herbicide application to control invasive species and improve the structure and composition of vegetation for bobwhites.

  16. Cosmic ray sources, acceleration and propagation

    NASA Technical Reports Server (NTRS)

    Ptuskin, V. S.

    1986-01-01

    A review is given of selected papers on the theory of cosmic ray (CR) propagation and acceleration. The high isotropy and comparatively large age of galactic CR are explained by the effective interaction of relativistic particles with random and regular electromagnetic fields in the interstellar medium. The kinetic theory of CR propagation in the Galaxy is formulated similarly to the elaborate theory of CR propagation in the heliosphere. The substantial difference between these theories is explained by the necessity to take into account, in some cases, the collective effects due to a rather high density of relativistic particles. In particular, the kinetic CR stream instability and the hydrodynamic Parker instability are studied. The interaction of relativistic particles with an ensemble of given weak random magnetic fields is calculated by perturbation theory. The theory of CR transfer is considered to be basically complete for this case. The main problem consists in poor information about the structure of the regular and random galactic magnetic fields. An account is given of CR transfer in a turbulent medium.

  17. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and a 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
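
    As a minimal illustration of the motivating problem only (not of the report's bounding methods), the following Monte Carlo shows how often a naive mean ± 2 SD interval built from n = 5 samples fails to bound the true central 95% of a standard normal:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, trials = 5, 10_000
    data = rng.normal(size=(trials, n))              # many sparse-data sets
    m = data.mean(axis=1)
    s = data.std(axis=1, ddof=1)
    # Does the naive mean +/- 2*SD interval bound the true central 95%
    # (about +/- 1.96 for a standard normal)?
    covers = (m - 2 * s <= -1.96) & (m + 2 * s >= 1.96)
    print(f"naive interval bounds the truth in {covers.mean():.0%} of trials")
    ```

    The coverage is far below a desired reliability level, which is exactly the under-estimation the report's conservative strategies are designed to avoid.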

  18. Kinetics of Aggregation with Choice

    DOE PAGES

    Ben-Naim, Eli; Krapivsky, Paul

    2016-12-01

    Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected, and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
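
    A quick simulation sketch of the merging rule as stated in the abstract (the initial condition and sizes are arbitrary choices): pick a random target and two distinct candidates, then merge the target with the larger candidate:

    ```python
    import random
    from collections import Counter

    random.seed(4)
    clusters = [1] * 5000                        # start from monomers
    while len(clusters) > 50:
        i = random.randrange(len(clusters))      # target cluster
        while True:                              # two distinct candidates != target
            j, k = random.sample(range(len(clusters)), 2)
            if i not in (j, k):
                break
        big = j if clusters[j] >= clusters[k] else k
        clusters[i] += clusters[big]             # target merges with larger candidate
        clusters.pop(big)
    print("five largest clusters:", sorted(clusters)[-5:])
    print("most common sizes:", Counter(clusters).most_common(3))
    ```

    Histogramming the final sizes over many runs would expose the overpopulated small- and large-size tails the authors report.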

  19. An Overview of Randomization and Minimization Programs for Randomized Clinical Trials

    PubMed Central

    Saghaei, Mahmoud

    2011-01-01

    Randomization is an essential component of sound clinical trials; it prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A possible adverse consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which can invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is completely deterministic: one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the properties of the next subject. To eliminate this predictability, it is necessary to include some element of randomness in the minimization algorithm. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
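
    For concreteness, here is a hedged sketch of one common minimization scheme with a random element, in the spirit of (but not copied from) the programs the article surveys; the factors, arms, and the 0.8 assignment probability are illustrative assumptions:

    ```python
    import random

    random.seed(5)
    ARMS = ("A", "B")
    FACTORS = ("sex", "age_group", "site")
    counts = {arm: {f: {} for f in FACTORS} for arm in ARMS}

    def assign(subject):
        def imbalance(arm):
            # Marginal-balance criterion: total count, over all factors, of
            # prior subjects in `arm` sharing this subject's factor levels.
            return sum(counts[arm][f].get(subject[f], 0) for f in FACTORS)
        scores = {arm: imbalance(arm) for arm in ARMS}
        best = min(scores, key=scores.get)
        # Random element: follow the minimizing arm only 80% of the time,
        # so the next allocation is never fully predictable.
        arm = best if random.random() < 0.8 else random.choice(ARMS)
        for f in FACTORS:
            counts[arm][f][subject[f]] = counts[arm][f].get(subject[f], 0) + 1
        return arm

    print(assign({"sex": "F", "age_group": "<50", "site": "1"}))
    ```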

  20. Nest-site selection and reproductive success of greater sage-grouse in a fire-affected habitat of northwestern Nevada

    USGS Publications Warehouse

    Lockyer, Zachary B.; Coates, Peter S.; Casazza, Michael L.; Espinosa, Shawn; Delehanty, David J.

    2015-01-01

    Identifying links between micro-habitat selection and wildlife reproduction is imperative for managing population persistence and recovery. This information is particularly important for landscape species such as the greater sage-grouse (Centrocercus urophasianus; sage-grouse). Although this species has been widely studied, local and regional studies are crucial for developing viable conservation strategies because environmental factors can affect sage-grouse populations. We studied the habitat-use patterns of 71 radio-marked sage-grouse inhabiting an area affected by wildfire in the Virginia Mountains of northwestern Nevada during 2009–2011 to determine the effect of micro-habitat attributes on reproductive success. We measured standard vegetation parameters at nest and random sites using a multi-scale approach (range = 0.01–15,527 ha). We used an information-theoretic modeling approach to identify environmental factors influencing nest-site selection and survival, and to determine whether nest survival was a function of resource selection. Sage-grouse selected micro-sites with greater shrub canopy cover and less cheatgrass (Bromus tectorum) cover than random sites. Total shrub canopy, including sagebrush (Artemisia spp.) and other shrub species, at small spatial scales (0.8 ha and 3.1 ha) was the single selection factor contributing to higher nest survival. These results indicate that reducing the risk of wildfire to maintain important sagebrush habitats could be emphasized in sage-grouse conservation strategies in Nevada. Managers may seek to mitigate the influx of annual grass invasion by preserving large intact sagebrush-dominated stands with a mixture of other shrub species. For this area of Nevada, the results suggest that ≥40% total shrub canopy cover in sage-grouse nesting areas could yield improved reproductive success.

  1. Predictors of Start of Different Antidepressants in Patient Charts among Patients with Depression

    PubMed Central

    Kim, Hyungjin Myra; Zivin, Kara; Choe, Hae Mi; Stano, Clare M.; Ganoczy, Dara; Walters, Heather; Valenstein, Marcia

    2016-01-01

    Background In usual psychiatric care, antidepressant treatments are selected based on physician and patient preferences rather than random allocation, which can produce spurious associations between treatment and outcome in observational studies. Objectives To identify factors recorded in electronic medical chart progress notes that predict antidepressant selection among patients who had received a depression diagnosis. Methods This retrospective study sample consisted of 556 randomly selected Veterans Health Administration (VHA) patients diagnosed with depression from April 1, 1999 to September 30, 2004, stratified by antidepressant agent, geographic region, gender, and year of depression cohort entry. Predictors were obtained from administrative data, and additional variables were abstracted from electronic medical chart notes in the year prior to the start of the antidepressant in five categories: clinical symptoms and diagnoses, substance use, life stressors, behavioral/ideation measures (e.g., suicide attempts), and treatments received. Multinomial logistic regression analysis was used to assess the predictors associated with prescribing of the different antidepressants, and adjusted relative risk ratios (RRR) are reported. Results Of the administrative data-based variables, gender, age, illicit drug abuse or dependence, and number of psychiatric medications in the prior year were significantly associated with antidepressant selection. After adjusting for administrative data-based variables, sleep problems (RRR = 2.47) or marital issues (RRR = 2.64) identified in the charts were significantly associated with prescribing mirtazapine rather than sertraline; no other chart-based variable showed a significant or large association. Conclusion Some chart data-based variables were predictive of antidepressant selection, but they were few and not highly predictive in patients treated for depression. PMID:25943003

  2. A Network Meta-Analysis Comparing Effects of Various Antidepressant Classes on the Digit Symbol Substitution Test (DSST) as a Measure of Cognitive Dysfunction in Patients with Major Depressive Disorder.

    PubMed

    Baune, Bernhard T; Brignone, Mélanie; Larsen, Klaus Groes

    2018-02-01

    Major depressive disorder is a common condition that often includes cognitive dysfunction. A systematic literature review of studies and a network meta-analysis were carried out to assess the relative effect of antidepressants on cognitive dysfunction in major depressive disorder. MEDLINE, Embase, Cochrane, CDSR, and PsycINFO databases; clinical trial registries; and relevant conference abstracts were searched for randomized controlled trials assessing the effects of antidepressants/placebo on cognition. A network meta-analysis comparing antidepressants was conducted using a random effects model. The database search retrieved 11,337 citations, of which 72 randomized controlled trials from 103 publications met the inclusion criteria. The review identified 86 cognitive tests assessing the effect of antidepressants on cognitive functioning. However, the Digit Symbol Substitution Test, which targets multiple domains of cognition and is recognized as being sensitive to change, was the only test that was used across 12 of the included randomized controlled trials and that allowed the construction of a stable network suitable for the network meta-analysis. The interventions assessed included selective serotonin reuptake inhibitors, serotonin-norepinephrine reuptake inhibitors, and other non-selective serotonin reuptake inhibitor/serotonin-norepinephrine reuptake inhibitor agents. The network meta-analysis using the Digit Symbol Substitution Test showed that vortioxetine was the only antidepressant that improved cognitive dysfunction on the Digit Symbol Substitution Test vs placebo (standardized mean difference: 0.325; 95% CI: 0.120, 0.529; P = .009). Vortioxetine was also statistically more efficacious on the Digit Symbol Substitution Test than escitalopram, nortriptyline, and the selective serotonin reuptake inhibitor and tricyclic antidepressant classes. This study highlighted the large variability in measures used to assess cognitive functioning. The findings on the Digit Symbol Substitution Test indicate differential effects of various antidepressants on improving cognitive function in patients with major depressive disorder. © The Author 2017. Published by Oxford University Press on behalf of CINP.

  3. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
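
    A simplified sketch of the plain IID Bayesian bootstrap on which the method builds: unit weights are drawn from a flat Dirichlet posterior and a synthetic population is resampled with those weights. The paper's actual contribution, undoing stratification, clustering, and unequal selection probabilities, is not shown here:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    sample = rng.normal(50, 10, size=200)     # observed (simulated) survey values
    N = 10_000                                # synthetic population size

    w = rng.dirichlet(np.ones(sample.size))   # Bayesian bootstrap posterior weights
    synthetic = rng.choice(sample, size=N, replace=True, p=w)
    print(f"synthetic population mean {synthetic.mean():.2f}, SD {synthetic.std():.2f}")
    ```

    Repeating the draw of the weights yields multiple synthetic populations, whose between-population variability reflects posterior uncertainty.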

  4. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs. PMID:29200608

  5. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    PubMed

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
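
    A minimal sketch of the per-family binomial test described above, with hypothetical counts: each family's number of medicinal species is compared against the flora-wide proportion under a two-sided binomial test:

    ```python
    from scipy.stats import binomtest

    # Whole-flora counts (hypothetical, for illustration only)
    flora_total, flora_medicinal = 3000, 600
    p0 = flora_medicinal / flora_total          # null proportion of medicinal species

    # (total species, medicinal species) per family -- hypothetical values
    families = {"Asteraceae": (120, 40), "Poaceae": (150, 10)}
    for name, (n, m) in families.items():
        res = binomtest(m, n, p0)               # two-sided by default
        print(f"{name}: observed {m}/{n} vs null {p0:.2f}, p = {res.pvalue:.4f}")
    ```

    Families with small p-values are the over- or under-represented groups that drive the patterns seen at higher taxonomic levels.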

  6. Estimation of breeding values using selected pedigree records.

    PubMed

    Morton, Richard; Howarth, Jordan M

    2005-06-01

    Fish bred in tanks or ponds cannot be easily tagged individually. The parentage of any individual may be determined by DNA fingerprinting, but this is sufficiently expensive that large numbers cannot be fingerprinted. The measurement of the objective trait can be made on a much larger sample relatively cheaply. This article deals with experimental designs for selecting individuals to be fingerprinted and for the estimation of the individual and family breeding values. The general setup provides estimates both for genetic effects regarded as fixed or random and for fixed effects due to known regressors. The family effects can be well estimated even when very small numbers are fingerprinted, provided that they are the individuals with the most extreme phenotypes.

  7. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
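
    For illustration, here is the linear congruential recurrence x_{k+1} = (a·x_k + c) mod m in Python rather than the report's FORTRAN; the constants shown are the well-known Park-Miller "minimal standard" parameters, not necessarily those the report selects:

    ```python
    def lcg(seed, a=16807, c=0, m=2**31 - 1):
        """Linear congruential generator: x <- (a*x + c) mod m."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m            # uniform variate in (0, 1)

    gen = lcg(seed=12345)
    print([round(next(gen), 4) for _ in range(5)])
    ```

    Parameter selection matters because poor choices of a, c, and m produce short cycles and strong lattice structure, which is exactly what tools like RANCYCLE and ARITH help diagnose.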

  8. Effects of prey abundance, distribution, visual contrast and morphology on selection by a pelagic piscivore

    USGS Publications Warehouse

    Hansen, Adam G.; Beauchamp, David A.

    2014-01-01

    Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection, and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (which accounted for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar-sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.

  9. Coevolutionary dynamics in large, but finite populations

    NASA Astrophysics Data System (ADS)

    Traulsen, Arne; Claussen, Jens Christian; Hauert, Christoph

    2006-07-01

    Coevolving and competing species or game-theoretic strategies exhibit rich and complex dynamics for which a general theoretical framework based on finite populations is still lacking. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection with two strategies in finite populations based on microscopic processes [A. Traulsen, J. C. Claussen, and C. Hauert, Phys. Rev. Lett. 95, 238701 (2005)]. Here we generalize this approach in a twofold way: First, we extend the framework to an arbitrary number of strategies and second, we allow for mutations in the evolutionary process. The deterministic limit of infinite population size of the frequency-dependent Moran process yields the adjusted replicator-mutator equation, which describes the combined effect of selection and mutation. For finite populations, we provide an extension taking random drift into account. In the limit of neutral selection, i.e., whenever the process is determined by random drift and mutations, the stationary strategy distribution is derived. This distribution forms the background for the coevolutionary process. In particular, a critical mutation rate u_c is obtained separating two scenarios: above u_c the population predominantly consists of a mixture of strategies, whereas below u_c the population tends to be in homogeneous states. For one of the fundamental problems in evolutionary biology, the evolution of cooperation under Darwinian selection, we demonstrate that the analytical framework provides excellent approximations to individual based simulations even for rather small population sizes. This approach complements simulation results and provides a deeper, systematic understanding of coevolutionary dynamics.

  10. Group Counseling With Emotionally Disturbed School Children in Taiwan.

    ERIC Educational Resources Information Center

    Chiu, Peter

    The application of group counseling to emotionally disturbed school children in Chinese culture was examined. Two junior high schools located in Tao-Yuan Province were randomly selected with two eighth-grade classes randomly selected from each school. Ten emotionally disturbed students were chosen from each class and randomly assigned to two…

  11. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  12. On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial

    ERIC Educational Resources Information Center

    Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean

    2017-01-01

    Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…

  13. Not a Copernican observer: biased peculiar velocity statistics in the local Universe

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej

    2017-05-01

    We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ~160 h⁻¹ Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h⁻¹ Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.

  14. Evolutionary dynamics on any population structure

    NASA Astrophysics Data System (ADS)

    Allen, Benjamin; Lippner, Gabor; Chen, Yu-Ting; Fotouhi, Babak; Momeni, Naghmeh; Yau, Shing-Tung; Nowak, Martin A.

    2017-03-01

    Evolution occurs in populations of reproducing individuals. The structure of a population can affect which traits evolve. Understanding evolutionary game dynamics in structured populations remains difficult. Mathematical results are known for special structures in which all individuals have the same number of neighbours. The general case, in which the number of neighbours can vary, has remained open. For arbitrary selection intensity, the problem is in a computational complexity class that suggests there is no efficient algorithm. Whether a simple solution for weak selection exists has remained unanswered. Here we provide a solution for weak selection that applies to any graph or network. Our method relies on calculating the coalescence times of random walks. We evaluate large numbers of diverse population structures for their propensity to favour cooperation. We study how small changes in population structure—graph surgery—affect evolutionary outcomes. We find that cooperation flourishes most in societies that are based on strong pairwise ties.
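
    As a hedged sketch of the computational core described above (not the authors' published code), the following Python snippet solves the linear system for pairwise coalescence times of two random walkers on a small example graph, under the common convention that at each step one of the two walkers, chosen at random, moves; the toy graph is an arbitrary choice:

    ```python
    import numpy as np

    # Adjacency of a small toy graph: a triangle (0, 1, 2) with pendant node 3.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    P = A / A.sum(axis=1, keepdims=True)      # random-walk transition matrix
    n = len(A)

    # Coalescence times satisfy tau_ii = 0 and, for i != j,
    #   tau_ij = 1 + (1/2) * sum_k (P_ik * tau_kj + P_jk * tau_ik),
    # which is a linear system in the n*n pair variables.
    pairs = [(i, j) for i in range(n) for j in range(n)]
    idx = {pr: t for t, pr in enumerate(pairs)}
    M = np.zeros((n * n, n * n))
    b = np.zeros(n * n)
    for (i, j), t in idx.items():
        M[t, t] += 1.0
        if i != j:
            b[t] = 1.0
            for k in range(n):
                M[t, idx[(k, j)]] -= 0.5 * P[i, k]
                M[t, idx[(i, k)]] -= 0.5 * P[j, k]
    tau = np.linalg.solve(M, b).reshape(n, n)
    print(np.round(tau, 2))
    ```

    In the paper's framework, quantities built from these coalescence times determine whether weak selection favours cooperation on the given graph.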

  15. Increased Mycoplasma hyopneumoniae Disease Prevalence in Domestic Hybrids Among Free-Living Wild Boar.

    PubMed

    Goedbloed, Daniel J; van Hooft, Pim; Lutz, Walburga; Megens, Hendrik-Jan; van Wieren, Sip E; Ydenberg, Ron C; Prins, Herbert H T

    2015-12-01

    Wildlife immune genes are subject to natural selection exerted by pathogens. In contrast, domestic immune genes are largely protected from pathogen selection by veterinary care. Introgression of domestic alleles into the wild could lead to increased disease susceptibility, but observations are scarce due to low introgression rates, low disease prevalence, and reduced survival of domestic hybrids. Here we report the first observation of a deleterious effect of domestic introgression on disease prevalence in a free-living large mammal. A fraction of 462 randomly sampled free-living European wild boar (Sus scrofa) were genetically identified as recent wild boar-domestic pig hybrids based on data from 351 SNPs. Analysis of antibody prevalence against the bacterial pathogen Mycoplasma hyopneumoniae (Mhyo) showed an increased Mhyo prevalence in wild-domestic hybrids. We argue that the most likely mechanism explaining the observed association between domestic hybrid status and Mhyo antibody prevalence is introgression of deleterious domestic alleles. We hypothesise that large-scale use of antibiotics in the swine breeding sector may have played a role in shaping the relatively deleterious properties of domestic swine immune genes, and that domestic introgression may also lead to increased wildlife disease susceptibility in other species.

  16. Five years after the Metric Conversion Act, where do we stand? Survey of large US manufacturing and mining firms (the Fortune Magazine 1000)

    NASA Astrophysics Data System (ADS)

    1980-12-01

    A mail survey of 202 randomly chosen firms among the 1000 largest manufacturing and mining firms, as listed by Fortune magazine, was conducted in late 1979 and early 1980. About 64 percent (112 firms) responded with useful data. This Executive Summary draws on the full report (U.S. Metric Board 1979 Survey of Selected Large U.S. Firms and Industries, Lisa King, King Research, Inc., May 1980; AD-A-091-618) and provides an overview of the study's findings. Selected findings include: (1) about 30 percent of the large firms produce at least one hard metric product; (2) about 48 percent of foreign sales are of metric products; (3) little corporate coordination and planning seems to accompany conversion to the metric system; (4) about one-third of the firms see laws and regulations as impeding conversion; (5) over 50 percent see lack of customer demand as inhibiting conversion; (6) the most realistic time period for conversion is 10 years, the minimum time for conversion (under pressure) is three years, and the preferred time (at the firm's own pace) is eight years.

  17. Assortative mating can impede or facilitate fixation of underdominant alleles.

    PubMed

    Newberry, Mitchell G; McCandlish, David M; Plotkin, Joshua B

    2016-12-01

    Underdominant mutations have fixed between divergent species, yet classical models suggest that rare underdominant alleles are purged quickly except in small or subdivided populations. We predict that underdominant alleles that also influence mate choice, such as those affecting coloration patterns visible to mates and predators alike, can fix more readily. We analyze a mechanistic model of positive assortative mating in which individuals have n chances to sample compatible mates. This one-parameter model naturally spans random mating (n=1) and complete assortment (n→∞), yet it produces sexual selection whose strength depends non-monotonically on n. This sexual selection interacts with viability selection to either inhibit or facilitate fixation. As mating opportunities increase, underdominant alleles fix as frequently as neutral mutations, even though sexual selection and underdominance independently each suppress rare alleles. This mechanism allows underdominant alleles to fix in large populations and illustrates how life history can affect evolutionary change. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

    It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to begin three-dimensional modeling of heterogeneous propellant combustion. A key component of such large-scale computations is a realistic model of industrial propellants that retains the true morphology, a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm that randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated, and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to be a valuable tool for the development of safe and reliable rocket engines.
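
    A much-simplified sketch of the idea in Python: the dissertation's algorithm grows spheres and runs in parallel, whereas this one only performs random sequential placement with overlap rejection, and the box size, radii, and attempt counts are arbitrary assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    box = 10.0                                    # cubic domain edge length
    radii = rng.uniform(0.2, 0.6, size=1500)      # polydisperse candidate radii
    centers, placed_r = np.empty((0, 3)), np.empty(0)

    for r in radii:
        for _ in range(30):                       # limited placement attempts
            c = rng.uniform(r, box - r, size=3)   # keep the sphere inside the box
            d = np.linalg.norm(centers - c, axis=1)
            if np.all(d >= placed_r + r):         # reject any overlap
                centers = np.vstack([centers, c])
                placed_r = np.append(placed_r, r)
                break

    fraction = (4 / 3) * np.pi * np.sum(placed_r**3) / box**3
    print(f"placed {len(placed_r)} spheres, packing fraction = {fraction:.3f}")
    ```

    Plain rejection packing like this saturates well below the densities industrial propellants require, which is why the dissertation's growth-rate-controlled algorithm is needed to reach experimental packing fractions.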

  19. Population genetics of polymorphism and divergence for diploid selection models with arbitrary dominance.

    PubMed

    Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D

    2004-09-01

    We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.

  20. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    PubMed

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  1. Continuous and discontinuous phase transitions in the evolution of a polygenic trait under stabilizing selective pressure

    NASA Astrophysics Data System (ADS)

    Fierro, Annalisa; Cocozza, Sergio; Monticelli, Antonella; Scala, Giovanni; Miele, Gennaro

    2017-06-01

    The presence of phenomena analogous to phase transitions in Statistical Mechanics has been suggested in the evolution of a polygenic trait under stabilizing selection, mutation, and genetic drift. By using numerical simulations of a model system, we analyze the evolution of a population of N diploid hermaphrodites in a random mating regime. The population evolves under the effect of drift, selective pressure in the form of viability on an additive polygenic trait, and mutation. The analysis allows us to determine a phase diagram in the plane of mutation rate and strength of selection. The resulting pattern of phase transitions is characterized by a line of critical points for weak selective pressure (smaller than a threshold), whereas discontinuous phase transitions, characterized by metastable hysteresis, are observed for strong selective pressure. A finite-size scaling analysis suggests an analogy between our system and the mean-field Ising model for selective pressure approaching the threshold from weaker values. In this framework, the mutation rate, which allows the system to explore the accessible microscopic states, is the parameter controlling the transition from large heterozygosity (disordered phase) to small heterozygosity (ordered phase).

  2. Effects on readiness to change of an educational intervention on depressive disorders for general physicians in primary care based on a modified Prochaska model--a randomized controlled study.

    PubMed

    Shirazi, M; Zeinaloo, A A; Parikh, S V; Sadeghi, M; Taghva, A; Arbabi, M; Kashani, A Sabouri; Alaeddini, F; Lonka, K; Wahlström, R

    2008-04-01

    The Prochaska model of readiness to change has been proposed for use in educational interventions to improve medical care. The aim was to evaluate the impact on readiness to change of an educational intervention on the management of depressive disorders, based on a modified version of the Prochaska model, in comparison with a standard programme of continuing medical education (CME). This is a randomized controlled trial within primary care practices in southern Tehran, Iran. The participants were 192 general physicians working in primary care (GPs), recruited after random selection and randomized to intervention (96) and control (96) arms. The intervention consisted of interactive, learner-centred educational methods in large and small group settings, depending on the GPs' stages of readiness to change. Change in stage of readiness to change, measured by the modified version of the Prochaska questionnaire, was the primary outcome measure. The final number of participants was 78 (81%) in the intervention arm and 81 (84%) in the control arm. Significantly more GPs in the intervention group (57/96 = 59% versus 12/96 = 12%; P < 0.01) changed to higher stages of readiness to change. The intervention effect was 46 percentage points (P < 0.001) and 50 percentage points (P < 0.001) in the large and small group settings, respectively. Educational formats that suit different stages of learning can support primary care doctors in reaching higher stages of behavioural change on the topic of depressive disorders. Our findings have practical implications for conducting CME programmes in Iran and are possibly also applicable in other parts of the world.

  3. PROspective Multicenter Imaging Study for Evaluation of chest pain: rationale and design of the PROMISE trial.

    PubMed

    Douglas, Pamela S; Hoffmann, Udo; Lee, Kerry L; Mark, Daniel B; Al-Khalidi, Hussein R; Anstrom, Kevin; Dolor, Rowena J; Kosinski, Andrzej; Krucoff, Mitchell W; Mudrick, Daniel W; Patel, Manesh R; Picard, Michael H; Udelson, James E; Velazquez, Eric J; Cooper, Lawton

    2014-06-01

    Suspected coronary artery disease (CAD) is one of the most common, potentially life-threatening diagnostic problems clinicians encounter. However, no large outcome-based randomized trials have been performed to guide the selection of diagnostic strategies for these patients. The PROMISE study is a prospective, randomized trial comparing the effectiveness of 2 initial diagnostic strategies in patients with symptoms suspicious for CAD. Patients are randomized to either (1) functional testing (exercise electrocardiogram, stress nuclear imaging, or stress echocardiogram) or (2) anatomical testing with ≥64-slice multidetector coronary computed tomographic angiography. Tests are interpreted locally in real time by subspecialty certified physicians, and all subsequent care decisions are made by the clinical care team. Sites are provided results of central core laboratory quality and completeness assessment. All subjects are followed up for ≥1 year. The primary end point is the time to occurrence of the composite of death, myocardial infarction, major procedural complications (stroke, major bleeding, anaphylaxis, and renal failure), or hospitalization for unstable angina. More than 10,000 symptomatic subjects were randomized in 3.2 years at 193 US and Canadian cardiology, radiology, primary care, urgent care, and anesthesiology sites. Multispecialty community practice enrollment into a large pragmatic trial of diagnostic testing strategies is both feasible and efficient. The PROMISE trial will compare the clinical effectiveness of an initial strategy of functional testing against an initial strategy of anatomical testing in symptomatic patients with suspected CAD. Quality of life, resource use, cost-effectiveness, and radiation exposure will be assessed. Copyright © 2014 Mosby, Inc. All rights reserved.

  4. PROspective Multicenter Imaging Study for Evaluation of Chest Pain: Rationale and Design of the PROMISE Trial

    PubMed Central

    Douglas, Pamela S.; Hoffmann, Udo; Lee, Kerry L.; Mark, Daniel B.; Al-Khalidi, Hussein R.; Anstrom, Kevin; Dolor, Rowena J.; Kosinski, Andrzej; Krucoff, Mitchell W.; Mudrick, Daniel W.; Patel, Manesh R.; Picard, Michael H.; Udelson, James E.; Velazquez, Eric J.; Cooper, Lawton

    2014-01-01

    Background Suspected coronary artery disease (CAD) is one of the most common, potentially life-threatening diagnostic problems clinicians encounter. However, no large outcome-based randomized trials have been performed to guide the selection of diagnostic strategies for these patients. Methods The PROMISE study is a prospective, randomized trial comparing the effectiveness of two initial diagnostic strategies in patients with symptoms suspicious for CAD. Patients are randomized to either: 1) functional testing (exercise electrocardiogram, stress nuclear imaging, or stress echocardiogram); or 2) anatomic testing with ≥64-slice multidetector coronary computed tomographic angiography. Tests are interpreted locally in real time by subspecialty certified physicians, and all subsequent care decisions are made by the clinical care team. Sites are provided results of central core lab quality and completeness assessment. All subjects are followed for ≥1 year. The primary end-point is the time to occurrence of the composite of death, myocardial infarction, major procedural complications (stroke, major bleeding, anaphylaxis and renal failure) or hospitalization for unstable angina. Results Over 10,000 symptomatic subjects were randomized in 3.2 years at 193 US and Canadian cardiology, radiology, primary care, urgent care and anesthesiology sites. Conclusion Multi-specialty community practice enrollment into a large pragmatic trial of diagnostic testing strategies is both feasible and efficient. PROMISE will compare the clinical effectiveness of an initial strategy of functional testing against an initial strategy of anatomic testing in symptomatic patients with suspected CAD. Quality of life, resource use, cost effectiveness and radiation exposure will be assessed. ClinicalTrials.gov identifier: NCT01174550 PMID:24890527

  5. A case management intervention targeted to reduce healthcare consumption for frequent Emergency Department visitors: results from an adaptive randomized trial.

    PubMed

    Edgren, Gustaf; Anderson, Jacqueline; Dolk, Anders; Torgerson, Jarl; Nyberg, Svante; Skau, Tommy; Forsberg, Birger C; Werr, Joachim; Öhlen, Gunnar

    2016-10-01

    A small group of frequent visitors to Emergency Departments accounts for a disproportionally large fraction of healthcare consumption including unplanned hospitalizations and overall healthcare costs. In response, several case and disease management programs aimed at reducing healthcare consumption in this group have been tested; however, results vary widely. To investigate whether a telephone-based, nurse-led case management intervention can reduce healthcare consumption for frequent Emergency Department visitors in a large-scale setup. A total of 12 181 frequent Emergency Department users in three counties in Sweden were randomized using Zelen's design or a traditional randomized design to receive either a nurse-led case management intervention or no intervention, and were followed for healthcare consumption for up to 2 years. The traditional design showed an overall 12% (95% confidence interval 4-19%) decreased rate of hospitalization, which was mostly driven by effects in the last year. Similar results were achieved in the Zelen studies, with a significant reduction in hospitalization in the last year, but mixed results in the early development of the project. Our study provides evidence that a carefully designed telephone-based intervention with accurate and systematic patient selection and appropriate staff training in a centralized setup can lead to significant decreases in healthcare consumption and costs. Further, our results also show that the effects are sensitive to the delivery model chosen.

  6. The Coalescent Process in Models with Selection

    PubMed Central

    Kaplan, N. L.; Darden, T.; Hudson, R. R.

    1988-01-01

    Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685

  7. Evolution of Endovascular Therapy in Acute Stroke: Implications of Device Development

    PubMed Central

    Balasubramaian, Adithya; Mitchell, Peter; Dowling, Richard

    2015-01-01

    Intravenous thrombolysis is an effective treatment for acute ischaemic stroke. However, vascular recanalization rates remain poor especially in the setting of large artery occlusion. On the other hand, endovascular intra-arterial therapy addresses this issue with superior recanalization rates compared with intravenous thrombolysis. Although previous randomized controlled studies of intra-arterial therapy failed to demonstrate superiority, the failings may be attributed to a combination of inferior intra-arterial devices and suboptimal selection criteria. The recent results of several randomized controlled trials have demonstrated significantly improved outcomes, underpinning the advantage of newer intra-arterial devices and superior recanalization rates, leading to renewed interest in establishing intra-arterial therapy as the gold standard for acute ischaemic stroke. The aim of this review is to outline the history and development of different intra-arterial devices and future directions in research. PMID:26060800

  8. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
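
    As an illustration of the underlying algorithm, the following minimal sketch anneals a random 3-SAT instance at the 100-variable/425-clause size reported above. It is a plain sequential simulated annealing loop with an invented cooling schedule; the paper's parallel Generalized Speculative Computation procedure is not reproduced.

```python
import math
import random

random.seed(1)

def random_ksat(n_vars, n_clauses, k=3):
    """Generate a random k-SAT instance: each clause is k distinct literals."""
    clauses = []
    for _ in range(n_clauses):
        vs = random.sample(range(n_vars), k)
        clauses.append([v + 1 if random.random() < 0.5 else -(v + 1) for v in vs])
    return clauses

def unsatisfied(clauses, assign):
    """Count clauses with no true literal under the current assignment."""
    return sum(
        not any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def anneal(clauses, n_vars, t0=2.0, cooling=0.999, steps=20_000):
    assign = [random.random() < 0.5 for _ in range(n_vars)]
    cost, t = unsatisfied(clauses, assign), t0
    for _ in range(steps):
        v = random.randrange(n_vars)          # propose flipping one variable
        assign[v] = not assign[v]
        new_cost = unsatisfied(clauses, assign)
        # accept improvements always, worsenings with Boltzmann probability
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            assign[v] = not assign[v]         # reject: undo the flip
        t *= cooling
    return assign, cost

clauses = random_ksat(100, 425)
_, remaining = anneal(clauses, 100)
print(f"{1 - remaining / len(clauses):.3%} of clauses satisfied")
```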

  9. Selection response and genetic parameters for residual feed intake in Yorkshire swine.

    PubMed

    Cai, W; Casey, D S; Dekkers, J C M

    2008-02-01

    Residual feed intake (RFI) is a measure of feed efficiency defined as the difference between the observed feed intake and that predicted from the average requirements for growth and maintenance. The objective of this study was to evaluate the response in a selection experiment consisting of a line selected for low RFI and a random control line, and to estimate the genetic parameters for RFI and related production and carcass traits. Beginning with random allocation of purebred Yorkshire littermates, in each generation, electronically measured ADFI, ADG, and ultrasound backfat (BF) were evaluated during an approximately 40- to 115-kg BW test period on approximately 90 boars from first-parity and approximately 90 gilts from second-parity sows of the low-RFI line. After evaluation of first-parity boars, approximately 12 boars and approximately 70 gilts from the low-RFI line were selected to produce approximately 50 litters for the next generation. Approximately 30 control-line litters were produced by random selection and mating. Selection was on EBV for RFI from an animal model analysis of ADFI, with on-test group and sex (fixed), pen within group and litter (random), and covariates for interactions of on- and off-test BW, on-test age, ADG, and BF with generations. The RFI explained 34% of phenotypic variation in ADFI. After 4 generations of selection, estimates of heritability for RFI, ADFI, ADG, feed efficiency (FE, the reciprocal of the feed conversion ratio, ADG/ADFI), and ultrasound-predicted BF, LM area (LMA), and intramuscular fat (IMF) were 0.29, 0.51, 0.42, 0.17, 0.68, 0.57, and 0.28, respectively; predicted responses based on average EBV in the low-RFI line were -114, -202, and -39 g/d for RFI (0.9 phenotypic SD), ADFI (0.9 SD), and ADG (0.4 SD), respectively, and 1.56% for FE (0.5 SD), -0.37 mm for BF (0.1 SD), 0.35 cm² for LMA (0.1 SD), and -0.10% for IMF (0.3 SD). Direct phenotypic comparison of the low-RFI and control lines, based on 92 low-RFI and 76 control gilts from the second parity of generation 4, showed that selection had significantly decreased RFI by 96 g/d (P = 0.002) and ADFI by 165 g/d (P < 0.0001). The low-RFI line also had 33 g/d lower ADG (P = 0.022), 1.36% greater FE (P = 0.09), and 1.99 mm less BF (P = 0.013). There was no significant difference in LMA or other carcass traits, including subjective marbling score, despite a large observed difference in ultrasound-predicted IMF (-1.05%; P < 0.0001). In conclusion, RFI is a heritable trait, and selection for low RFI has significantly decreased the feed required for a given rate of growth and backfat.
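
    For readers unfamiliar with the phenotypic side of the trait, the sketch below shows the textbook computation of RFI as the residual from regressing observed feed intake on growth and maintenance predictors. The data and coefficients are invented; the animal-model EBV analysis actually used for selection in this study is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical phenotypes for 200 pigs (g/d for intake and gain, mm for BF).
n = 200
adg = rng.normal(900, 80, n)            # average daily gain
bf = rng.normal(14, 2, n)               # ultrasound backfat
mid_wt = rng.normal(77, 5, n) ** 0.75   # metabolic mid-test body weight
adfi = 1.9 * adg + 45 * bf + 8 * mid_wt + rng.normal(0, 120, n)  # observed intake

# RFI = observed ADFI minus intake predicted from growth and maintenance.
X = np.column_stack([np.ones(n), adg, bf, mid_wt])
beta, *_ = np.linalg.lstsq(X, adfi, rcond=None)
rfi = adfi - X @ beta

print(rfi.std())  # phenotypic SD of RFI; selection favors animals with low RFI
```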

  10. Effects of Selected Meditative Asanas on Kinaesthetic Perception and Speed of Movement

    ERIC Educational Resources Information Center

    Singh, Kanwaljeet; Bal, Baljinder S.; Deol, Nishan S.

    2009-01-01

    Study aim: To assess the effects of selected meditative "asanas" on kinesthetic perception and movement speed. Material and methods: Thirty randomly selected male students aged 18-24 years volunteered to participate in the study. They were randomly assigned into two groups: A (meditative) and B (control). The Nelson's movement speed and…

  11. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  12. Random one-of-N selector

    DOEpatents

    Kronberg, J.W.

    1993-04-20

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  13. Random one-of-N selector

    DOEpatents

    Kronberg, James W.

    1993-01-01

    An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.

  14. Indirect estimation of signal-dependent noise with nonadaptive heterogeneous samples.

    PubMed

    Azzari, Lucio; Foi, Alessandro

    2014-08-01

    We consider the estimation of signal-dependent noise from a single image. Unlike conventional algorithms that build a scatterplot of local mean-variance pairs from either small or adaptively selected homogeneous data samples, our proposed approach relies on arbitrarily large patches of heterogeneous data extracted at random from the image. We demonstrate the feasibility of our approach through an extensive theoretical analysis based on mixture of Gaussian distributions. A prototype algorithm is also developed in order to validate the approach on simulated data as well as on real camera raw images.
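
    For contrast with the proposed heterogeneous-patch approach, the conventional scatterplot baseline mentioned above is easy to sketch: compute local mean-variance pairs over small patches and fit an affine signal-dependent noise model var(z) = a·y + b. The synthetic image and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic raw image with Poissonian-Gaussian noise: var(z) = a*y + b.
a, b = 0.01, 0.0004
clean = np.tile(np.linspace(0.1, 0.9, 256), (256, 1))
noisy = clean + rng.normal(0.0, np.sqrt(a * clean + b))

# Conventional scatterplot approach: local mean/variance over small patches.
ps = 8
means, variances = [], []
for i in range(0, 256, ps):
    for j in range(0, 256, ps):
        patch = noisy[i:i + ps, j:j + ps]
        means.append(patch.mean())
        variances.append(patch.var(ddof=1))

# Fit the affine noise model var = a*mean + b to the scatterplot.
A = np.column_stack([means, np.ones(len(means))])
a_hat, b_hat = np.linalg.lstsq(A, variances, rcond=None)[0]
print(a_hat, b_hat)  # should approximate the true (a, b)
```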

  15. "Open mesh" or "strictly selected population" recruitment? The experience of the randomized controlled MeMeMe trial.

    PubMed

    Cortellini, Mauro; Berrino, Franco; Pasanisi, Patrizia

    2017-01-01

    Among randomized controlled trials (RCTs), trials for primary prevention require large samples and long follow-up to obtain a high-quality outcome; therefore the recruitment process and the drop-out rates largely dictate the adequacy of the results. We are conducting a Phase III trial on persons with metabolic syndrome to test the hypothesis that comprehensive lifestyle changes and/or metformin treatment prevents age-related chronic diseases (the MeMeMe trial, EudraCT number: 2012-005427-32, also registered on ClinicalTrials.gov [NCT02960711]). Here, we briefly analyze and discuss the reasons that may lead to participants dropping out of trials. In our experience, participants may back out of a trial for different reasons. Drug-induced side effects are certainly the most compelling reason. But what are the other reasons, relating to the participants' perception of the progress of the trial, that lead them to withdraw after randomization? What about the time-dependent drop-out rate in primary prevention trials? The primary outcome of this analysis is the point of drop-out from the trial, defined as the time from the randomization date to the withdrawal date. Survival functions were non-parametrically estimated using the product-limit estimator. The curves were statistically compared using the log-rank test (P = 0.64, not significant). Researchers involved in primary prevention RCTs seem to have to deal with the paradox of the proverbial "short blanket syndrome". Recruiting only highly motivated candidates might be useful for the smooth progress of the trial, but it may lead to a very low enrollment rate. On the other hand, what about enrolling all eligible subjects without considering their motivation? This might boost the enrollment rate, but it can lead to biased results on account of large proportions of drop-outs. Our experience suggests that participants do not change their mind depending on the allocation group (intervention or control). There is no single answer to sort out the short blanket syndrome.
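
    A minimal sketch of the analysis described above, using the lifelines library on invented drop-out times: the product-limit (Kaplan-Meier) estimator for each allocation group and a log-rank comparison between groups.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)

# Hypothetical time-to-drop-out data (days from randomization to withdrawal);
# subjects still on trial at 730 days are right-censored.
t_int = np.minimum(rng.exponential(900, 150), 730)   # intervention arm
t_ctl = np.minimum(rng.exponential(950, 150), 730)   # control arm
e_int, e_ctl = t_int < 730, t_ctl < 730              # True = dropped out

kmf = KaplanMeierFitter()
kmf.fit(t_int, event_observed=e_int, label="intervention")
print(kmf.survival_function_.tail())  # product-limit estimate of retention

# Compare drop-out curves between allocation groups, as in the abstract.
res = logrank_test(t_int, t_ctl, event_observed_A=e_int, event_observed_B=e_ctl)
print(res.p_value)  # the trial reported P = 0.64 (no difference by group)
```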

  16. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.

    2014-12-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random-phase initial conditions. The data thus support the standard model of inflation, where random quantum fluctuations in the early universe produced Gaussian random-phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot-noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
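
    The Gaussian random-phase reference curve at the heart of this comparison has a simple closed form: the genus per unit volume at threshold ν (in units of the field's standard deviation) is g(ν) = A(1 - ν²)exp(-ν²/2), with the amplitude A set by the power spectrum. A minimal evaluation is sketched below; the shift and asymmetry fitting formula proposed in the paper is not reproduced.

```python
import numpy as np

def gaussian_genus(nu, amplitude=1.0):
    """Genus-per-unit-volume curve for a Gaussian random-phase field:
    g(nu) = A * (1 - nu**2) * exp(-nu**2 / 2),
    where nu is the density threshold in units of the field's sigma."""
    nu = np.asarray(nu, dtype=float)
    return amplitude * (1.0 - nu**2) * np.exp(-(nu**2) / 2.0)

nu = np.linspace(-3, 3, 13)
print(gaussian_genus(nu))  # positive (sponge-like) near nu = 0, negative for
                           # |nu| > 1, where isolated clusters/voids dominate
```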

  17. Beach erosion and nest site selection by the leatherback sea turtle Dermochelys coriacea (Testudines: Dermochelyidae) and implications for management practices at Playa Gandoca, Costa Rica.

    PubMed

    Spanier, Matthew J

    2010-12-01

    Leatherback sea turtles (Dermochelys coriacea) nest on dynamic, erosion-prone beaches. Erosive processes and resulting nest loss have long been presumed to be a hindrance to clutch survival. In order to better understand how leatherbacks cope with unstable nesting beaches, I investigated the role of beach erosion in leatherback nest site selection at Playa Gandoca, Costa Rica. I also examined the potential effect of nest relocation, a conservation strategy in place at Playa Gandoca to prevent nest loss to erosion, on the temperature of incubating clutches. I monitored changes in beach structure as a result of erosion at natural nest sites during the time the nest was laid, as well as in subsequent weeks. To investigate slope as a cue for nest site selection, I measured the slope of the beach where turtles ascended from the sea to nest, as well as the slopes at other random locations on the beach for comparison. I examined temperature differences between natural and relocated nest sites with thermocouples placed in the sand at depths typical of leatherback nests. Nests were distributed non-randomly in a clumped distribution along the length of the beach and were laid at locations that were not undergoing erosion. The slope at nest sites was significantly different from that at randomly chosen locations on the beach. The sand temperature at nest depths was significantly warmer at natural nest sites than at locations of relocated nests. The findings of this study suggest that leatherbacks actively select nest sites that are not undergoing erosive processes, with slope potentially being used as a cue for site selection. The relocation of nests appears to be inadvertently cooling the nest environment. Because leatherback clutches undergo temperature-dependent sex determination, the relocation of nests may be producing an unnatural male bias among hatchlings. The results of this study suggest that the necessity of relocation practices, largely in place to protect nests from erosion, should be reevaluated to ensure the proper conservation of this critically endangered species.

  18. Phage display of the serpin alpha-1 proteinase inhibitor randomized at consecutive residues in the reactive centre loop and biopanned with or without thrombin.

    PubMed

    Scott, Benjamin M; Matochko, Wadim L; Gierczak, Richard F; Bhakta, Varsha; Derda, Ratmir; Sheffield, William P

    2014-01-01

    In spite of the power of phage display technology to identify variant proteins with novel properties in large libraries, it has previously been applied to only one member of the serpin superfamily. Here we describe phage display of human alpha-1 proteinase inhibitor (API) in a T7 bacteriophage system. API M358R fused to the C-terminus of T7 capsid protein 10B was directly shown to form denaturation-resistant complexes with thrombin by electrophoresis and immunoblotting following exposure of intact phages to thrombin. We therefore developed a biopanning protocol in which thrombin-reactive phages were selected using biotinylated anti-thrombin antibodies and streptavidin-coated magnetic beads. A library consisting of displayed API randomized at residues 357 and 358 (P2-P1) yielded predominantly Pro-Arg at these positions after five rounds of thrombin selection; in contrast the same degree of mock selection yielded only non-functional variants. A more diverse library of API M358R randomized at residues 352-356 (P7-P3) was also probed, yielding numerous variants fitting a loose consensus of DLTVS as judged by sequencing of the inserts of plaque-purified phages. The thrombin-selected sequences were transferred en masse into bacterial expression plasmids, and lysates from individual colonies were screened for API-thrombin complexing. The most active candidates from this sixth round of screening contained DITMA and AAFVS at P7-P3 and inhibited thrombin 2.1-fold more rapidly than API M358R with no change in reaction stoichiometry. Deep sequencing using the Ion Torrent platform confirmed that over 800 sequences were significantly enriched in the thrombin-panned versus naïve phage display library, including some detected using the combined phage display/bacterial lysate screening approach. Our results show that API joins Plasminogen Activator Inhibitor-1 (PAI-1) as a serpin amenable to phage display and suggest the utility of this approach for the selection of "designer serpins" with novel reactivity and/or specificity.

  19. Phage Display of the Serpin Alpha-1 Proteinase Inhibitor Randomized at Consecutive Residues in the Reactive Centre Loop and Biopanned with or without Thrombin

    PubMed Central

    Scott, Benjamin M.; Matochko, Wadim L.; Gierczak, Richard F.; Bhakta, Varsha; Derda, Ratmir; Sheffield, William P.

    2014-01-01

    In spite of the power of phage display technology to identify variant proteins with novel properties in large libraries, it has previously been applied to only one member of the serpin superfamily. Here we describe phage display of human alpha-1 proteinase inhibitor (API) in a T7 bacteriophage system. API M358R fused to the C-terminus of T7 capsid protein 10B was directly shown to form denaturation-resistant complexes with thrombin by electrophoresis and immunoblotting following exposure of intact phages to thrombin. We therefore developed a biopanning protocol in which thrombin-reactive phages were selected using biotinylated anti-thrombin antibodies and streptavidin-coated magnetic beads. A library consisting of displayed API randomized at residues 357 and 358 (P2–P1) yielded predominantly Pro-Arg at these positions after five rounds of thrombin selection; in contrast the same degree of mock selection yielded only non-functional variants. A more diverse library of API M358R randomized at residues 352–356 (P7–P3) was also probed, yielding numerous variants fitting a loose consensus of DLTVS as judged by sequencing of the inserts of plaque-purified phages. The thrombin-selected sequences were transferred en masse into bacterial expression plasmids, and lysates from individual colonies were screened for API-thrombin complexing. The most active candidates from this sixth round of screening contained DITMA and AAFVS at P7–P3 and inhibited thrombin 2.1-fold more rapidly than API M358R with no change in reaction stoichiometry. Deep sequencing using the Ion Torrent platform confirmed that over 800 sequences were significantly enriched in the thrombin-panned versus naïve phage display library, including some detected using the combined phage display/bacterial lysate screening approach. Our results show that API joins Plasminogen Activator Inhibitor-1 (PAI-1) as a serpin amenable to phage display and suggest the utility of this approach for the selection of “designer serpins” with novel reactivity and/or specificity. PMID:24427287

  20. Effect of expanding medicaid for parents on children's health insurance coverage: lessons from the Oregon experiment.

    PubMed

    DeVoe, Jennifer E; Marino, Miguel; Angier, Heather; O'Malley, Jean P; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J; Bailey, Steffani R; Gallia, Charles; Gold, Rachel

    2015-01-01

    In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon's randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. The objective was to estimate the effect on a child's health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. This was a randomized natural experiment assessing the results of Oregon's 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent being randomly selected to apply for Medicaid on their child's Medicaid or Children's Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children's coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14,409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. Children's Medicaid or CHIP coverage was assessed monthly and in 6-month intervals relative to their parent's selection date. In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly, from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) for children whose parents were not selected to apply. Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after the parent's selection compared with children whose parents were not selected (adjusted odds ratio [AOR] = 1.18; 95% CI, 1.10-1.27). The effect remained significant during months 7 to 12 (AOR = 1.11; 95% CI, 1.03-1.19); months 13 to 18 showed a positive but nonsignificant effect (AOR = 1.07; 95% CI, 0.99-1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR = 2.37; 95% CI, 2.14-2.64). Children's odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents' access to Medicaid coverage and their children's coverage.
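
    A sketch of the kind of longitudinal model named above, using statsmodels' GEE with an exchangeable working correlation on an invented child-month panel; the variable names and effect size are hypothetical, chosen so the odds ratio lands near the reported AOR of 1.18.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical monthly panel: child coverage indicator by parent selection.
n_children, n_months = 500, 6
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), n_months),
    "month": np.tile(np.arange(n_months), n_children),
    "selected": np.repeat(rng.integers(0, 2, n_children), n_months),
})
logit = -0.5 + 0.17 * df["selected"]            # ~AOR 1.18 for selection
df["covered"] = (rng.random(len(df)) < 1 / (1 + np.exp(-logit))).astype(int)

# GEE with exchangeable correlation handles repeated child-level outcomes,
# mirroring the intent-to-treat longitudinal models in the abstract.
model = sm.GEE.from_formula(
    "covered ~ selected",
    groups="child_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
fit = model.fit()
print(np.exp(fit.params["selected"]))  # odds ratio for parent selection
```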

  1. On the growth rate of gallstones in the human gallbladder

    NASA Astrophysics Data System (ADS)

    Nudelman, I.

    1993-05-01

    The growth rate of a single, symmetrically oval-shaped gallbladder stone weighing 10.8 g was recorded over a period of six years before surgery and removal. The length of the stone was measured by ultrasonography, and the growth rate was found to be linear with time, at a value of 0.4 mm/year. A smaller stone growing in the wall of the gallbladder was detected only three years before removal and grew at a rate of ~1.33 mm/year. The morphology and metallic-ion chemical composition of the large stone and of a randomly selected small stone weighing about 1.1 g, extracted from another patient, were analyzed and compared. The large stone was found to contain lead in addition to calcium, whereas the small stone contained mainly calcium. It is possible that the lead causes a difference in mechanism between the growth of a single large gallstone and the growth of multiple small gallstones.

  2. Therapist facilitative interpersonal skills and training status: A randomized clinical trial on alliance and outcome.

    PubMed

    Anderson, Timothy; Crowley, Mary Ellen J; Himawan, Lina; Holmberg, Jennifer K; Uhlin, Brian D

    2016-09-01

    Therapist effects, independent of the treatment provided, have emerged as a contributor to psychotherapy outcomes. However, past research largely has not identified which therapist factors might be contributing to these effects, though research on psychotherapy implicates relational characteristics. The present Randomized Clinical Trial tested the efficacy of therapists who were selected by their facilitative interpersonal skills (FIS) and training status. Sixty-five clients were selected from 2713 undergraduates using a screening and clinical interview procedure. Twenty-three therapists met with 2 clients for 7 sessions and 20 participants served in a no-treatment control group. Outcome and alliance differences for Training Status were negligible. High FIS therapists had greater pre-post client outcome, and higher rates of change across sessions, than low FIS therapists. All clients treated by therapists improved more than the silent control, but effects were greater with high FIS than low FIS therapists. From the first session, high FIS therapists also had higher alliances than low FIS therapists as well as significant improvements on client-rated alliance. Results were consistent with the hypothesis that therapists' common relational skills are independent contributors to therapeutic alliance and outcome.

  3. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    NASA Astrophysics Data System (ADS)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the “derivation cohort” to develop the dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared. Among all the machine learning models, RT performed best in both the derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
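
    The comparison of MLR against a tree learner on an 80/20 derivation/validation split can be sketched in a few lines of scikit-learn; the predictors and dose-generating model below are invented stand-ins for the pharmacogenetic covariates used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)

# Hypothetical cohort: genotype (0/1/2 risk alleles), weight, hematocrit,
# and a nonlinear genotype-by-weight effect on the stable dose.
n = 1045
geno = rng.integers(0, 3, n)
weight = rng.normal(65, 10, n)
hct = rng.normal(38, 4, n)
dose = 1.5 + 0.8 * geno + 0.02 * weight * (geno > 0) + rng.normal(0, 0.4, n)

X = np.column_stack([geno, weight, hct])
X_tr, X_te, y_tr, y_te = train_test_split(X, dose, test_size=0.2, random_state=1)

# 80/20 derivation/validation split, as in the abstract.
for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))  # validation R^2
```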

  4. Metabolite and transcript markers for the prediction of potato drought tolerance.

    PubMed

    Sprenger, Heike; Erban, Alexander; Seddig, Sylvia; Rudack, Katharina; Thalhammer, Anja; Le, Mai Q; Walther, Dirk; Zuther, Ellen; Köhl, Karin I; Kopka, Joachim; Hincha, Dirk K

    2018-04-01

    Potato (Solanum tuberosum L.) is one of the most important food crops worldwide. Current potato varieties are highly susceptible to drought stress. In view of global climate change, selection of cultivars with improved drought tolerance and high yield potential is of paramount importance. Drought tolerance breeding of potato is currently based on direct selection according to yield and phenotypic traits and requires multiple trials under drought conditions. Marker-assisted selection (MAS) is cheaper, faster and reduces classification errors caused by noncontrolled environmental effects. We analysed 31 potato cultivars grown under optimal and reduced water supply in six independent field trials. Drought tolerance was determined as tuber starch yield. Leaf samples from young plants were screened for preselected transcript and nontargeted metabolite abundance using qRT-PCR and GC-MS profiling, respectively. Transcript marker candidates were selected from a published RNA-Seq data set. A Random Forest machine learning approach extracted metabolite and transcript markers for drought tolerance prediction with low error rates of 6% and 9%, respectively. Moreover, by combining transcript and metabolite markers, the prediction error was reduced to 4.3%. Feature selection from Random Forest models allowed model minimization, yielding a minimal combination of only 20 metabolite and transcript markers that were successfully tested for their reproducibility in 16 independent agronomic field trials. We demonstrate that a minimum combination of transcript and metabolite markers sampled at early cultivation stages predicts potato yield stability under drought largely independent of seasonal and regional agronomic conditions.
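
    The marker-minimization step can be illustrated with a generic Random Forest workflow: fit the forest, rank features by importance, and re-estimate cross-validated accuracy on the reduced panel. The data below are simulated; the paper's qRT-PCR and GC-MS features and its specific feature-selection procedure are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

# Hypothetical screen: 200 metabolite/transcript features for 60 cultivars,
# labeled tolerant (1) or sensitive (0); only 5 features are informative.
X = rng.normal(size=(60, 200))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, 60) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Feature selection from the fitted forest: keep the top-k importances and
# re-check cross-validated accuracy with the minimized marker panel.
top = np.argsort(rf.feature_importances_)[::-1][:20]
score_full = cross_val_score(rf, X, y, cv=5).mean()
score_panel = cross_val_score(
    RandomForestClassifier(n_estimators=500, random_state=0), X[:, top], y, cv=5
).mean()
print(score_full, score_panel)  # a small panel can approach the full model
```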

  5. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  6. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  7. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  8. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...

  9. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
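
    In code, this third level of selection is a single uniform draw over the prepared portions. The sketch below assumes, for illustration only, that the subsample was divided into twenty 100-gram portions; the regulation itself governs the actual portioning.

```python
import secrets

# Hypothetically, the subsample has been divided into twenty 100-gram
# portions per the procedures in the preceding section; select one portion
# at random for the simulated-leachate procedure.
portions = [f"portion_{i:02d}" for i in range(1, 21)]
chosen = secrets.choice(portions)  # a cryptographic RNG stands in for a
print(chosen)                      # random number generator or table
```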

  10. Models of Protocellular Structure, Function and Evolution

    NASA Technical Reports Server (NTRS)

    New, Michael H.; Pohorille, Andrew; Szostak, Jack W.; Keefe, Tony; Lanyi, Janos K.; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    In the absence of any record of protocells, the most direct way to test our understanding of the origin of cellular life is to construct laboratory models that capture important features of protocellular systems. Such efforts are currently underway in a collaborative project between NASA-Ames, Harvard Medical School and the University of California. They are accompanied by computational studies aimed at explaining the self-organization of simple molecules into ordered structures. The centerpiece of this project is a method for the in vitro evolution of protein enzymes toward arbitrary catalytic targets. A similar approach has already been developed for nucleic acids, in which a small number of functional molecules are selected from a large, random population of candidates. The selected molecules are next vastly multiplied using the polymerase chain reaction.

  11. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    NASA Astrophysics Data System (ADS)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the locating and delineating of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
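
    One common way to realize a weighted random forest for this kind of imbalance is class weighting, shown in the scikit-learn sketch below on simulated depression features; the 11 predictors and the ~5% sinkhole rate are invented stand-ins, and the study's exact weighting scheme may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical LiDAR depressions: 11 geometric/contextual predictors,
# with roughly 5% true sinkholes (the imbalance the abstract weights against).
X = rng.normal(size=(5000, 11))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 5000) > 2.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Compare an unweighted forest against one weighted toward the rare class.
for weights in (None, "balanced"):
    rf = RandomForestClassifier(
        n_estimators=300, class_weight=weights, random_state=0
    ).fit(X_tr, y_tr)
    print(weights, balanced_accuracy_score(y_te, rf.predict(X_te)))
```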

  12. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
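
    Recommendations (1) and (5) above amount to making the schedule reproducible and reviewable. A minimal sketch of a seeded permuted-block schedule follows; the block size, arm labels, and seed are hypothetical.

```python
import random

def blocked_schedule(n_blocks, block_size=4, arms=("A", "B"), seed=20100601):
    """Permuted-block schedule: each block holds equal numbers of each arm.

    A fixed seed makes the schedule reproducible, so it can be reviewed
    before the trial starts and regenerated afterwards to verify that
    treatment allocation proceeded as specified.
    """
    assert block_size % len(arms) == 0
    rng = random.Random(seed)          # dedicated generator, not on-demand
    schedule = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

print(blocked_schedule(3))  # e.g. ['B', 'A', 'A', 'B', ...], balanced per block
```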

  13. Peculiarities of the statistics of spectrally selected fluorescence radiation in laser-pumped dye-doped random media

    NASA Astrophysics Data System (ADS)

    Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.

    2018-04-01

    We consider the practical realization of a new optical probing method for random media, defined as reference-free path-length interferometry with intensity-moment analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the probability density of the path difference for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for a random medium was reconstructed.

  14. Selective investment promotes cooperation in public goods game

    NASA Astrophysics Data System (ADS)

    Li, Jing; Wu, Te; Zeng, Gang; Wang, Long

    2012-08-01

    Most previous investigations of the spatial Public Goods Game assume that individuals treat neighbors equivalently, which is in sharp contrast with realistic situations, where bias is ubiquitous. We construct a model to study how a selective investment mechanism affects the evolution of cooperation. Cooperators selectively contribute to just a fraction of their neighbors, and the investment network can adapt according to the interaction results. On selecting investees, three patterns are considered. In the random pattern, cooperators choose their investees among their neighbors equiprobably. In the social-preference pattern, cooperators tend to invest in individuals possessing large social ties. In the wealth-preference pattern, cooperators are more likely to invest in neighbors with higher payoffs. Our results show the robustness of the selective investment mechanism, which boosts the emergence and maintenance of cooperation. Cooperation is somewhat hampered under the latter two patterns, and we show that the anti-social-preference and anti-wealth-preference patterns of selecting investees can accelerate cooperation to some extent. Furthermore, the theoretical analysis of our mechanism on double-star networks coincides with the simulation results. We hope our findings shed light on a better understanding of the emergence of cooperation among adaptive populations.

  15. Decoys Selection in Benchmarking Datasets: Overview and Perspectives

    PubMed Central

    Réau, Manon; Langenfeld, Florent; Zagury, Jean-François; Lagarde, Nathalie; Montes, Matthieu

    2018-01-01

    Virtual Screening (VS) is designed to prospectively help identifying potential hits, i.e., compounds capable of interacting with a given target and potentially modulate its activity, out of large compound collections. Among the variety of methodologies, it is crucial to select the protocol that is the most adapted to the query/target system under study and that yields the most reliable output. To this aim, the performance of VS methods is commonly evaluated and compared by computing their ability to retrieve active compounds in benchmarking datasets. The benchmarking datasets contain a subset of known active compounds together with a subset of decoys, i.e., assumed non-active molecules. The composition of both the active and the decoy compounds subsets is critical to limit the biases in the evaluation of the VS methods. In this review, we focus on the selection of decoy compounds that has considerably changed over the years, from randomly selected compounds to highly customized or experimentally validated negative compounds. We first outline the evolution of decoys selection in benchmarking databases as well as current benchmarking databases that tend to minimize the introduction of biases, and secondly, we propose recommendations for the selection and the design of benchmarking datasets. PMID:29416509

  16. Effects of feature-selective and spatial attention at different stages of visual processing.

    PubMed

    Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M

    2011-01-01

    We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.

  17. Open quantum random walks: Bistability on pure states and ballistically induced diffusion

    NASA Astrophysics Data System (ADS)

    Bauer, Michel; Bernard, Denis; Tilloy, Antoine

    2013-12-01

    Open quantum random walks (OQRWs) deal with quantum random motions on a line for systems with internal and orbital degrees of freedom. The internal system behaves as a quantum random gyroscope coding for the direction of the orbital moves. We reveal the existence of a transition, depending on OQRW moduli, in the internal system behaviors from simple oscillations to random flips between two unstable pure states. This induces a transition in the orbital motions from the usual diffusion to ballistically induced diffusion with a large mean free path and large effective diffusion constant at large times. We also show that mixed states of the internal system are converted into random pure states during the process. We touch upon possible experimental realizations.
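
    A single quantum-trajectory unraveling of an OQRW makes the internal-gyroscope picture concrete: the internal density matrix is updated by one of two Kraus maps, whose probabilities also decide the left/right orbital move. The Kraus matrices below are an arbitrary valid choice for illustration, not the moduli studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Kraus operators gating left/right moves; they must satisfy the
# completeness relation B-^dag B- + B+^dag B+ = I.
theta = 0.3
B_plus = np.array([[np.cos(theta), 0.0], [0.0, np.sin(theta)]])
B_minus = np.array([[0.0, np.cos(theta)], [np.sin(theta), 0.0]])
assert np.allclose(B_plus.T @ B_plus + B_minus.T @ B_minus, np.eye(2))

rho = np.array([[1.0, 0.0], [0.0, 0.0]])  # start in a pure internal state
position = 0
for _ in range(1000):
    p_plus = np.trace(B_plus @ rho @ B_plus.T).real
    if rng.random() < p_plus:   # move right with probability tr(B+ rho B+^dag)
        rho = B_plus @ rho @ B_plus.T / p_plus
        position += 1
    else:                       # otherwise move left, renormalizing rho
        rho = B_minus @ rho @ B_minus.T / (1.0 - p_plus)
        position -= 1
print(position, np.round(rho, 3))  # orbital endpoint and internal state
```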

  18. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    PubMed

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced-data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results obtained on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced by this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores surpass those of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
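
    A rough sketch of the oversample-then-forest pipeline follows, using the imbalanced-learn package; plain SMOTE stands in for the paper's CURE-SMOTE (the CURE prototype-selection step is not reproduced), and the data are simulated.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical imbalanced problem (roughly 1:9 minority:majority).
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(0, 1, 2000) > 1.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training fold, then fit the forest.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

rf_plain = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
rf_smote = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
for name, rf in (("plain", rf_plain), ("smote", rf_smote)):
    print(name, f1_score(y_te, rf.predict(X_te)))  # minority-class F-value
```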

  19. A Robust and Versatile Method of Combinatorial Chemical Synthesis of Gene Libraries via Hierarchical Assembly of Partially Randomized Modules

    PubMed Central

    Popova, Blagovesta; Schubert, Steffen; Bulla, Ingo; Buchwald, Daniela; Kramer, Wilfried

    2015-01-01

    A major challenge in gene library generation is to guarantee a large functional size and diversity that significantly increases the chances of selecting different functional protein variants. The use of trinucleotide mixtures for controlled randomization results in superior library diversity and offers the ability to specify the type and distribution of the amino acids at each position. Here we describe the generation of a high-diversity gene library using tHisF of the hyperthermophile Thermotoga maritima as a scaffold. Combining various rational criteria with contingency, we targeted 26 selected codons of the thisF gene sequence for randomization at a controlled level. We have developed a novel method of creating full-length gene libraries by combinatorial assembly of smaller sub-libraries. Full-length libraries of high diversity can easily be assembled on demand from smaller and much less diverse sub-libraries, which circumvents the notoriously troublesome long-term archiving and repeated proliferation of high-diversity ensembles of phages or plasmids. We developed a generally applicable software tool for sequence analysis of mutated gene sequences that provides efficient assistance for analysis of library diversity. Finally, the practical utility of the library was demonstrated in principle by assessment of the conformational stability of library members and by isolating protein variants with HisF activity from it. Our approach integrates a number of features of synthetic nucleic acid chemistry, biochemistry and molecular genetics into a coherent, flexible and robust method of combinatorial gene synthesis. PMID:26355961

  20. A Robust and Versatile Method of Combinatorial Chemical Synthesis of Gene Libraries via Hierarchical Assembly of Partially Randomized Modules.

    PubMed

    Popova, Blagovesta; Schubert, Steffen; Bulla, Ingo; Buchwald, Daniela; Kramer, Wilfried

    2015-01-01

    A major challenge in gene library generation is to guarantee a large functional size and diversity that significantly increases the chances of selecting different functional protein variants. The use of trinucleotide mixtures for controlled randomization results in superior library diversity and offers the ability to specify the type and distribution of the amino acids at each position. Here we describe the generation of a high-diversity gene library using tHisF of the hyperthermophile Thermotoga maritima as a scaffold. Combining various rational criteria with contingency, we targeted 26 selected codons of the thisF gene sequence for randomization at a controlled level. We have developed a novel method of creating full-length gene libraries by combinatorial assembly of smaller sub-libraries. Full-length libraries of high diversity can easily be assembled on demand from smaller and much less diverse sub-libraries, which circumvents the notoriously troublesome long-term archiving and repeated proliferation of high-diversity ensembles of phages or plasmids. We developed a generally applicable software tool for sequence analysis of mutated gene sequences that provides efficient assistance for analysis of library diversity. Finally, the practical utility of the library was demonstrated in principle by assessment of the conformational stability of library members and by isolating protein variants with HisF activity from it. Our approach integrates a number of features of synthetic nucleic acid chemistry, biochemistry and molecular genetics into a coherent, flexible and robust method of combinatorial gene synthesis.

  1. Effect of air-supported, continuous, postural oscillation on the risk of early ICU pneumonia in nontraumatic critical illness.

    PubMed

    deBoisblanc, B P; Castro, M; Everret, B; Grender, J; Walker, C D; Summer, W R

    1993-05-01

    We hypothesized that continuous, automatic turning utilizing a patient-friendly, low air loss surface would reduce the incidence of early ICU pneumonia in selected groups of critically ill medical patients. Prospective, randomized, controlled clinical trial. Medical ICU of a large community teaching hospital. One hundred twenty-four critically ill new admissions to the medical ICU at Charity Hospital in New Orleans. Patients were prospectively randomized within one of five diagnosis-related groups (DRG)--sepsis (SEPSIS), obstructive airways disease (OAD), metabolic coma, drug overdose, and stroke--to either routine turning on a standard ICU bed or to continuous turning on an oscillating air-flotation bed for a total of five days. Patients were monitored daily during the treatment period for the development of pneumonia. The incidence of pneumonia during the first five ICU days was 22 percent in patients randomized to the standard ICU bed vs 9 percent for the oscillating bed (p = 0.05). This treatment effect was greatest in the SEPSIS DRG (23 percent vs 3 percent, p = 0.04). Continuous automatic oscillation did not significantly change the number of days of required mechanical ventilation, ICU stay, hospital stay, or hospital mortality overall or within any of the DRGs. We conclude that air-supported automated turning during the first five ICU days reduces the incidence of early ICU pneumonia in selected DRGs; however, this form of automated turning does not reduce other measured clinical outcome parameters.

  2. Multi-class computational evolution: development, benchmark evaluation and application to RNA-Seq biomarker discovery.

    PubMed

    Crabtree, Nathaniel M; Moore, Jason H; Bowyer, John F; George, Nysia I

    2017-01-01

    A computational evolution system (CES) is a knowledge discovery engine that can identify subtle, synergistic relationships in large datasets. Pareto optimization allows CESs to balance accuracy with model complexity when evolving classifiers. Using Pareto optimization, a CES is able to identify a very small number of features while maintaining high classification accuracy. A CES can be designed for various types of data, and the user can exploit expert knowledge about the classification problem in order to improve discrimination between classes. These characteristics give CES an advantage over other classification and feature selection algorithms, particularly when the goal is to identify a small number of highly relevant, non-redundant biomarkers. Previously, CESs have been developed only for binary class datasets. In this study, we developed a multi-class CES. The multi-class CES was compared to three common feature selection and classification algorithms: support vector machine (SVM), random k-nearest neighbor (RKNN), and random forest (RF). The algorithms were evaluated on three distinct multi-class RNA sequencing datasets. The comparison criteria were run-time, classification accuracy, number of selected features, and stability of selected feature set (as measured by the Tanimoto distance). The performance of each algorithm was data-dependent. CES performed best on the dataset with the smallest sample size, indicating that CES has a unique advantage since the accuracy of most classification methods suffer when sample size is small. The multi-class extension of CES increases the appeal of its application to complex, multi-class datasets in order to identify important biomarkers and features.
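
    The stability criterion named above is simple to state in code: the Tanimoto distance between the feature sets selected on different runs, 1 - |A∩B|/|A∪B|. The feature names below are hypothetical.

```python
def tanimoto_distance(set_a, set_b):
    """Distance between two selected-feature sets: 1 - |A & B| / |A | B|.

    Used here as the stability criterion named in the abstract: rerun the
    feature selector on resampled data and compare the chosen sets.
    """
    a, b = set(set_a), set(set_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

run1 = {"gene7", "gene12", "gene40"}   # hypothetical selections from two runs
run2 = {"gene7", "gene12", "gene88"}
print(tanimoto_distance(run1, run2))   # 0.5 -> moderately stable selection
```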

  3. Mendelian randomization with fine-mapped genetic data: Choosing from large numbers of correlated instrumental variables.

    PubMed

    Burgess, Stephen; Zuber, Verena; Valdes-Marquez, Elsa; Sun, Benjamin B; Hopewell, Jemma C

    2017-12-01

    Mendelian randomization uses genetic variants to make causal inferences about the effect of a risk factor on an outcome. With fine-mapped genetic data, there may be hundreds of genetic variants in a single gene region any of which could be used to assess this causal relationship. However, using too many genetic variants in the analysis can lead to spurious estimates and inflated Type 1 error rates. But if only a few genetic variants are used, then the majority of the data is ignored and estimates are highly sensitive to the particular choice of variants. We propose an approach based on summarized data only (genetic association and correlation estimates) that uses principal components analysis to form instruments. This approach has desirable theoretical properties: it takes the totality of data into account and does not suffer from numerical instabilities. It also has good properties in simulation studies: it is not particularly sensitive to varying the genetic variants included in the analysis or the genetic correlation matrix, and it does not have greatly inflated Type 1 error rates. Overall, the method gives estimates that are less precise than those from variable selection approaches (such as using a conditional analysis or pruning approach to select variants), but are more robust to seemingly arbitrary choices in the variable selection step. Methods are illustrated by an example using genetic associations with testosterone for 320 genetic variants to assess the effect of sex hormone related pathways on coronary artery disease risk, in which variable selection approaches give inconsistent inferences.
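
    A simplified sketch of the idea follows: take principal components of the genetic correlation matrix, project the summary associations onto the retained components, and regress outcome associations on exposure associations. The published method additionally weights by the precisions of the transformed associations; this unweighted version, with a simulated AR(1) LD structure, is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical summary statistics for 100 correlated variants in one region:
# beta_x, beta_y are associations with exposure and outcome, R the LD matrix.
m, true_effect = 100, 0.4
R = 0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))  # AR(1) LD
beta_x = rng.multivariate_normal(np.full(m, 0.1), 0.0005 * R)
beta_y = true_effect * beta_x + rng.multivariate_normal(np.zeros(m), 0.0005 * R)

# Form instruments from principal components of the correlation matrix,
# keeping components that explain 99% of the variation.
vals, vecs = np.linalg.eigh(R)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), 0.99)) + 1
W = vecs[:, :k]                      # m x k loading matrix

# Project the summary associations onto the components and estimate the
# causal effect by (unweighted) least squares on the transformed data.
bx_pc, by_pc = W.T @ beta_x, W.T @ beta_y
estimate = (bx_pc @ by_pc) / (bx_pc @ bx_pc)
print(k, estimate)                   # components retained, causal estimate
```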

  4. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    PubMed

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
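
    The error-injection step described above is straightforward to reproduce. A minimal sketch (pandas/NumPy; the column names and the resample-from-the-same-column rule are illustrative choices, not the BRHP programmes themselves):

        # Sketch: inject random errors into a fraction of records to test the
        # robustness of downstream estimates.
        import numpy as np
        import pandas as pd

        def inject_errors(df, column, error_rate, rng):
            """Replace `error_rate` of the values in `column` with values drawn
            at random from the same column (preserves the marginal distribution
            while breaking record-level links)."""
            out = df.copy()
            n_errors = int(round(error_rate * len(out)))
            rows = rng.choice(len(out), size=n_errors, replace=False)
            donors = rng.choice(out[column].to_numpy(), size=n_errors)
            out.iloc[rows, out.columns.get_loc(column)] = donors
            return out

        rng = np.random.default_rng(0)
        df = pd.DataFrame({"sex": rng.choice(["M", "F"], 1000),
                           "age": rng.integers(0, 90, 1000)})
        noisy = inject_errors(df, "sex", error_rate=0.20, rng=rng)
        # Compare estimates computed on df vs noisy to gauge robustness.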

  5. Sentinel node status prediction by four statistical models: results from a large bi-institutional series (n = 1132).

    PubMed

    Mocellin, Simone; Thompson, John F; Pasquali, Sandro; Montesco, Maria C; Pilati, Pierluigi; Nitti, Donato; Saw, Robyn P; Scolyer, Richard A; Stretch, Jonathan R; Rossi, Carlo R

    2009-12-01

    To improve selection for sentinel node (SN) biopsy (SNB) in patients with cutaneous melanoma using statistical models predicting SN status. About 80% of patients currently undergoing SNB are node-negative. In the absence of conclusive evidence of an SNB-associated survival benefit, these patients may be over-treated. Here, we tested the efficiency of 4 different models in predicting SN status. The clinicopathologic data (age, gender, tumor thickness, Clark level, regression, ulceration, histologic subtype, and mitotic index) of 1132 melanoma patients who had undergone SNB at institutions in Italy and Australia were analyzed. Logistic regression, classification tree, random forest, and support vector machine models were fitted to the data. The predictive models were built with the aim of maximizing the negative predictive value (NPV) and reducing the rate of SNB procedures through minimizing the error rate. After cross-validation, the logistic regression, classification tree, random forest, and support vector machine predictive models obtained clinically relevant NPVs (93.6%, 94.0%, 97.1%, and 93.0%, respectively), SNB reductions (27.5%, 29.8%, 18.2%, and 30.1%, respectively), and error rates (1.8%, 1.8%, 0.5%, and 2.1%, respectively). Using commonly available clinicopathologic variables, predictive models can preoperatively identify a proportion of patients (approximately 25%) who might be spared SNB, with an acceptable (1%-2%) error. If validated in large prospective series, these models might be implemented in the clinical setting for improved patient selection, which ultimately would lead to better quality of life for patients and optimization of resource allocation for the health care system.
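
    For readers wishing to reproduce the evaluation logic, the sketch below (Python/scikit-learn, on synthetic data; NPV is not a built-in scikit-learn metric, so it is computed by hand) estimates cross-validated NPV and the implied SNB reduction for one model family:

        # Sketch: cross-validated NPV and SNB-reduction rate for a classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_predict

        def negative_predictive_value(y_true, y_pred):
            pred_neg = (y_pred == 0)
            if pred_neg.sum() == 0:
                return float("nan")
            return np.mean(y_true[pred_neg] == 0)

        # X: clinicopathologic features (age, thickness, ulceration, ...);
        # y: 1 = positive sentinel node, 0 = negative. Synthetic stand-ins here.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 8))
        y = (X[:, 0] + rng.normal(scale=2.0, size=500) > 1.5).astype(int)

        y_pred = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=5)
        npv = negative_predictive_value(y, y_pred)
        spared = np.mean(y_pred == 0)   # fraction who would be spared SNB
        print(f"NPV = {npv:.3f}, SNB reduction = {spared:.1%}")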

  6. Identifying Human Papillomavirus Vaccination Practices Among Primary Care Providers of Minority, Low-Income and Immigrant Patient Populations

    PubMed Central

    Bruno, Denise M.; Wilson, Tracey E.; Gany, Francesca; Aragones, Abraham

    2014-01-01

    Objective Minority populations in the United States are disproportionally affected by Human Papillomavirus (HPV) infection and HPV-related cancer. We sought to understand physician practices, knowledge and beliefs that affect utilization of the HPV vaccine in primary care settings serving large minority populations in areas with increased rates of HPV-related cancer. Study Design Cross-sectional survey of randomly selected primary care providers, including pediatricians, family practice physicians and internists, serving large minority populations in Brooklyn, N.Y. and in areas with higher than average cervical cancer rates. Results Of 156 physicians randomly selected, 121 eligible providers responded to the survey; 64% were pediatricians, 19% were internists and 17% were family practitioners. Thirty-four percent of respondents reported that they routinely offered HPV vaccine to their eligible patients. Seventy percent of physicians reported that the lack of preventive care visits for patients in the eligible age group limited their ability to recommend the HPV vaccine and 70% of those who reported this barrier do not routinely recommend HPV vaccine. The lack of time to educate parents about the HPV vaccine and cost of the vaccine to their patients were two commonly reported barriers that affected whether providers offered the vaccine. Conclusions Our study found that the majority of providers serving the highest risk populations for HPV infection and HPV-related cancers are not routinely recommending the HPV vaccine to their patients. Reasons for providers' failure to recommend the HPV vaccine routinely are identified and possible areas for targeted interventions to increase HPV vaccination rates are discussed. PMID:24886959

  7. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic.

    PubMed

    Leyrat, Clémence; Caille, Agnès; Foucher, Yohann; Giraudeau, Bruno

    2016-01-22

    Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, diagnosing such imbalance is essential so that statistical analyses can be adjusted if required. We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small compared with the total number of baseline covariates (≥40% of unbalanced covariates). We also provide a strategy for pre-selection of the covariates to include in the PS model to enhance imbalance detection. The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
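
    A minimal sketch of the diagnostic (Python/scikit-learn, on simulated data): fit a propensity score model of arm membership on baseline covariates and read off its c-statistic; values near 0.5 indicate balance, values well above 0.5 flag global imbalance:

        # Sketch: c-statistic of a propensity score model as a balance diagnostic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(1000, 10))           # baseline covariates
        arm = rng.integers(0, 2, 1000)            # randomized arm
        X[arm == 1, 0] += 0.4                     # simulate imbalance on one covariate

        ps_model = LogisticRegression(max_iter=1000).fit(X, arm)
        propensity = ps_model.predict_proba(X)[:, 1]
        print("c-statistic:", roc_auc_score(arm, propensity))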

  8. Impact of a school-based dating violence prevention program among Latino teens: randomized controlled effectiveness trial.

    PubMed

    Jaycox, Lisa H; McCaffrey, Daniel; Eiseman, Beth; Aronoff, Jessica; Shelley, Gene A; Collins, Rebecca L; Marshall, Grant N

    2006-11-01

    Given the high rate of dating violence among teens and the associated deleterious outcomes, the need for effective prevention and early intervention programs is clear. Break the Cycle's Ending Violence curriculum, a three-class-session prevention program focused on legal issues, is evaluated here for its impact on Latino/a youth. Tracks within large urban high schools that had at least 80% Latino/a students were randomized to immediate or delayed curriculum. Classrooms were randomly selected within tracks, and individual student outcomes were assessed pre- and postintervention and six months later. Students in intervention classrooms showed improved knowledge, less acceptance of female-on-male aggression, and enhanced perception of the helpfulness and likelihood of seeking assistance from a number of sources immediately after the program. Improved knowledge and perceived helpfulness of an attorney were maintained six months later. There were no differences in recent abusive/fearful dating experiences or violence victimization or perpetration. The Ending Violence curriculum has an impact on teen norms, knowledge, and help-seeking proclivities that may aid in early intervention for dating violence among Latino/a students.

  9. Partial breast radiation for early-stage breast cancer.

    PubMed

    McCormick, Beryl

    2012-02-01

    This review provides an update on the current status of partial breast irradiation (PBI) for women presenting with early-stage breast cancer, as an alternative radiation technique to fractionated whole breast radiation following conservation surgery. As more women are asking for and receiving this treatment, both on and off protocols, understanding recent additions to the literature is important for physicians caring for this patient population. Newly published retrospective studies with follow-up times out to 10 years, and the status of recently completed and still-open large prospective phase III trials, are covered, with emphasis on unexpected side effects reported and some hypothesis-generating radiobiology observations. A recent consensus treatment guideline for PBI use is also discussed. Selected retrospective studies continue to report outcomes matching those achieved with whole breast radiation; however, results from large prospective randomized trials comparing PBI to whole breast radiation have been reported only with short follow-up times or, in two studies, are still pending. A recent consensus guideline is useful at present in selecting patients for discussion of this treatment.

  10. THE SELECTION OF A NATIONAL RANDOM SAMPLE OF TEACHERS FOR EXPERIMENTAL CURRICULUM EVALUATION.

    ERIC Educational Resources Information Center

    WELCH, WAYNE W.; AND OTHERS

    MEMBERS OF THE EVALUATION SECTION OF HARVARD PROJECT PHYSICS, DESCRIBING WHAT IS SAID TO BE THE FIRST ATTEMPT TO SELECT A NATIONAL RANDOM SAMPLE OF (HIGH SCHOOL PHYSICS) TEACHERS, LIST THE STEPS AS (1) PURCHASE OF A LIST OF PHYSICS TEACHERS FROM THE NATIONAL SCIENCE TEACHERS ASSOCIATION (MOST COMPLETE AVAILABLE), (2) SELECTION OF 136 NAMES BY A…

  11. Lessons Learned from Large-Scale Randomized Experiments

    ERIC Educational Resources Information Center

    Slavin, Robert E.; Cheung, Alan C. K.

    2017-01-01

    Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and…

  12. Unbiased feature selection in learning random forests for high-dimensional data.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi

    2015-01-01

    Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which gives RFs poor accuracy on high-dimensional data. In addition, the feature selection process in RFs is biased in favor of multivalued features. To debias feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees while reducing dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperform existing random forest methods in both accuracy and AUC.
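
    The two core ideas, p-value screening and weighted feature sampling, can be sketched compactly. The following is an illustration of the general approach (Python; the one-way ANOVA screen and log-p weights are stand-ins, not necessarily the statistical measures used by xRF):

        # Sketch: screen features by univariate p-value, then sample tree
        # candidates with informativeness-proportional weights.
        import numpy as np
        from scipy.stats import f_oneway

        def screen_and_weight(X, y, alpha=0.05):
            pvals = np.array([
                f_oneway(*(X[y == c, j] for c in np.unique(y))).pvalue
                for j in range(X.shape[1])
            ])
            keep = np.where(pvals < alpha)[0]                       # informative features
            weights = -np.log(np.clip(pvals[keep], 1e-300, None))   # stronger = heavier
            return keep, weights / weights.sum()

        def sample_features(keep, weights, m, rng):
            """Draw m candidate features for one tree, weighted by informativeness."""
            return rng.choice(keep, size=min(m, len(keep)), replace=False, p=weights)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 50))
        y = rng.integers(0, 3, 100)
        X[:, 0] += y                       # make one feature informative
        keep, w = screen_and_weight(X, y)
        print(sample_features(keep, w, m=5, rng=rng))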

  13. Occurrence and distribution of methyl tert-butyl ether and other volatile organic compounds in drinking water in the Northeast and Mid-Atlantic regions of the United States, 1993-98

    USGS Publications Warehouse

    Grady, S.J.; Casey, G.D.

    2001-01-01

    Data on volatile organic compounds (VOCs) in drinking water supplied by 2,110 randomly selected community water systems (CWSs) in 12 Northeast and Mid-Atlantic States indicate 64 VOC analytes were detected at least once during 1993-98. Selection of the 2,110 CWSs inventoried for this study targeted 20 percent of the 10,479 active CWSs in the region and represented a random subset of the total distribution by State, source of water, and size of system. The data include 21,635 analyses of drinking water collected for compliance monitoring under the Safe Drinking Water Act; the data mostly represent finished drinking water collected at the point of entry to, or at more distal locations within, each CWS's distribution system following any water-treatment processes. VOC detections were more common in drinking water supplied by large systems (serving more than 3,300 people) that tap surface-water sources or both surface- and ground-water sources than in small systems supplied exclusively by ground-water sources. Trihalomethane (THM) compounds, which are potentially formed during the process of disinfecting drinking water with chlorine, were detected in 45 percent of the randomly selected CWSs. Chloroform was the most frequently detected THM, reported in 39 percent of the CWSs. The gasoline additive methyl tert-butyl ether (MTBE) was the most frequently detected VOC in drinking water after the THMs. MTBE was detected in 8.9 percent of the 1,194 randomly selected CWSs that analyzed samples for MTBE at any reporting level, and it was detected in 7.8 percent of the 1,074 CWSs that provided MTBE data at the 1.0-µg/L (microgram per liter) reporting level. As with other VOCs reported in drinking water, most MTBE concentrations were less than 5.0 µg/L, and less than 1 percent of CWSs reported MTBE concentrations at or above the 20.0-µg/L lower limit recommended by the U.S. Environmental Protection Agency's Drinking-Water Advisory. The frequency of MTBE detections in drinking water is significantly related to high-MTBE-use patterns. Detections are five times more likely in areas where MTBE is or has been used in gasoline at greater than 5 percent by volume as part of the oxygenated or reformulated (OXY/RFG) fuels program. Detection frequencies of the individual gasoline compounds (benzene, toluene, ethylbenzene, and xylenes (BTEX)) were mostly less than 3 percent of the randomly selected CWSs, but collectively, BTEX compounds were detected in 8.4 percent of CWSs. BTEX concentrations also were low, and just three drinking-water samples contained BTEX at concentrations exceeding 20 µg/L. Co-occurrence of MTBE and BTEX was rare, and only 0.8 percent of CWSs reported simultaneous detections of MTBE and BTEX compounds. Low concentrations and co-occurrence of MTBE and BTEX indicate most gasoline contaminants in drinking water probably represent nonpoint sources. Solvents were frequently detected in drinking water in the 12-State area. One or more of 27 individual solvent VOCs were detected at any reporting level in 3,080 drinking-water samples from 304 randomly selected CWSs (14 percent) and in 206 CWSs (9.8 percent) at concentrations at or above 1.0 µg/L. High co-occurrence among solvents probably reflects common sources and the presence of transformation by-products. Other VOCs were relatively rarely detected in drinking water in the 12-State area. Six percent (127) of the 2,110 randomly selected CWSs reported concentrations of 16 VOCs at or above drinking-water criteria. The 127 CWSs collectively serve 2.6 million people. The occurrence of VOCs in drinking water was significantly associated (p<0.0001) with high population-density urban areas. New Jersey, Massachusetts, and Rhode Island, States with substantial urbanization and high population density, had the highest frequency of VOC detections among the 12 States. More than two-thirds of the randomly selected CWSs in New Jersey reported detecting VOC concentrations in drinking water at or above 1

  14. High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes

    NASA Astrophysics Data System (ADS)

    Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew

    Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry had precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes were investigated for the isolation of n-butanol from dilute aqueous solution (a model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.

  15. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  17. Frequency regularities of acoustic modes and multi-colour mode identification in rapidly rotating stars

    NASA Astrophysics Data System (ADS)

    Reese, D. R.; Lignières, F.; Ballot, J.; Dupret, M.-A.; Barban, C.; van't Veer-Menneret, C.; MacGregor, K. B.

    2017-05-01

    Context. Mode identification has remained a major obstacle in the interpretation of pulsation spectra in rapidly rotating stars. This has motivated recent work on calculating realistic multi-colour mode visibilities in this type of star. Aims: We would like to test mode identification methods and seismic diagnostics in rapidly rotating stars, using oscillation spectra that are based on these new theoretical predictions. Methods: We investigate the auto-correlation function and Fourier transform of theoretically calculated frequency spectra, in which modes are selected according to their visibilities. Given that intrinsic mode amplitudes are determined by non-linear saturation and cannot currently be theoretically predicted, we experimented with various ad-hoc prescriptions for setting the mode amplitudes, including using random values. Furthermore, we analyse the ratios between mode amplitudes observed in different photometric bands to see to what extent they can identify modes. Results: When non-random intrinsic mode amplitudes are used, our results show that it is possible to extract a mean value for the large frequency separation or half its value and, sometimes, twice the rotation rate, from the auto-correlation of the frequency spectra. Furthermore, the Fourier transforms are mostly sensitive to the large frequency separation or half its value. The combination of the two methods may therefore measure and distinguish the two types of separations. When the intrinsic mode amplitudes include random factors, which seems more representative of real stars, the results are far less favourable. It is only when the large separation or half its value coincides with twice the rotation rate that it might be possible to detect the signature of a frequency regularity. We also find that amplitude ratios are a good way of grouping together modes with similar characteristics. By analysing the frequencies of these groups, it is possible to constrain mode identification, as well as determine the large frequency separation and the rotation rate.
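
    A toy version of the auto-correlation diagnostic (NumPy; the synthetic spectrum of l = 0 and l = 1 modes is illustrative) shows why the method recovers the large separation or half its value:

        # Sketch: recover a frequency regularity from a synthetic spectrum
        # via autocorrelation.
        import numpy as np

        dnu = 50.0                                   # true large separation (µHz)
        # l = 0 and l = 1 modes: two interleaved combs offset by dnu/2.
        freqs = np.array([n * dnu + ell * dnu / 2.0
                          for n in range(10, 20) for ell in (0, 1)])

        grid = np.arange(450.0, 1050.0, 0.5)         # frequency grid (µHz)
        spec = np.zeros_like(grid)
        spec[np.searchsorted(grid, freqs)] = 1.0     # unit-amplitude peaks

        ac = np.correlate(spec, spec, mode="full")[spec.size - 1:]
        lags = np.arange(ac.size) * 0.5
        best = lags[10:][np.argmax(ac[10:])]         # skip the zero-lag peak
        print(best)   # ≈ 25 µHz = dnu/2, since l = 1 modes fall midway between l = 0 modes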

  18. A Case for Less Intensive Blood Pressure Control: It Matters to Achieve Target Blood Pressure Early and Sustained Below 140/90 mmHg.

    PubMed

    Mariampillai, Julian E; Eskås, Per Anders; Heimark, Sondre; Kjeldsen, Sverre E; Narkiewicz, Krzysztof; Mancia, Giuseppe

    Although high blood pressure (BP) is the leading risk factor for cardiovascular (CV) disease, the optimal BP treatment target for reducing CV risk is unclear in the aftermath of the SPRINT study. The aim of this review is to assess large randomized controlled trials on BP targets, as well as selected observational analyses from other large randomized BP trials, in order to evaluate the benefit of intense vs. standard BP control. None of the studies, except SPRINT, favored intense BP treatment. Some of the studies suggested favorable effects of lowering the treatment target in patients with diabetes or at high risk of stroke. In SPRINT, a new BP measurement method was introduced, and the results must be interpreted in light of this. The results of the observational analyses indicated the best preventive effect when achieving early and sustained BP control rather than low targets. In conclusion, the treatment target of <140/90 mmHg recommended by today's guidelines seems sufficient for most patients. Early and sustained BP control should be the main focus. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Robust portfolio selection based on asymmetric measures of variability of stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper introduces a new uncertainty set for robust optimization: the interval random uncertainty set. Its form makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
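
    A minimal sketch of the asymmetric variability measures underlying such sets (NumPy; the skewed synthetic returns are illustrative): downside and upside semideviations computed separately:

        # Sketch: asymmetric variability of returns (downside vs upside).
        import numpy as np

        def semi_deviations(returns):
            """Root-mean-square deviation below and above the mean, separately."""
            mu = returns.mean()
            downside = np.sqrt(np.mean(np.minimum(returns - mu, 0.0) ** 2))
            upside = np.sqrt(np.mean(np.maximum(returns - mu, 0.0) ** 2))
            return downside, upside

        rng = np.random.default_rng(3)
        r = rng.gamma(shape=2.0, scale=0.01, size=2500) - 0.02   # right-skewed returns
        d, u = semi_deviations(r)
        print(d, u)   # u > d for right-skewed data: the asymmetry the set captures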

  20. Carotid artery stenting vs. carotid endarterectomy in the management of carotid artery stenosis: Lessons learned from randomized controlled trials

    PubMed Central

    Salem, Mohamed M.; Alturki, Abdulrahman Y.; Fusco, Matthew R.; Thomas, Ajith J.; Carter, Bob S.; Chen, Clark C.; Kasper, Ekkehard M.

    2018-01-01

    Background: Carotid artery stenosis, both symptomatic and asymptomatic, has been well studied with several multicenter randomized trials. The superiority of carotid endarterectomy (CEA) to medical therapy alone in both symptomatic and asymptomatic carotid artery stenosis was well established in trials in the 1990s. The consequent era of endovascular carotid artery stenting (CAS) has offered another option for treating carotid artery stenosis. A series of randomized trials have now been conducted to compare CEA and CAS in the treatment of carotid artery disease. The large number of similar trials has created some confusion due to inconsistent results. Here, the authors review the trials that compare CEA and CAS in the management of carotid artery stenosis. Methods: The PubMed database was searched systematically for randomized controlled trials published in English that compared CEA and CAS. Only human studies on adult patients were assessed. The references of identified articles were reviewed for additional manuscripts to be included if inclusion criteria were met. The following terms were used during the search: carotid stenosis, endarterectomy, stenting. Retrospective or single-center studies were excluded from the review. Results: Thirteen reports of seven large-scale prospective multicenter studies, comparing both interventions for symptomatic or asymptomatic extracranial carotid artery stenosis, were identified. Conclusions: While the superiority of intervention to medical management for symptomatic patients has been well established in the literature, careful selection of asymptomatic patients for intervention should be undertaken and only be pursued after institution of appropriate medical therapy, until further reports on trials comparing medical therapy to intervention in this patient group are available. PMID:29740506

  1. Adaptive consensus of scale-free multi-agent system by randomly selecting links

    NASA Astrophysics Data System (ADS)

    Mou, Jinping; Ge, Huafeng

    2016-06-01

    This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides to select links among its neighbours, according to the received data, with a certain probability. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using an iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero; meanwhile, several criteria for consensus are obtained. A numerical example demonstrates the reliability of the proposed methods.
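
    A toy simulation in the spirit of the protocol (Python/NetworkX; the probability-0.5 link selection and the step size are illustrative choices, not the paper's protocol):

        # Sketch: consensus on a scale-free graph where each agent averages
        # over a randomly selected subset of its neighbours at every step.
        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(4)
        G = nx.barabasi_albert_graph(200, 2, seed=4)     # power-law degree graph
        x = rng.uniform(0.0, 10.0, G.number_of_nodes())  # initial states

        for step in range(200):
            x_new = x.copy()
            for i in G.nodes:
                nbrs = list(G.neighbors(i))
                picked = [j for j in nbrs if rng.random() < 0.5]  # random link selection
                if picked:
                    x_new[i] += 0.3 * (np.mean(x[picked]) - x[i])
            x = x_new

        print(x.max() - x.min())   # the spread of errors shrinks toward zero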

  2. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    PubMed

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
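
    To make the split criterion concrete, here is a simplified sketch (NumPy/SciPy) of a maximally selected rank statistic, using plain Wilcoxon-type rank scores in place of the log-rank scores used for survival outcomes:

        # Sketch: maximally selected rank statistic for split point selection.
        import numpy as np
        from scipy.stats import rankdata

        def max_selected_rank_stat(x, scores, min_frac=0.1):
            """Return (best cutpoint, max |standardized statistic|)."""
            n = len(x)
            order = np.argsort(x)
            r = rankdata(scores)[order]              # outcome ranks, sorted by x
            csum = np.cumsum(r)                      # rank sum of each left child
            k = np.arange(1, n + 1)
            mean_k = k * (n + 1) / 2.0               # mean of a k-rank sum
            var_k = k * (n - k) * (n + 1) / 12.0     # variance of a k-rank sum
            lo, hi = int(min_frac * n), int((1 - min_frac) * n)
            z = np.abs(csum[lo:hi] - mean_k[lo:hi]) / np.sqrt(var_k[lo:hi])
            best = np.argmax(z)
            return x[order][lo + best], z[best]

        rng = np.random.default_rng(7)
        x = rng.uniform(size=200)
        scores = (x > 0.6) * 1.0 + rng.normal(scale=0.5, size=200)
        print(max_selected_rank_stat(x, scores))     # cutpoint recovered near 0.6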

  3. Effect of Expanding Medicaid for Parents on Children’s Health Insurance Coverage

    PubMed Central

    DeVoe, Jennifer E.; Marino, Miguel; Angier, Heather; O’Malley, Jean P.; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J.; Bailey, Steffani R.; Gallia, Charles; Gold, Rachel

    2016-01-01

    IMPORTANCE In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon’s randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. OBJECTIVE To estimate the effect on a child’s health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. DESIGN, SETTING, AND PARTICIPANTS Oregon Experiment randomized natural experiment assessing the results of Oregon’s 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child’s Medicaid or Children’s Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children’s coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14 409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. EXPOSURES For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. MAIN OUTCOMES AND MEASURES Children’s Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent’s selection date. RESULTS In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) among children whose parents were not selected to apply. Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent’s selection compared with children whose parents were not selected (adjusted odds ratio [AOR] = 1.18; 95% CI, 1.10–1.27). The effect remained significant during months 7 to 12 (AOR = 1.11; 95% CI, 1.03–1.19); months 13 to 18 showed a positive but not significant effect (AOR = 1.07; 95% CI, 0.99–1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR = 2.37; 95% CI, 2.14–2.64). CONCLUSIONS AND RELEVANCE Children’s odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents’ access to Medicaid coverage and their children’s coverage. PMID:25561041
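
    The intent-to-treat analysis maps naturally onto a GEE with clustering on family. A minimal sketch (Python/statsmodels, on synthetic data; the column names are illustrative, not the study's variables):

        # Sketch: intent-to-treat GEE for a binary coverage outcome.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(5)
        n = 2000
        df = pd.DataFrame({
            "household": rng.integers(0, 800, n),   # cluster: children share parents
            "selected": rng.integers(0, 2, n),      # parent selected to apply
        })
        df["covered"] = (rng.random(n) < 0.55 + 0.05 * df["selected"]).astype(int)

        model = smf.gee("covered ~ selected", groups="household", data=df,
                        family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable())
        result = model.fit()
        print(np.exp(result.params["selected"]))    # odds ratio for selection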

  4. Extrapolating Weak Selection in Evolutionary Games

    PubMed Central

    Wu, Bin; García, Julián; Hauert, Christoph; Traulsen, Arne

    2013-01-01

    In evolutionary games, reproductive success is determined by payoffs. Weak selection means that even large differences in game outcomes translate into small fitness differences. Many results have been derived using weak selection approximations, in which perturbation analysis facilitates the derivation of analytical results. Here, we ask whether results derived under weak selection are also qualitatively valid for intermediate and strong selection. By “qualitatively valid” we mean that the ranking of strategies induced by an evolutionary process does not change when the intensity of selection increases. For two-strategy games, we show that the ranking obtained under weak selection cannot be carried over to higher selection intensity if the number of players exceeds two. For games with three (or more) strategies, previous examples for multiplayer games have shown that the ranking of strategies can change with the intensity of selection. In particular, rank changes imply that the most abundant strategy at one intensity of selection can become the least abundant for another. We show that this applies already to pairwise interactions for a broad class of evolutionary processes. Even when both weak and strong selection limits lead to consistent predictions, rank changes can occur for intermediate intensities of selection. To analyze how common such games are, we show numerically that for randomly drawn two-player games with three or more strategies, rank changes occur frequently and their likelihood increases rapidly with the number of strategies. In particular, rank changes become almost certain for games with many strategies, which jeopardizes the predictive power of results derived for weak selection. PMID:24339769

  5. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: An instrumental variables re-analysis of randomized clinical trials

    PubMed Central

    Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.

    2014-01-01

    Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504

  6. Estimating the efficacy of Alcoholics Anonymous without self-selection bias: an instrumental variables re-analysis of randomized clinical trials.

    PubMed

    Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H

    2014-11-01

    Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
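
    The instrumental variables logic, with randomization itself as the instrument, can be illustrated with a hand-rolled two-stage least squares on simulated data (NumPy; note that second-stage standard errors computed this naive way are not valid):

        # Sketch: randomization as an instrument for AA attendance.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 1500
        z = rng.integers(0, 2, n)                        # randomized to facilitation
        u = rng.normal(size=n)                           # unobserved self-selection
        attend = 5 + 4 * z + 2 * u + rng.normal(size=n)  # AA attendance
        abstin = 10 + 0.4 * attend + 5 * u + rng.normal(size=n)

        # Stage 1: predict attendance from the instrument.
        Z = np.column_stack([np.ones(n), z])
        attend_hat = Z @ np.linalg.lstsq(Z, attend, rcond=None)[0]

        # Stage 2: regress the outcome on predicted attendance.
        X = np.column_stack([np.ones(n), attend_hat])
        beta = np.linalg.lstsq(X, abstin, rcond=None)[0]
        print(beta[1])   # ≈ 0.4, free of the self-selection bias that inflates naive OLS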

  7. Redshift Survey Strategies

    NASA Astrophysics Data System (ADS)

    Jones, A. W.; Bland-Hawthorn, J.; Kaiser, N.

    1994-12-01

    In the first half of 1995, the Anglo-Australian Observatory is due to commission a wide-field (2.1°), 400-fiber, double spectrograph system (2dF) at the f/3.3 prime focus of the AAT 3.9m bi-national facility. The instrument should be able to measure ~4000 galaxy redshifts (assuming a magnitude limit of b_J ~ 20) in a single dark night and is therefore ideally suited to studies of large-scale structure. We have carried out simple 3D numerical simulations to judge the relative merits of sparse surveys and contiguous surveys. We generate a survey volume and fill it randomly with particles according to a selection function which mimics a magnitude-limited survey at b_J = 19.7. Each of the particles is perturbed by a Gaussian random field according to the dimensionless power spectrum k^3 P(k) / 2π^2 determined by Feldman, Kaiser & Peacock (1994) from the IRAS QDOT survey. We introduce some redshift-space distortion as described by Kaiser (1987), a `thermal' component measured from pairwise velocities (Davis & Peebles 1983), and `fingers of god' due to rich clusters at random density enhancements. Our particular concern is to understand how the window function W^2(k) of the survey geometry compromises the accuracy of statistical measures [e.g., P(k), ξ(r), ξ(r_σ, r_π)] commonly used in the study of large-scale structure. We also examine the reliability of various tools (e.g. genus) for describing the topological structure within a contiguous region of the survey.

  8. A case management intervention targeted to reduce healthcare consumption for frequent Emergency Department visitors: results from an adaptive randomized trial

    PubMed Central

    Anderson, Jacqueline; Dolk, Anders; Torgerson, Jarl; Nyberg, Svante; Skau, Tommy; Forsberg, Birger C.; Werr, Joachim; Öhlen, Gunnar

    2016-01-01

    Background A small group of frequent visitors to Emergency Departments accounts for a disproportionally large fraction of healthcare consumption, including unplanned hospitalizations and overall healthcare costs. In response, several case and disease management programs aimed at reducing healthcare consumption in this group have been tested; however, results vary widely. Objectives To investigate whether a telephone-based, nurse-led case management intervention can reduce healthcare consumption for frequent Emergency Department visitors in a large-scale setup. Methods A total of 12 181 frequent Emergency Department users in three counties in Sweden were randomized using Zelen’s design or a traditional randomized design to receive either a nurse-led case management intervention or no intervention, and were followed for healthcare consumption for up to 2 years. Results The traditional design showed an overall 12% (95% confidence interval 4–19%) reduction in the rate of hospitalization, mostly driven by effects in the last year. Similar results were achieved in the Zelen studies, with a significant reduction in hospitalization in the last year but mixed results in the early development of the project. Conclusion Our study provides evidence that a carefully designed telephone-based intervention with accurate and systematic patient selection and appropriate staff training in a centralized setup can lead to significant decreases in healthcare consumption and costs. Further, our results also show that the effects are sensitive to the delivery model chosen. PMID:25969342

  9. Genetic structured antedependence and random regression models applied to the longitudinal feed conversion ratio in growing Large White pigs.

    PubMed

    Huynh-Tran, V H; Gilbert, H; David, I

    2017-11-01

    The objective of the present study was to compare a random regression model, usually used in genetic analyses of longitudinal data, with the structured antedependence (SAD) model to study the longitudinal feed conversion ratio (FCR) in growing Large White pigs and to propose criteria for animal selection when used for genetic evaluation. The study was based on data from 11,790 weekly FCR measures collected on 1,186 Large White male growing pigs. Random regression (RR) using orthogonal polynomial Legendre and SAD models was used to estimate genetic parameters and predict FCR-based EBV for each of the 10 wk of the test. The results demonstrated that the best SAD model (1 order of antedependence of degree 2 and a polynomial of degree 2 for the innovation variance for the genetic and permanent environmental effects, i.e., 12 parameters) provided a better fit for the data than RR with a quadratic function for the genetic and permanent environmental effects (13 parameters), with Bayesian information criteria values of -10,060 and -9,838, respectively. Heritabilities with the SAD model were higher than those of RR over the first 7 wk of the test. Genetic correlations between weeks were higher than 0.68 for short intervals between weeks and decreased to 0.08 for the SAD model and -0.39 for RR for the longest intervals. These differences in genetic parameters showed that, contrary to the RR approach, the SAD model does not suffer from border effect problems and can handle genetic correlations that tend to 0. Summarized breeding values were proposed for each approach as linear combinations of the individual weekly EBV weighted by the coefficients of the first or second eigenvector computed from the genetic covariance matrix of the additive genetic effects. These summarized breeding values isolated EBV trajectories over time, capturing either the average general value or the slope of the trajectory. Finally, applying the SAD model over a reduced period of time suggested that similar selection choices would result from the use of the records from the first 8 wk of the test. To conclude, the SAD model performed well for the genetic evaluation of longitudinal phenotypes.
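
    The summarized breeding values are a simple linear-algebra construction. A sketch (NumPy; the matrix names are illustrative) of the eigenvector weighting described above:

        # Sketch: summarized breeding values as eigenvector-weighted
        # combinations of weekly EBVs.
        import numpy as np

        def summarized_ebv(ebv, G):
            """ebv: animals x weeks EBV matrix; G: weeks x weeks genetic covariance."""
            evals, evecs = np.linalg.eigh(G)
            order = np.argsort(evals)[::-1]
            v1, v2 = evecs[:, order[0]], evecs[:, order[1]]
            level = ebv @ v1   # first eigenvector: overall level of the trajectory
            slope = ebv @ v2   # second eigenvector: slope of the trajectory
            return level, slope

        # G would be estimated from the fitted SAD or RR model; ebv from the
        # genetic evaluation. Selection can then rank animals on either score.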

  10. Moving a randomized clinical trial into an observational cohort.

    PubMed

    Goodman, Phyllis J; Hartline, Jo Ann; Tangen, Catherine M; Crowley, John J; Minasian, Lori M; Klein, Eric A; Cook, Elise D; Darke, Amy K; Arnold, Kathryn B; Anderson, Karen; Yee, Monica; Meyskens, Frank L; Baker, Laurence H

    2013-02-01

    The Selenium and Vitamin E Cancer Prevention Trial (SELECT) was a randomized, double-blind, placebo-controlled prostate cancer prevention study funded by the National Cancer Institute (NCI) and conducted by the Southwest Oncology Group (SWOG). A total of 35,533 men were assigned randomly to one of the four treatment groups (vitamin E + placebo, selenium + placebo, vitamin E + selenium, and placebo + placebo). The independent Data and Safety Monitoring Committee (DSMC) recommended the discontinuation of study supplements because of the lack of efficacy for risk reduction and because futility analyses demonstrated no possibility of benefit of the supplements to the anticipated degree (25% reduction in prostate cancer incidence) with additional follow-up. Study leadership agreed that the randomized trial should be terminated but believed that the cohort should be maintained and followed as the additional follow-up would contribute important information to the understanding of the biologic consequences of the intervention. Since the participants no longer needed to be seen in person to assess acute toxicities or to be given study supplements, it was determined that the most efficient and cost-effective way to follow them was via a central coordinated effort. A number of changes were necessary at the local Study Sites and SELECT Statistical Center to transition to following participants via a Central Coordinating Center. We describe the transition process from a randomized clinical trial to the observational Centralized Follow-Up (CFU) study. The process of transitioning SELECT, implemented at more than 400 Study Sites across the United States, Canada, and Puerto Rico, entailed many critical decisions and actions including updates to online documents such as the SELECT Workbench and Study Manual, a protocol amendment, reorganization of the Statistical Center, creation of a Transition Committee, development of materials for SELECT Study Sites, development of procedures to close Study Sites, and revision of data collection procedures and the process by which to contact participants. At the time of the publication of the primary SELECT results in December 2008, there were 32,569 men alive and currently active in the trial. As of 31 December 2011, 17,761 participants had been registered to the CFU study. This number is less than had been anticipated due to unforeseen difficulties with local Study Site institutional review boards (IRBs). However, from this cohort, we estimate that an additional 580 prostate cancer cases and 215 Gleason 7 or higher grade cancers will be identified. Over 109,000 individual items have been mailed to participants. Active SELECT ancillary studies have continued. The substantial SELECT biorepository is available to researchers; requests to use the specimens are reviewed for feasibility and scientific merit. As of April 2012, 12 proposals had been approved. The accrual goal of the follow-up study was not met, limiting our power to address the study objectives satisfactorily. The CFU study is also dependent on a number of factors including continued funding, continued interest of investigators in the biorepository, and the continued contribution of the participants. Our experience may be less pertinent to investigators who wish to follow participants in a treatment trial or participants in prevention trials in other medical areas. 
Extended follow-up of participants in prevention research is important to study the long-term effects of the interventions, such as those used in SELECT. The approach taken by SELECT investigators was to continue to follow participants centrally via an annual questionnaire and with a web-based option. The participants enrolled in the CFU study represent a large, well-characterized, generally healthy cohort. The CFU has enabled us to collect additional prostate and other cancer endpoints and longer follow-up on the almost 18,000 participants enrolled. The utility of the extensive biorepository that was developed during the course of the SELECT is enhanced by longer follow-up.

  11. Moving a Randomized Clinical Trial into an Observational Cohort

    PubMed Central

    Goodman, Phyllis J.; Hartline, Jo Ann; Tangen, Catherine M.; Crowley, John J.; Minasian, Lori M.; Klein, Eric A.; Cook, Elise D.; Darke, Amy K.; Arnold, Kathryn B.; Anderson, Karen; Yee, Monica; Meyskens, Frank L.; Baker, Laurence H.

    2013-01-01

    Background The Selenium and Vitamin E Cancer Prevention Trial (SELECT) was a randomized, double-blind, placebo-controlled prostate cancer prevention study funded by the National Cancer Institute and conducted by SWOG (Southwest Oncology Group). A total of 35,533 men were assigned randomly to one of four treatment groups (vitamin E + placebo, selenium + placebo, vitamin E + selenium, placebo + placebo). The independent Data and Safety Monitoring Committee recommended the discontinuation of study supplements because of the lack of efficacy for risk reduction and because futility analyses demonstrated no possibility of benefit of the supplements to the anticipated degree (25% reduction in prostate cancer incidence) with additional follow-up. Study leadership agreed that the randomized trial should be terminated but believed that the cohort should be maintained and followed as the additional follow-up would contribute important information to the understanding of the biologic consequences of the intervention. Since the participants no longer needed to be seen in person to assess acute toxicities or to be given study supplements, it was determined that the most efficient and cost-effective way to follow them was via a central coordinated effort. Purpose A number of changes were necessary at the local Study Sites and SELECT Statistical Center to transition to following participants via a Central Coordinating Center. We describe the transition process from a randomized clinical trial to the observational Centralized Follow-up (CFU) study. Methods The process of transitioning SELECT, implemented at more than 400 Study Sites across the United States, Canada and Puerto Rico, entailed many critical decisions and actions including updates to online documents such as the SELECT Workbench and Study Manual, a protocol amendment, reorganization of the Statistical Center, creation of a Transition Committee, development of materials for SELECT Study Sites, development of procedures to close Study Sites, and revision of data collection procedures and the process by which to contact participants. Results At the time of the publication of the primary SELECT results in December 2008, there were 32,569 men alive and currently active in the trial. As of December 31, 2011, 17,761 participants had been registered to the CFU study. This number is less than had been anticipated due to unforeseen difficulties with local Study Site institutional review boards (IRBs). However, from this cohort we estimate that an additional 580 prostate cancer cases and 215 Gleason 7 or higher cancers will be identified. Over 109,000 individual items have been mailed to participants. Active SELECT ancillary studies have continued. The substantial SELECT biorepository is available to researchers; requests to use the specimens are reviewed for feasibility and scientific merit. As of April 2012, 12 proposals had been approved. Limitations The accrual goal of the follow-up study was not met, limiting our power to address the study objectives satisfactorily. The CFU study is also dependent on a number of factors including continued funding, continued interest of investigators in the biorepository and the continued contribution of the participants. Our experience may be less pertinent to investigators who wish to follow participants in a treatment trial or participants in prevention trials in other medical areas. 
Conclusions Extended follow-up of participants in prevention research is important to study the long-term effects of the interventions, such as those used in SELECT. The approach taken by SELECT investigators was to continue to follow participants centrally via an annual questionnaire and with a web-based option. The participants enrolled in the CFU study represent a large, well-characterized, generally healthy cohort. The CFU has enabled us to collect additional prostate and other cancer endpoints and longer follow-up on the almost 18,000 participants enrolled. The utility of the extensive biorepository that was developed during the course of SELECT is enhanced by longer follow-up. PMID:23064404

  12. Engineering of ribozyme-based aminoglycoside switches of gene expression by in vivo genetic selection in Saccharomyces cerevisiae.

    PubMed

    Klauser, Benedikt; Rehm, Charlotte; Summerer, Daniel; Hartig, Jörg S

    2015-01-01

    Synthetic RNA-based switches are a growing class of genetic controllers applied in synthetic biology to engineer cellular functions. In this chapter, we detail a protocol for the selection of posttranscriptional controllers of gene expression in yeast using the Schistosoma mansoni hammerhead ribozyme as a central catalytic unit. Incorporation of a small molecule-sensing aptamer domain into the ribozyme renders its activity ligand-dependent. Aptazymes display numerous advantages over conventional protein-based transcriptional controllers, namely, the little genomic space needed to encode them, their modular architecture allowing for easy reprogramming to new inputs, the physical linkage to the message to be controlled, and the ability to function without protein cofactors. Herein, we describe the method to select ribozyme-based switches of gene expression in Saccharomyces cerevisiae that we successfully implemented to engineer neomycin- and theophylline-responsive switches. We also highlight how to adapt the protocol to screen for switches responsive to other ligands. Reprogramming of the sensor unit and incorporation into any RNA of interest enables the fulfillment of a variety of regulatory functions. However, proper functioning of the aptazyme is largely dependent on an optimal connection between the aptamer and the catalytic core. We obtained functional switches from a pool of variants carrying randomized connection sequences by an in vivo selection in MaV203 yeast cells that allows screening of a large sequence space of up to 1×10^9 variants. The protocol explains how to construct aptazyme libraries, carry out the in vivo selection, and characterize novel ON- and OFF-switches. © 2015 Elsevier Inc. All rights reserved.

  13. Evolving artificial metalloenzymes via random mutagenesis

    NASA Astrophysics Data System (ADS)

    Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.

    2018-03-01

    Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.

  14. Rare royal families in honeybees, Apis mellifera

    NASA Astrophysics Data System (ADS)

    Moritz, Robin F. A.; Lattorff, H. Michael G.; Neumann, Peter; Kraus, F. Bernhard; Radloff, Sarah E.; Hepburn, H. Randall

    2005-10-01

    The queen is the dominant female in the honeybee colony, Apis mellifera, and controls reproduction. Queen larvae are selected by the workers and are fed a special diet (royal jelly), which determines caste. Because queens mate with many males a large number of subfamilies coexist in the colony. As a consequence, there is a considerable potential for conflict among the subfamilies over queen rearing. Here we show that honeybee queens are not reared at random but are preferentially reared from rare “royal” subfamilies, which have extremely low frequencies in the colony's worker force but a high frequency in the queens reared.

  15. SAS procedures for designing and analyzing sample surveys

    USGS Publications Warehouse

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

    Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
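
    To make the contrast concrete, the sketch below draws a simple random sample and a stratified sample from a hypothetical frame of habitat-stratified plots (Python for brevity; the paper itself covers SAS procedures, so the names and sizes here are illustrative only).

        import random

        random.seed(42)

        # Hypothetical sampling frame: 1,000 plots, each assigned to one of three habitat strata.
        frame = [{"id": i, "stratum": random.choice(["marsh", "forest", "field"])}
                 for i in range(1000)]

        # Simple random sampling: every unit has the same inclusion probability.
        srs = random.sample(frame, 50)

        # Stratified sampling: draw a fixed number of units independently within each stratum.
        strata = {}
        for unit in frame:
            strata.setdefault(unit["stratum"], []).append(unit)
        stratified = [u for units in strata.values() for u in random.sample(units, 15)]

        print(len(srs), len(stratified))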

  16. A survey of students' ethical attitudes using computer-related scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanchey, C.M.; Kingsbury, J.

    Many studies exist that examine the ethical beliefs and attitudes of university students attending medium or large institutions. There are also many studies that examine the ethical attitudes and beliefs of computer science and computer information systems majors. None, however, examines the ethical attitudes of university students (regardless of undergraduate major) at a small, Christian, liberal arts institution regarding computer-related situations. This paper presents data accumulated by an on-going study in which students are presented with seven scenarios, all of which involve some aspect of computing technology. These students were randomly selected from a small, Christian, liberal arts university.

  17. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
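
    As a rough illustration of the response-surface idea, the sketch below fits a full second-order polynomial model to a simulated two-factor experiment by ordinary least squares; the factors, coefficients, and noise level are invented for illustration and are not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical two-factor design in coded units (e.g., angle of attack, wing tilt).
        X = rng.uniform(-1, 1, size=(60, 2))
        y = (0.8 + 1.5 * X[:, 0] - 0.4 * X[:, 1]            # simulated aerodynamic coefficient
             + 0.6 * X[:, 0] * X[:, 1] - 0.9 * X[:, 0] ** 2
             + rng.normal(scale=0.05, size=60))

        # Full second-order response surface: intercept, linear, interaction, quadratic terms.
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print(np.round(coef, 2))  # recovers the generating coefficients up to noise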

  18. Preferential partner selection in an evolutionary study of Prisoner's Dilemma.

    PubMed

    Ashlock, D; Smucker, M D; Stanley, E A; Tesfatsion, L

    1996-01-01

    Partner selection is an important process in many social interactions, permitting individuals to decrease the risks associated with cooperation. In large populations, defectors may escape punishment by roving from partner to partner, but defectors in smaller populations risk social isolation. We investigate these possibilities for an evolutionary Prisoner's Dilemma in which agents use expected payoffs to choose and refuse partners. In comparison to random or round-robin partner matching, we find that the average payoffs attained with preferential partner selection tend to be more narrowly confined to a few isolated payoff regions. Most ecologies evolve to essentially full cooperative behavior, but when agents are intolerant of defections, or when the costs of refusal and social isolation are small, we also see the emergence of wallflower ecologies in which all agents are socially isolated. Between these two extremes, we see the emergence of ecologies whose agents tend to engage in a small number of defections followed by cooperation thereafter. The latter ecologies exhibit a plethora of interesting social interaction patterns.
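
    A minimal sketch of the choose-and-refuse mechanism follows; the population size, tolerance, payoff-smoothing rule, and initial optimism are illustrative assumptions, and the evolutionary updating of strategies studied in the paper is omitted.

        import random

        random.seed(1)

        N, TOLERANCE, RATE = 20, 1.0, 0.3  # hypothetical size, refusal threshold, smoothing rate
        PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
        strategy = {i: random.choice("CD") for i in range(N)}
        # Optimistic initial payoff estimates encourage trying every partner at least once.
        expected = {i: {j: 3.0 for j in range(N) if j != i} for i in range(N)}

        for _ in range(2000):
            i = random.randrange(N)
            j = max(expected[i], key=expected[i].get)   # choose the most promising partner
            if expected[i][j] < TOLERANCE:              # refuse to play: social isolation
                continue
            mi, mj = strategy[i], strategy[j]
            # Update both agents' expected payoffs by exponential smoothing.
            expected[i][j] += RATE * (PAYOFF[(mi, mj)] - expected[i][j])
            expected[j][i] += RATE * (PAYOFF[(mj, mi)] - expected[j][i])

        isolated = sum(1 for i in range(N) if max(expected[i].values()) < TOLERANCE)
        print(isolated, "agents would now refuse all partners")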

  19. An Examination of Fluoxetine for the Treatment of Selective Mutism Using a Nonconcurrent Multiple-Baseline Single-Case Design Across 5 Cases.

    PubMed

    Barterian, Justin A; Sanchez, Joel M; Magen, Jed; Siroky, Allison K; Mash, Brittany L; Carlson, John S

    2018-01-01

    This study examined the utility of fluoxetine in the treatment of 5 children, aged 5 to 14 years, diagnosed with selective mutism who also demonstrated symptoms of social anxiety. A nonconcurrent, randomized, multiple-baseline, single-case design with a single-blind placebo-controlled procedure was used. Parents and the study psychiatrist completed multiple methods of assessment including Direct Behavior Ratings and questionnaires. Treatment outcomes were evaluated by calculating effect sizes for each participant as an individual and for the participants as a group. Information regarding adverse effects with an emphasis on behavioral disinhibition and ratings of parental acceptance of the intervention was gathered. All 5 children experienced improvement in social anxiety, responsive speech, and spontaneous speech with medium to large effect sizes; however, children still met criteria for selective mutism at the end of the study. Adverse events were minimal, with only 2 children experiencing brief occurrences of minor behavioral disinhibition. Parents found the treatment highly acceptable.

  20. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
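
    The reproduction step's sampling procedure can be sketched in a few lines (numpy; the population size and α are illustrative, and the β-size-biasing is omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        N, alpha = 1000, 1.5  # illustrative population size and Pareto tail index

        # N i.i.d. Pareto(alpha) variables, normalized by their sum, give the random
        # offspring-assignment probabilities for one generation.
        x = rng.pareto(alpha, size=N) + 1.0  # numpy draws Lomax; +1 shifts to classical Pareto
        p = x / x.sum()

        # One reproduction step: each of N offspring picks a parent with probability p.
        parents = rng.choice(N, size=N, p=p)
        print("largest family fraction:", np.bincount(parents).max() / N)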

  1. Improved knowledge diffusion model based on the collaboration hypernetwork

    NASA Astrophysics Data System (ADS)

    Wang, Jiang-Pan; Guo, Qiang; Yang, Guang-Yong; Liu, Jian-Guo

    2015-06-01

    The process of absorbing knowledge is an essential element of innovation in firms and of adapting to changes in the competitive environment. In this paper, we present an improved knowledge diffusion hypernetwork (IKDH) model based on the idea that knowledge spreads from the target node to all its neighbors according to the hyperedge and the knowledge stock. We apply the average knowledge stock V(t), the variance σ²(t), and the variance coefficient c(t) to evaluate the performance of knowledge diffusion. By analyzing how diffusion performance depends on the diffusion mechanism, the way highly knowledgeable ("expert") nodes are selected, the hypernetwork size, and the hypernetwork structure, we show that the diffusion speed of the IKDH model is 3.64 times faster than that of the traditional knowledge diffusion (TKDH) model. Besides, knowledge diffuses three times faster when "expert" nodes are selected at random than when large-hyperdegree nodes are selected as "expert" nodes. Furthermore, either a tighter network structure or a smaller network size results in faster knowledge diffusion.
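
    Assuming the standard definitions (the abstract does not spell them out, so these are stated here as assumptions), with v_i(t) the knowledge stock of node i in a hypernetwork of N nodes:

        V(t) = \frac{1}{N}\sum_{i=1}^{N} v_i(t), \qquad
        \sigma^{2}(t) = \frac{1}{N}\sum_{i=1}^{N}\bigl(v_i(t) - V(t)\bigr)^{2}, \qquad
        c(t) = \frac{\sigma(t)}{V(t)},

    so that c(t), the coefficient of variation, measures how evenly knowledge is distributed relative to its mean level.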

  2. The breeding bird survey, 1966

    USGS Publications Warehouse

    Robbins, Chandler S.; Van Velzen, Willet T.

    1967-01-01

    A Breeding Bird Survey of a large section of North America was conducted during June 1966. Cooperators ran a total of 585 Survey routes in 26 eastern States and 4 Canadian Provinces. Future coverage of established routes will enable changes in the abundance of North American breeding birds to be measured. Routes are selected at random on the basis of one-degree blocks of latitude and longitude. Each 24½-mile route, with 3-minute stops spaced one-half mile apart, is driven by automobile. All birds heard or seen at the stops are recorded on special forms and the data are then transferred to machine punch cards. The average number of birds per route is tabulated by State, along with the total number of each species and the percent of routes and stops upon which they were recorded. Maps are presented showing the range and abundance of selected species. Also, a year-to-year comparison is made of populations of selected species on Maryland routes in 1965 and 1966.

  3. Resolving the Conflict Between Associative Overdominance and Background Selection

    PubMed Central

    Zhao, Lei; Charlesworth, Brian

    2016-01-01

    In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952
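
    In symbols, with N_e the effective population size and s the selection coefficient, the scaled parameter referred to above is

        S = 2 N_e s,

    so S of order 1 marks the regime in which genetic drift and selection are of comparable strength.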

  4. Reference Conditions for Streams in the Grand Prairie Natural Division of Illinois

    NASA Astrophysics Data System (ADS)

    Sangunett, B.; Dewalt, R.

    2005-05-01

    As part of the Critical Trends Assessment Program (CTAP) of the Illinois Department of Natural Resources (IDNR), 12 potential reference quality stream sites in the Grand Prairie Natural Division were evaluated in May 2004. This agriculturally dominated region, located in east central Illinois, is the most highly modified in the state. The quality of these sites was assessed using a modified Hilsenhoff Biotic Index, species richness of the Ephemeroptera, Plecoptera, and Trichoptera (EPT) insect orders, and a 12-parameter Habitat Quality Index (HQI). Illinois EPA high quality fish stations, Illinois Natural History Survey insect collection data, and best professional knowledge were used to choose which streams to evaluate. For analysis, reference quality streams were compared to 37 randomly selected meandering streams and 26 randomly selected channelized streams that were assessed by CTAP between 1997 and 2001. The results showed that the reference streams exceeded the randomly selected streams in the region in both taxa richness and habitat quality. Both random meandering sites and reference quality sites increased in taxa richness and HQI as stream width increased. Randomly selected channelized streams had about the same taxa richness and HQI regardless of width.

  5. Methods and analysis of realizing randomized grouping.

    PubMed

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
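
    For illustration, complete randomization of 20 hypothetical subjects into two equal groups can be sketched as follows (Python here rather than the SAS code the article discusses):

        import random

        random.seed(2024)

        subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subject IDs

        # Complete randomization: shuffle the roster and split it in half, so every
        # possible 10/10 allocation is equally likely.
        random.shuffle(subjects)
        treatment, control = subjects[:10], subjects[10:]
        print("treatment:", treatment)
        print("control:  ", control)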

  6. Hide and vanish: data sets where the most parsimonious tree is known but hard to find, and their implications for tree search methods.

    PubMed

    Goloboff, Pablo A

    2014-10-01

    Three different types of data sets, for which the uniquely most parsimonious tree can be known exactly but is hard to find with heuristic tree search methods, are studied. Tree searches are complicated more by the shape of the tree landscape (i.e. the distribution of homoplasy on different trees) than by the sheer abundance of homoplasy or character conflict. Data sets of Type 1 are those constructed by Radel et al. (2013). Data sets of Type 2 present a very rugged landscape, with narrow peaks and valleys, but relatively low amounts of homoplasy. For such a tree landscape, subjecting the trees to TBR and saving suboptimal trees produces much better results when the sequence of clipping for the tree branches is randomized instead of fixed. An unexpected finding for data sets of Types 1 and 2 is that starting a search from a random tree instead of a random addition sequence Wagner tree may increase the probability that the search finds the most parsimonious tree; a small artificial example where these probabilities can be calculated exactly is presented. Data sets of Type 3, the most difficult data sets studied here, comprise only congruent characters, and a single island with only one most parsimonious tree. Even if there is a single island, missing entries create a very flat landscape which is difficult to traverse with tree search algorithms because the number of equally parsimonious trees that need to be saved and swapped to effectively move around the plateaus is too large. Minor modifications of the parameters of tree drifting, ratchet, and sectorial searches allow travelling around these plateaus much more efficiently than saving and swapping large numbers of equally parsimonious trees with TBR. For these data sets, two new related criteria for selecting taxon addition sequences in Wagner trees (the "selected" and "informative" addition sequences) produce much better results than the standard random or closest addition sequences. These new methods for Wagner trees and for moving around plateaus can be useful when analyzing phylogenomic data sets formed by concatenation of genes with uneven taxon representation ("sparse" supermatrices), which are likely to present a tree landscape with extensive plateaus. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Preferential selection based on degree difference in the spatial prisoner's dilemma games

    NASA Astrophysics Data System (ADS)

    Huang, Changwei; Dai, Qionglin; Cheng, Hongyan; Li, Haihong

    2017-10-01

    Strategy evolution in spatial evolutionary games is generally implemented through imitation processes between individuals. In most previous studies, it is assumed that individuals pick one of their neighbors at random to learn from. However, considering the heterogeneity of individuals' influence in real society, preferential selection is more realistic. Here, we introduce a preferential selection mechanism based on degree difference into spatial prisoner's dilemma games on Erdős-Rényi networks and Barabási-Albert scale-free networks and investigate the effects of the preferential selection on cooperation. The results show that, when individuals prefer to choose neighbors who have a small degree difference with themselves to imitate, cooperation is hurt by the preferential selection. In contrast, when individuals prefer to choose large-degree-difference neighbors to learn from, there exists an optimal preference strength resulting in the maximal cooperation level no matter what the network structure is. In addition, we investigate the robustness of the results against variations of the noise, the average degree and the size of the network in the model, and find that the qualitative features of the results are unchanged.

  8. [The research protocol III. Study population].

    PubMed

    Arias-Gómez, Jesús; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    The study population is defined as a set of cases, determined, limited, and accessible, that will constitute the subjects for the selection of the sample, and it must fulfill several characteristics and distinct criteria. The objectives of this manuscript are focused on specifying each one of the elements required to select the participants of a research project during the elaboration of the protocol, including the concepts of study population, sample, selection criteria, and sampling methods. After delineating the study population, the researcher must specify the criteria that each participant has to meet. The criteria that specify these characteristics are termed selection or eligibility criteria. These criteria are inclusion, exclusion, and elimination criteria, and they delineate the eligible population. Sampling methods fall into two large groups: 1) probabilistic or random sampling and 2) non-probabilistic sampling. The difference lies in the use of statistical methods to select the subjects. In every study, it is necessary to establish at the outset the specific number of participants to be included to achieve the objectives of the study. This number is the sample size, and it can be calculated or estimated with mathematical formulas and statistical software.
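
    As one standard example of such a formula (given here for illustration; the article does not reproduce its formulas), the sample size needed to estimate a single proportion p with absolute precision d at confidence level 1 − α is

        n = \frac{z_{1-\alpha/2}^{2}\, p(1-p)}{d^{2}},

    so that, for instance, p = 0.5, d = 0.05, and z_{0.975} ≈ 1.96 give n ≈ 385.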

  9. Fixation Probability in a Two-Locus Model by the Ancestral Recombination–Selection Graph

    PubMed Central

    Lessard, Sabin; Kermany, Amir R.

    2012-01-01

    We use the ancestral influence graph (AIG) for a two-locus, two-allele selection model in the limit of a large population size to obtain an analytic approximation for the probability of ultimate fixation of a single mutant allele A. We assume that this new mutant is introduced at a given locus into a finite population in which a previous mutant allele B is already segregating with a wild type at another linked locus. We deduce that the fixation probability increases as the recombination rate increases if allele A is either in positive epistatic interaction with B and allele B is beneficial or in no epistatic interaction with B and then allele A itself is beneficial. This holds at least as long as the recombination fraction and the selection intensity are small enough and the population size is large enough. In particular this confirms the Hill–Robertson effect, which predicts that recombination renders more likely the ultimate fixation of beneficial mutants at different loci in a population in the presence of random genetic drift even in the absence of epistasis. More importantly, we show that this is true from weak negative epistasis to positive epistasis, at least under weak selection. In the case of deleterious mutants, the fixation probability decreases as the recombination rate increases. This supports Muller’s ratchet mechanism to explain the accumulation of deleterious mutants in a population lacking recombination. PMID:22095080

  10. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    PubMed

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  11. A single point acupuncture treatment at large intestine meridian: a randomized controlled trial in acute tonsillitis and pharyngitis.

    PubMed

    Fleckenstein, Johannes; Lill, Christian; Lüdtke, Rainer; Gleditsch, Jochen; Rasp, Gerd; Irnich, Dominik

    2009-09-01

    One out of 4 patients visiting a general practitioner reports a sore throat associated with pain on swallowing. This study was established to examine the immediate pain-alleviating effect of a single point acupuncture treatment applied to the large intestine meridian of patients with sore throat. Sixty patients with acute tonsillitis and pharyngitis were enrolled in this randomized placebo-controlled trial. They received either acupuncture or sham laser acupuncture, directed to the large intestine meridian section between acupuncture points LI 8 and LI 10. The main outcome measure was the change in pain intensity on swallowing a sip of water, evaluated by a visual analog scale 15 minutes after treatment. A credibility assessment regarding the respective treatment was performed. The pain intensity for the acupuncture group before and immediately after therapy was 5.6+/-2.8 and 3.0+/-3.0, and for the sham group 5.6+/-2.5 and 3.8+/-2.5, respectively. Despite a more pronounced improvement in the acupuncture group, there was no significant difference between groups (Delta=0.9, confidence interval: -0.2 to 2.0; P=0.12; analysis of covariance). Patients' satisfaction was high in both treatment groups. The study was prematurely terminated due to a subsequent lack of suitable patients. A single acupuncture treatment applied to a selected area of the large intestine meridian was no more effective in the alleviation of pain associated with clinical sore throat than sham laser acupuncture applied to the same area. Nevertheless, clinically relevant improvement was achieved in both groups. Pain alleviation might partly be due to the intense palpation of the large intestine meridian. The benefit of a comprehensive acupuncture treatment protocol in this condition should be subject to further trials.

  12. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe, and ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is one of the popular methods for projecting the impact of climate change on ecosystems. An SDM is based on the niche of a certain species, which means that presence point data are essential for characterizing that niche. Running SDMs for plants requires particular care because of how vegetation data are produced: over large areas, vegetation maps are normally derived with remote sensing techniques, so the exact locations of presence points carry high uncertainty when presence data are selected from polygon and raster datasets. Sampling methods for vegetation presence data should therefore be chosen carefully. In this study, we used three different sampling methods to select vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty from modeling, with BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods differed in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. The results show that the uncertainties arising from presence data sampling methods and SDMs can be quantified.

  13. A randomized controlled trial investigating the use of a predictive nomogram for the selection of the FSH starting dose in IVF/ICSI cycles.

    PubMed

    Allegra, Adolfo; Marino, Angelo; Volpes, Aldo; Coffaro, Francesco; Scaglione, Piero; Gullo, Salvatore; La Marca, Antonio

    2017-04-01

    The number of oocytes retrieved is a relevant intermediate outcome in women undergoing IVF/intracytoplasmic sperm injection (ICSI). This trial compared the efficiency of selecting the FSH starting dose according to a nomogram based on multiple biomarkers (age, day 3 FSH, anti-Müllerian hormone) versus an age-based strategy. The primary outcome measure was the proportion of women with an optimal number of retrieved oocytes, defined as 8-14. At their first IVF/ICSI cycle, 191 patients underwent a long gonadotrophin-releasing hormone agonist protocol and were randomized to receive a starting dose of recombinant (human) FSH based either on their age (150 IU if ≤35 years, 225 IU if >35 years) or on the nomogram. Optimal response was observed in 58/92 patients (63%) in the nomogram group and in 42/99 (42%) in the control group (difference +21%; 95% CI: 7% to 35%; P = 0.0037). No significant differences were found in the clinical pregnancy rate or the number of embryos cryopreserved per patient. The study showed that an FSH starting dose selected according to ovarian reserve is associated with an increase in the proportion of patients with an optimal response; large trials are recommended to investigate any possible effect on the live-birth rate. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  14. Infection control in intensive care units and prevention of ventilator-associated pneumonia.

    PubMed

    Bonten, M J; Weinstein, R A

    2000-12-01

    Ventilator-associated pneumonia (VAP) is considered the most frequent infection in the intensive care unit (ICU), although incidence rates depend on the diagnostic methods. Because VAP has been associated with increased mortality and greater costs for medical care, prevention remains an important goal for intensive care medicine. Selective digestive decontamination (SDD), the most frequently studied method of infection prevention, is still controversial despite more than 30 prospective randomized trials and 6 metaanalyses. SDD reduces the incidence of VAP diagnoses, but beneficial effects on duration of ventilation or ICU stay, antibiotic use, and patient survival have not been shown unequivocally. Although recent metaanalyses suggest a 20% to 40% decrease in ICU mortality for SDD used with systemic prophylaxis, this benefit should be confirmed in a large, prospective, randomized study, preferably with a cost-benefit analysis. Selection of pathogens resistant to the antibiotics used in SDD remains the most important drawback of SDD, rendering SDD contraindicated in wards with endemic resistance problems. Other methods of infection prevention that do not create a selective growth advantage for resistant microorganisms may be more useful. Among these are the use of endotracheal tubes with the possibility of continuous aspiration of subglottic secretions, oropharyngeal decontamination with antiseptics, or the semirecumbent treatment position of patients. Although these methods were successful in single studies, more data are needed. Notwithstanding the potential benefits of these interventions, such classic infection control measures as handwashing remain the cornerstone of infection prevention.

  15. Genetic improvement in mastitis resistance: comparison of selection criteria from cross-sectional and random regression sire models for somatic cell score.

    PubMed

    Odegård, J; Klemetsdal, G; Heringstad, B

    2005-04-01

    Several selection criteria for reducing incidence of mastitis were developed from a random regression sire model for test-day somatic cell score (SCS). For comparison, sire transmitting abilities were also predicted based on a cross-sectional model for lactation mean SCS. Only first-crop daughters were used in genetic evaluation of SCS, and the different selection criteria were compared based on their correlation with incidence of clinical mastitis in second-crop daughters (measured as mean daughter deviations). Selection criteria were predicted based on both complete and reduced first-crop daughter groups (261 or 65 daughters per sire, respectively). For complete daughter groups, predicted transmitting abilities at around 30 d in milk showed the best predictive ability for incidence of clinical mastitis, closely followed by average predicted transmitting abilities over the entire lactation. Both of these criteria were derived from the random regression model. These selection criteria improved accuracy of selection by approximately 2% relative to a cross-sectional model. However, for reduced daughter groups, the cross-sectional model yielded increased predictive ability compared with the selection criteria based on the random regression model. This result may be explained by the cross-sectional model being more robust, i.e., less sensitive to precision of (co)variance components estimates and effects of data structure.

  16. The Effects of Total Physical Response by Storytelling and the Traditional Teaching Styles of a Foreign Language in a Selected High School

    ERIC Educational Resources Information Center

    Kariuki, Patrick N. K.; Bush, Elizabeth Danielle

    2008-01-01

    The purpose of this study was to examine the effects of Total Physical Response by Storytelling and the traditional teaching method on a foreign language in a selected high school. The sample consisted of 30 students who were randomly selected and randomly assigned to experimental and control group. The experimental group was taught using Total…

  17. Engineering RNA phage MS2 virus-like particles for peptide display

    NASA Astrophysics Data System (ADS)

    Jordan, Sheldon Keith

    Phage display is a powerful and versatile technology that enables the selection of novel binding functions from large populations of randomly generated peptide sequences. Random sequences are genetically fused to a viral structural protein to produce complex peptide libraries. From a sufficiently complex library, phage bearing peptides with practically any desired binding activity can be physically isolated by affinity selection, and, since each particle carries in its genome the genetic information for its own replication, the selectants can be amplified by infection of bacteria. For certain applications however, existing phage display platforms have limitations. One such area is in the field of vaccine development, where the goal is to identify relevant epitopes by affinity-selection against an antibody target, and then to utilize them as immunogens to elicit a desired antibody response. Today, affinity selection is usually conducted using display on filamentous phages like M13. This technology provides an efficient means for epitope identification, but, because filamentous phages do not display peptides in the high-density, multivalent arrays the immune system prefers to recognize, they generally make poor immunogens and are typically useless as vaccines. This makes it necessary to confer immunogenicity by conjugating synthetic versions of the peptides to more immunogenic carriers. Unfortunately, when introduced into these new structural environments, the epitopes often fail to elicit relevant antibody responses. Thus, it would be advantageous to combine the epitope selection and immunogen functions into a single platform where the structural constraints present during affinity selection can be preserved during immunization. This dissertation describes efforts to develop a peptide display system based on the virus-like particles (VLPs) of bacteriophage MS2. Phage display technologies rely on (1) the identification of a site in a viral structural protein that is present on the surface of the virus particle and can accept foreign sequence insertions without disruption of protein folding and viral particle assembly, and (2) on the encapsidation of nucleic acid sequences encoding both the VLP and the peptide it displays. The experiments described here are aimed at satisfying the first of these two requirements by engineering efficient peptide display at two different sites in MS2 coat protein. First, we evaluated the suitability of the N-terminus of MS2 coat for peptide insertions. It was observed that random N-terminal 10-mer fusions generally disrupted protein folding and VLP assembly, but by bracketing the foreign sequences with certain specific dipeptides, these defects could be suppressed. Next, the suitability of a coat protein surface loop for foreign sequence insertion was tested. Specifically, random sequence peptides were inserted into the N-terminal-most AB-loop of a coat protein single-chain dimer. Again we found that efficient display required the presence of appropriate dipeptides bracketing the peptide insertion. Finally, it was shown that an N-terminal fusion that tended to interfere specifically with capsid assembly could be efficiently incorporated into mosaic particles when co-expressed with wild-type coat protein.

  18. Most Undirected Random Graphs Are Amplifiers of Selection for Birth-Death Dynamics, but Suppressors of Selection for Death-Birth Dynamics.

    PubMed

    Hindersin, Laura; Traulsen, Arne

    2015-11-01

    We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
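
    The Birth-death process described here is straightforward to simulate. The sketch below (Python with networkx; the graph size, edge probability, mutant fitness, and run count are illustrative) estimates the fixation probability on a random graph and compares it with the well-mixed Moran value, which an amplifier should exceed for an advantageous mutant.

        import random
        import networkx as nx

        random.seed(0)

        def fixation_probability(G, r, runs=2000):
            """Estimate the fixation probability of one mutant of fitness r under
            Birth-death updating: a reproducer is chosen proportionally to fitness,
            and its offspring replaces a uniformly random neighbor."""
            nodes = list(G.nodes)
            fixed = 0
            for _ in range(runs):
                mutants = {random.choice(nodes)}  # one mutant placed uniformly at random
                while 0 < len(mutants) < len(nodes):
                    weights = [r if v in mutants else 1.0 for v in nodes]
                    parent = random.choices(nodes, weights=weights)[0]
                    child = random.choice(list(G.neighbors(parent)))
                    if parent in mutants:
                        mutants.add(child)
                    else:
                        mutants.discard(child)
                fixed += len(mutants) == len(nodes)
            return fixed / runs

        G = nx.erdos_renyi_graph(12, 0.4, seed=1)
        while not nx.is_connected(G):  # the process needs a connected graph
            G = nx.erdos_renyi_graph(12, 0.4)

        n, r = G.number_of_nodes(), 1.1
        moran = (1 - 1 / r) / (1 - 1 / r ** n)  # well-mixed (complete-graph) baseline
        print(f"random graph: {fixation_probability(G, r):.3f} vs well-mixed: {moran:.3f}")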

  19. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    PubMed

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households. This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.

  20. The role of color and attention-to-color in mirror-symmetry perception.

    PubMed

    Gheorghiu, Elena; Kingdom, Frederick A A; Remkes, Aaron; Li, Hyung-Chul O; Rainville, Stéphane

    2016-07-11

    The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) 'segregated' - symmetric blobs were of one color, random blobs of the other color(s); (2) 'random-segregated' - as above but with the symmetric color randomly selected on each trial; (3) 'non-segregated' - symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) 'anti-symmetric' - symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.

  2. Ensemble Feature Learning of Genomic Data Using Support Vector Machine

    PubMed Central

    Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.

    2016-01-01

    The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, and mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models from randomly drawn bootstrap samples of the training set will produce different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate a feature is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that on average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over a random forest-based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters within the selected data. PMID:27304923
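
    A minimal sketch of the ESVM-RFE idea follows, assuming scikit-learn and synthetic data; the dataset, bootstrap count, and subset size are illustrative choices, not the authors' settings.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import RFE
        from sklearn.svm import SVC
        from sklearn.utils import resample

        X, y = make_classification(n_samples=100, n_features=50, n_informative=5,
                                   random_state=0)

        B = 25  # number of bootstrap SVM-RFE models
        rank_sum = np.zeros(X.shape[1])
        for b in range(B):
            # Stratified bootstrap keeps the class proportions roughly balanced.
            Xb, yb = resample(X, y, random_state=b, stratify=y)
            rfe = RFE(SVC(kernel="linear"), n_features_to_select=1, step=1)
            rfe.fit(Xb, yb)
            rank_sum += rfe.ranking_  # rank 1 = eliminated last = most important

        # Aggregate the B rankings: features with the smallest summed rank are selected.
        selected = np.argsort(rank_sum)[:5]
        print("selected feature indices:", selected)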

  3. Randomization Methods in Emergency Setting Trials: A Descriptive Review

    ERIC Educational Resources Information Center

    Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William

    2016-01-01

    Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…

  4. Middle Level Practices in European International and Department of Defense Schools.

    ERIC Educational Resources Information Center

    Waggoner, V. Christine; McEwin, C. Kenneth

    1993-01-01

    Discusses results of a 1989-90 survey of 70 randomly selected international schools and 70 randomly selected Department of Defense Schools in Europe. Programs and practices surveyed included enrollments, grade organization, curriculum and instructional plans, core subjects, grouping patterns, exploratory courses, advisory programs, and scheduling.…

  5. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  6. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    PubMed

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors.
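
    A toy version of the modeling idea (random feedforward weights followed by a simple Hebbian outer-product update with row normalization; the sizes, learning rate, and the correlation-based readout are illustrative assumptions, not the paper's fitted model):

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_out, eta = 40, 200, 0.05  # illustrative input size, "PFC" size, learning rate

        # Four random input patterns standing in for task conditions (2 stimuli x 2 contexts).
        patterns = rng.normal(size=(4, n_in))

        # Random feedforward connectivity, as in the baseline model.
        W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))

        for _ in range(100):
            for x in patterns:
                post = np.maximum(W @ x, 0.0)                 # rectified postsynaptic response
                W += eta * np.outer(post, x) / len(patterns)  # Hebbian: dW proportional to post * pre
            W /= np.linalg.norm(W, axis=1, keepdims=True)     # normalization keeps weights bounded

        # Crude proxy for response structure: correlations between condition response vectors.
        responses = np.maximum(W @ patterns.T, 0.0)
        print(np.round(np.corrcoef(responses.T), 2))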

  7. Fast Principal-Component Analysis Reveals Convergent Evolution of ADH1B in Europe and East Asia

    PubMed Central

    Galinsky, Kevin J.; Bhatia, Gaurav; Loh, Po-Ru; Georgiev, Stoyan; Mukherjee, Sayan; Patterson, Nick J.; Price, Alkes L.

    2016-01-01

    Searching for genetic variants with unusual differentiation between subpopulations is an established approach for identifying signals of natural selection. However, existing methods generally require discrete subpopulations. We introduce a method that infers selection using principal components (PCs) by identifying variants whose differentiation along top PCs is significantly greater than the null distribution of genetic drift. To enable the application of this method to large datasets, we developed the FastPCA software, which employs recent advances in random matrix theory to accurately approximate top PCs while reducing time and memory cost from quadratic to linear in the number of individuals, a computational improvement of many orders of magnitude. We apply FastPCA to a cohort of 54,734 European Americans, identifying 5 distinct subpopulations spanning the top 4 PCs. Using the PC-based test for natural selection, we replicate previously known selected loci and identify three new genome-wide significant signals of selection, including selection in Europeans at ADH1B. The coding variant rs1229984∗T has previously been associated with a decreased risk of alcoholism and shown to be under selection in East Asians; we show that it is a rare example of independent evolution on two continents. We also detect selection signals at IGFBP3 and IGH, which have also previously been associated with human disease. PMID:26924531
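
    The linear-cost idea can be sketched with a generic randomized subspace iteration (numpy; this illustrates the random-projection technique, not FastPCA's actual implementation):

        import numpy as np

        rng = np.random.default_rng(0)

        def randomized_top_pcs(X, k=4, n_iter=4, oversample=10):
            """Approximate the top-k PC scores of a (samples x markers) matrix via
            random projection plus power iteration, at cost linear in the number of
            samples, instead of a full quadratic-cost eigendecomposition."""
            Xc = X - X.mean(axis=0)  # center each marker column
            Q = rng.normal(size=(Xc.shape[1], k + oversample))
            for _ in range(n_iter):  # power iterations sharpen the leading subspace
                Q, _ = np.linalg.qr(Xc.T @ (Xc @ Q))
            B = Xc @ Q               # project the data into the small subspace
            U, s, _ = np.linalg.svd(B, full_matrices=False)
            return U[:, :k] * s[:k]  # PC scores for each individual

        X = rng.integers(0, 3, size=(500, 2000)).astype(float)  # toy genotype matrix (0/1/2)
        print(randomized_top_pcs(X).shape)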

  8. Automatic learning-based beam angle selection for thoracic IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ at risk sparing and were superior over plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrated the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload, while sustaining plan quality.
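
    The core learning step can be sketched as a random forest regression from per-angle anatomical features to a beam score; in the sketch below (Python with scikit-learn) the feature set, the synthetic score, and the candidate-angle sweep are hypothetical stand-ins for the clinical pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Hypothetical training set: one row per (plan, candidate beam angle), with
        # geometric features of the target and organs at risk relative to that angle.
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 360, n),  # candidate gantry angle (degrees)
            rng.uniform(0, 15, n),   # target depth along the beam (cm)
            rng.uniform(0, 1, n),    # fraction of lung traversed
            rng.uniform(0, 1, n),    # target-to-cord proximity index
        ])
        # Synthetic "beam score" standing in for scores derived from approved plans.
        y = (np.cos(np.radians(X[:, 0])) - 0.5 * X[:, 2] - 0.3 * X[:, 3]
             + rng.normal(0, 0.1, n))

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        # Scoring sweep for a new case: rank all candidate angles, keep the best few.
        angles = np.arange(0.0, 360.0, 5.0)
        candidates = np.column_stack([angles,
                                      np.full_like(angles, 8.0),
                                      np.full_like(angles, 0.4),
                                      np.full_like(angles, 0.2)])
        scores = model.predict(candidates)
        print("top angles:", angles[np.argsort(scores)[::-1][:5]])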

  9. Informational masking and musical training

    NASA Astrophysics Data System (ADS)

    Oxenham, Andrew J.; Fligor, Brian J.; Mason, Christine R.; Kidd, Gerald

    2003-09-01

    The relationship between musical training and informational masking was studied for 24 young adult listeners with normal hearing. The listeners were divided into two groups based on musical training. In one group, the listeners had little or no musical training; the other group consisted of highly trained, currently active musicians. The hypothesis was that musicians may be less susceptible to informational masking, which is thought to reflect central, rather than peripheral, limitations on the processing of sound. Masked thresholds were measured in two conditions, similar to those used by Kidd et al. [J. Acoust. Soc. Am. 95, 3475-3480 (1994)]. In both conditions the signal consisted of a series of repeated tone bursts at 1 kHz. The masker consisted of a series of multitone bursts, gated with the signal. In one condition the frequencies of the masker were selected randomly for each burst; in the other condition the masker frequencies were selected randomly for the first burst of each interval and then remained constant throughout the interval. The difference in thresholds between the two conditions was taken as a measure of informational masking. Frequency selectivity, using the notched-noise method, was also estimated in the two groups. The results showed no difference in frequency selectivity between the two groups, but showed a large and significant difference in the amount of informational masking between musically trained and untrained listeners. This informational masking task, which requires no knowledge specific to musical training (such as note or interval names) and is generally not susceptible to systematic short- or medium-term training effects, may provide a basis for further studies of analytic listening abilities in different populations.

  10. Spectral Band Selection for Urban Material Classification Using Hyperspectral Libraries

    NASA Astrophysics Data System (ADS)

    Le Bris, A.; Chehata, N.; Briottet, X.; Paparoditis, N.

    2016-06-01

    In urban areas, information concerning very high resolution land cover, and especially material maps, is necessary for several city modelling or monitoring applications; that is, knowledge of roofing materials and the different kinds of ground areas is required. Airborne remote sensing techniques appear to be convenient for providing such information at a large scale. However, results obtained using most traditional processing methods based on usual red-green-blue-near infrared multispectral images remain limited for such applications. A possible way to improve classification results is to enhance the imagery's spectral resolution using superspectral or hyperspectral sensors. This study aimed to design a superspectral sensor dedicated to urban material classification, focusing in particular on the selection of optimal spectral band subsets for such a sensor. First, reflectance spectral signatures of urban materials were collected from 7 spectral libraries. Then, spectral optimization was performed using this data set. The band selection workflow included two steps, first optimising the number of spectral bands using an incremental method and then examining several possible optimised band subsets using a stochastic algorithm. The same wrapper relevance criterion, relying on a confidence measure of a Random Forests classifier, was used at both steps. To cope with the limited number of available spectra for several classes, additional synthetic spectra were generated from the collection of reference spectra: intra-class variability was simulated by multiplying reference spectra by a random coefficient. Finally, the selected band subsets were evaluated by the classification quality achieved using an RBF SVM classifier. It was confirmed that a limited band subset is sufficient to classify common urban materials. The important contribution of bands from the Short Wave Infra-Red (SWIR) spectral domain (1000-2400 nm) to material classification was also shown.
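
    A minimal sketch of the incremental step of the band selection workflow, with random-forest out-of-bag accuracy standing in for the paper's RF confidence criterion; spectra and labels are synthetic.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      n_spectra, n_bands = 300, 50
      X = rng.normal(size=(n_spectra, n_bands))
      y = (X[:, 10] + X[:, 35] > 0).astype(int)   # two truly informative bands

      def criterion(bands):
          # Out-of-bag accuracy as the wrapper relevance criterion.
          rf = RandomForestClassifier(n_estimators=100, oob_score=True,
                                      random_state=0).fit(X[:, bands], y)
          return rf.oob_score_

      selected, remaining = [], list(range(n_bands))
      for _ in range(5):                          # grow the subset one band at a time
          best = max(remaining, key=lambda b: criterion(selected + [b]))
          selected.append(best)
          remaining.remove(best)
      print("selected bands:", selected)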

  11. High Dimensional Classification Using Features Annealed Independence Rules.

    PubMed

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies, such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing, due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is critically important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
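
    A minimal sketch of the FAIR recipe on synthetic data: rank features by the two-sample t-statistic, keep the top m, and classify with a simplified nearest-centroid version of the independence rule. Choosing m from the paper's error bound is not reproduced; m is fixed by hand here.

      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(3)
      n, p = 100, 2000
      y = rng.integers(0, 2, size=n)
      X = rng.normal(size=(n, p))
      X[y == 1, :20] += 1.0                     # 20 truly informative features

      t, _ = ttest_ind(X[y == 1], X[y == 0], axis=0)
      m = 30                                    # hand-picked; FAIR derives m from a bound
      top = np.argsort(np.abs(t))[::-1][:m]

      # Nearest-centroid rule on the selected features (correlations ignored).
      mu1, mu0 = X[y == 1][:, top].mean(0), X[y == 0][:, top].mean(0)
      pred = ((X[:, top] - mu0) ** 2).sum(1) > ((X[:, top] - mu1) ** 2).sum(1)
      print("training accuracy:", (pred == (y == 1)).mean())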

  12. Assessment of Useful Plants in the Catchment Area of the Proposed Ntabelanga Dam in the Eastern Cape Province, South Africa

    PubMed Central

    2017-01-01

    Background: Developmental projects, particularly the construction of dams, result in permanent changes to terrestrial ecosystems through inundation. Objective: The present study documented useful plant species in the Ntabelanga dam catchment area that will be impacted by the construction of the proposed dam. Methods: A total of 55 randomly selected quadrats were used to assess plant species diversity and composition. Participatory rural appraisal (PRA) methods were used to identify useful plant species growing in the catchment area through interviews with 108 randomly selected participants. Results: A total of 197 plant species were recorded, with 95 species (48.2%) utilized for various purposes. Use categories included ethnoveterinary and herbal medicines (46 species), food plants (37 species), construction timber and thatching (14 species), firewood (five species), browse, live fence, and ornamental (four species each), and brooms and crafts (two species). Conclusion: This study showed that plant species play an important role in the daily life and culture of local people. The construction of the Ntabelanga dam is therefore associated with several positive and negative impacts on plant resources, which are not fully integrated into current decision-making, largely because of a lack of multistakeholder dialogue on the socioeconomic issues of such an important project. PMID:28828397

  13. Segmentation and determination of joint space width in foot radiographs

    NASA Astrophysics Data System (ADS)

    Schenk, O.; de Muinck Keizer, D. M.; Bernelot Moens, H. J.; Slump, C. H.

    2016-03-01

    Joint damage in rheumatoid arthritis is frequently assessed using radiographs of hands and feet. Evaluation includes measurements of the joint space width (JSW) and detection of erosions. Current visual scoring methods are time-consuming and subject to inter- and intra-observer variability. Automated measurement methods avoid these limitations and have been fairly successful in hand radiographs. This contribution addresses foot radiographs. Starting from an earlier proposed automated segmentation method, we have developed a novel model-based image analysis algorithm for JSW measurements. This method uses active appearance and active shape models to identify individual bones. The model comprises ten submodels, each representing a specific bone of the foot (metatarsals 1-5, proximal phalanges 1-5). We have performed segmentation experiments using 24 foot radiographs, randomly selected from a large database from the rheumatology department of a local hospital: 10 for training and 14 for testing. Segmentation was considered successful if the joint locations were correctly determined. Segmentation was successful in only 14%. To improve results, a step-by-step analysis will be performed. We performed JSW measurements on 14 randomly selected radiographs. JSW was successfully measured in 75%, with a mean and standard deviation of 2.30 ± 0.36 mm. This is a first step towards automated determination of progression of RA and therapy response in feet using radiographs.

  14. DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest.

    PubMed

    Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang

    2018-01-05

    DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html.
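
    A hedged sketch of the two-stage pipeline on synthetic stand-ins for the sequence descriptors: rank features by random forest importance, then train an RBF SVM on the top-ranked subset. For unbiased accuracy estimates the selection should be nested within each cross-validation fold, which this toy omits.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      X = rng.normal(size=(400, 300))             # 400 sequences x 300 descriptors
      y = (X[:, :5].sum(axis=1) > 0).astype(int)  # a few informative descriptors

      rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
      top = np.argsort(rf.feature_importances_)[::-1][:40]  # keep the 40 best

      svm = SVC(kernel="rbf", C=1.0, gamma="scale")
      acc = cross_val_score(svm, X[:, top], y, cv=5).mean()
      print(f"5-fold CV accuracy on selected features: {acc:.3f}")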

  15. DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest

    PubMed Central

    Manavalan, Balachandran; Shin, Tae Hwan; Lee, Gwang

    2018-01-01

    DNase I hypersensitive sites (DHSs) are genomic regions that provide important information regarding the presence of transcriptional regulatory elements and the state of chromatin. Therefore, identifying DHSs in uncharacterized DNA sequences is crucial for understanding their biological functions and mechanisms. Although many experimental methods have been proposed to identify DHSs, they have proven to be expensive for genome-wide application. Therefore, it is necessary to develop computational methods for DHS prediction. In this study, we proposed a support vector machine (SVM)-based method for predicting DHSs, called DHSpred (DNase I Hypersensitive Site predictor in human DNA sequences), which was trained with 174 optimal features. The optimal combination of features was identified from a large set that included nucleotide composition and di- and trinucleotide physicochemical properties, using a random forest algorithm. DHSpred achieved a Matthews correlation coefficient and accuracy of 0.660 and 0.871, respectively, which were 3% higher than those of control SVM predictors trained with non-optimized features, indicating the efficiency of the feature selection method. Furthermore, the performance of DHSpred was superior to that of state-of-the-art predictors. An online prediction server has been developed to assist the scientific community, and is freely available at: http://www.thegleelab.org/DHSpred.html PMID:29416743

  16. Prevalence of cardiovascular risk factors amongst traders in an urban market in Lagos, Nigeria.

    PubMed

    Odugbemi, T O; Onajole, A T; Osibogun, A O

    2012-03-01

    A descriptive cross-sectional study was carried out to determine the prevalence of cardiovascular risk factors amongst traders in an urban market in Lagos State. Tejuosho market, one of the large popular markets, was selected by balloting (a simple random sampling technique) from a list of markets that met the inclusion criteria of being major markets dealing in general goods. Four hundred (400) traders were selected using systematic random sampling. Each trader was interviewed with a well-structured questionnaire and had blood pressure and anthropometric measurements (height, weight and body mass index) taken. Female traders made up 297 (74.3%) of the total population. The mean age was 45.48 ± 11.88 years for males and 42.29 ± 10.96 years for females. The majority, 239 (59.8%), fell within the age range of 35-55 years. The cardiovascular risk factors identified and their prevalence rates were hypertension (34.8%), physical inactivity (92%), previously diagnosed diabetes mellitus (0.8%), risky alcohol consumption (1%), cigarette smoking (0.3% in females and 17.5% in males), obesity (12.3%) and overweight (39.9%). The study recommended that any health-promoting, preventive or intervention programme for this population would have to be worked into their market activities if it is to make an impact.

  17. Adaptive Electronic Camouflage Using Texture Synthesis

    DTIC Science & Technology

    2012-04-01

    …algorithm begins by computing the GLCMs, GIN and GOUT, of the input image (e.g., an image of the local environment) and the output image (randomly generated), respectively. The algorithm randomly selects a pixel from the output image and cycles its gray level through all values. For each value, GOUT is updated… The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between GIN and GOUT. Without selecting a…
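
    A toy reconstruction of the loop sketched in this snippet, assuming the standard gray-level co-occurrence matrix (GLCM) from scikit-image; image sizes and iteration counts are kept small so the example runs quickly, and this is not the report's implementation.

      import numpy as np
      from skimage.feature import graycomatrix  # 'greycomatrix' in older releases

      LEVELS = 8
      rng = np.random.default_rng(5)
      target = rng.integers(0, LEVELS, size=(32, 32)).astype(np.uint8)

      def glcm(img):
          return graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                              levels=LEVELS, symmetric=True, normed=True)

      g_in = glcm(target)                       # GLCM of the input image
      out = rng.integers(0, LEVELS, size=(32, 32)).astype(np.uint8)  # random output

      for _ in range(2000):
          r, c = rng.integers(0, 32, size=2)    # pick a random output pixel
          errs = []
          for level in range(LEVELS):           # cycle it through all gray levels
              out[r, c] = level
              errs.append(np.sum((glcm(out) - g_in) ** 2))
          out[r, c] = int(np.argmin(errs))      # keep the level minimizing GLCM error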

  18. Phase behavior of binary and polydisperse suspensions of compressible microgels controlled by selective particle deswelling

    NASA Astrophysics Data System (ADS)

    Scotti, A.; Gasser, U.; Herman, E. S.; Han, Jun; Menzel, A.; Lyon, L. A.; Fernandez-Nieves, A.

    2017-09-01

    We investigate the phase behavior of suspensions of poly(N -isopropylacrylamide) (pNIPAM) microgels with either bimodal or polydisperse size distribution. We observe a shift of the fluid-crystal transition to higher concentrations depending on the polydispersity or the fraction of large particles in suspension. Crystallization is observed up to polydispersities as high as 18.5%, and up to a number fraction of large particles of 29% in bidisperse suspensions. The crystal structure is random hexagonal close-packed as in monodisperse pNIPAM microgel suspensions. We explain our experimental results by considering the effect of bound counterions. Above a critical particle concentration, these cause deswelling of the largest microgels, which are the softest, changing the size distribution of the suspension and enabling crystal formation in conditions where incompressible particles would not crystallize.

  19. Low-dose aspirin in polycythaemia vera: a pilot study. Gruppo Italiano Studio Policitemia (GISP).

    PubMed

    1997-05-01

    In this pilot study, aimed at exploring the feasibility of a large-scale trial of low-dose aspirin in polycythaemia vera (PV), 112 PV patients (42 females, 70 males, aged 17-80 years) were selected for not having a clear indication for, or contraindication to, aspirin treatment and randomized to receive oral aspirin (40 mg/d) or placebo. Follow-up duration was 16 ± 6 months. Measurements of thromboxane A2 production during whole blood clotting demonstrated complete inhibition of platelet cyclooxygenase activity in patients receiving aspirin. Aspirin administration was not associated with any bleeding complication. Within the limitations of the small sample size, this study indicates that a biochemically effective regimen of antiplatelet therapy is well tolerated in patients with polycythaemia vera and that a large-scale placebo-controlled trial is feasible.

  20. The impact of facecards on patients' knowledge, satisfaction, trust, and agreement with hospital physicians: a pilot study.

    PubMed

    Simons, Yael; Caprio, Timothy; Furiasse, Nicholas; Kriss, Michael; Williams, Mark V; O'Leary, Kevin J

    2014-03-01

    Simple interventions such as facecards can improve patients' knowledge of names and roles of hospital physicians, but the effect on other aspects of the patient-physician relationship is not clear. To pilot an intervention to improve familiarity with physicians and assess its potential to improve patients' satisfaction, trust, and agreement with physicians. Cluster randomized controlled trial assessing the impact of physician facecards. Physician facecards included pictures of physicians and descriptions of their roles. We performed structured interviews of randomly selected patients to assess outcomes. One of 2 similar hospitalist units and 1 of 2 teaching-service units in a large teaching hospital were randomly selected to implement the intervention. Satisfaction with physician communication and overall hospital care was assessed using the Hospital Consumer Assessment of Healthcare Providers and Systems. Trust and agreement were each assessed through instruments used in prior research. Overall, 138 patients completed interviews, with no differences in age, sex, or race between those receiving facecards and those not. More patients who received facecards correctly identified ≥1 hospital physician (89.1% vs 51.1%; P < 0.01) and their role (67.4% vs 16.3%; P < 0.01) than patients who had not received facecards. Patients had high baseline levels of satisfaction, trust, and agreement with hospital physicians, and we found no significant differences with the use of facecards. Physician facecards improved patients' knowledge of the names and roles of hospital physicians. Larger studies are needed to assess the impact on satisfaction, trust, and agreement with physicians. © 2013 Society of Hospital Medicine.

  1. Transcriptome characterization and SSR discovery in large-scale loach Paramisgurnus dabryanus (Cobitidae, Cypriniformes).

    PubMed

    Li, Caijuan; Ling, Qufei; Ge, Chen; Ye, Zhuqing; Han, Xiaofei

    2015-02-25

    The large-scale loach (Paramisgurnus dabryanus, Cypriniformes) is a bottom-dwelling freshwater species of fish found mainly in eastern Asia. The natural germplasm resources of this important aquaculture species have recently been threatened by overfishing and artificial propagation. The objective of this study is to obtain the first functional genomic resource and candidate molecular markers for future conservation and breeding research. Illumina paired-end sequencing generated over one hundred million reads that resulted in 71,887 assembled transcripts, with an average length of 1,465 bp. 42,093 (58.56%) protein-coding sequences were predicted, and 43,837 transcripts had significant matches to the NCBI nonredundant protein (Nr) database. 29,389 and 14,419 transcripts were assigned into gene ontology (GO) categories and Eukaryotic Orthologous Groups (KOG), respectively. 22,102 (31.14%) transcripts were mapped to 302 KEGG pathways. In addition, 15,106 candidate SSR markers were identified, with 11,037 pairs of PCR primers designed. 400 randomly selected SSR primer pairs were validated, of which 364 (91%) produced PCR products. A further test with 41 loci and 20 large-scale loach specimens collected from the four largest lakes in China showed that 36 (87.8%) loci were polymorphic. The transcriptomic profile and SSR repertoire obtained in this study will facilitate population genetic studies and selective breeding of large-scale loach in the future. Copyright © 2015. Published by Elsevier B.V.

  2. Physical layer one-time-pad data encryption through synchronized semiconductor laser networks

    NASA Astrophysics Data System (ADS)

    Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris

    2016-02-01

    Semiconductor lasers (SLs) have been proven to be key devices in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics establishes them as a low-cost solution for various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog timeseries to digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers that are coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that are the seed for true random bit sequences (TRBS) generated at several Gb/s. The ability of each node to access, through the fiber-optic network, random bit streams generated in real time and synchronized with the rest of the nodes makes it possible to implement a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error correction methods are used to reduce the errors in the TRBS and the final error rate at the data decoding level. An appropriate choice of the sampling methodology and of the physical properties of the chaotic seed signal through which the network locks into synchronization allows error-free performance.
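
    A minimal sketch of the one-time-pad operation itself — XOR-ing the payload with a shared pad — with os.urandom standing in for the synchronized true random bit sequence the laser network would supply to both nodes.

      import os

      def otp_xor(data: bytes, pad: bytes) -> bytes:
          # XOR each data byte with the corresponding pad byte.
          assert len(pad) >= len(data), "pad must be at least as long as the data"
          return bytes(d ^ p for d, p in zip(data, pad))

      message = b"Gb/s payload"
      pad = os.urandom(len(message))        # stand-in for the synchronized TRBS
      ciphertext = otp_xor(message, pad)    # transmitter mixes data with the pad
      recovered = otp_xor(ciphertext, pad)  # receiver applies the identical pad
      assert recovered == message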

  3. Mixed-Method Quasi-Experimental Study of Outcomes of a Large-Scale Multilevel Economic and Food Security Intervention on HIV Vulnerability in Rural Malawi.

    PubMed

    Weinhardt, Lance S; Galvao, Loren W; Yan, Alice F; Stevens, Patricia; Mwenyekonde, Thokozani Ng'ombe; Ngui, Emmanuel; Emer, Lindsay; Grande, Katarina M; Mkandawire-Valhmu, Lucy; Watkins, Susan C

    2017-03-01

    The objective of the Savings, Agriculture, Governance, and Empowerment for Health (SAGE4Health) study was to evaluate the impact of a large-scale multi-level economic and food security intervention on health outcomes and HIV vulnerability in rural Malawi. The study employed a quasi-experimental non-equivalent control group design to compare intervention participants (n = 598) with people participating in unrelated programs in distinct but similar geographical areas (control, n = 301). We conducted participant interviews at baseline, 18-, and 36-months on HIV vulnerability and related health outcomes, food security, and economic vulnerability. Randomly selected households (n = 1002) were interviewed in the intervention and control areas at baseline and 36 months. Compared to the control group, the intervention led to increased HIV testing (OR 1.90; 95 % CI 1.29-2.78) and HIV case finding (OR = 2.13; 95 % CI 1.07-4.22); decreased food insecurity (OR = 0.74; 95 % CI 0.63-0.87), increased nutritional diversity, and improved economic resilience to shocks. Most effects were sustained over a 3-year period. Further, no significant differences in change were found over the 3-year study period on surveys of randomly selected households in the intervention and control areas. Although there were general trends toward improvement in the study area, only intervention participants' outcomes were significantly better. Results indicate the intervention can improve economic and food security and HIV vulnerability through increased testing and case finding. Leveraging the resources of economic development NGOs to deliver locally-developed programs with scientific funding to conduct controlled evaluations has the potential to accelerate the scientific evidence base for the effects of economic development programs on health.

  4. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    PubMed Central

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463
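
    A toy sketch of the model class described above: random feedforward weights followed by a simple Hebbian update (weight change proportional to the product of pre- and postsynaptic activity). Dimensions, nonlinearity, and learning rate are arbitrary choices, not the paper's fitted model.

      import numpy as np

      rng = np.random.default_rng(6)
      n_in, n_out, eta = 100, 50, 0.01
      W = rng.normal(scale=1 / np.sqrt(n_in), size=(n_out, n_in))  # random connectivity

      for _ in range(500):                      # present random input patterns
          pre = rng.normal(size=n_in)
          post = np.maximum(W @ pre, 0.0)       # rectified responses
          W += eta * np.outer(post, pre)        # Hebbian: strengthen co-active pairs
          W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded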

  5. Role of rasagiline in treating Parkinson's disease: Effect on disease progression.

    PubMed

    Malaty, Irene A; Fernandez, Hubert H

    2009-08-01

    Rasagiline is a second-generation, selective, irreversible monoamine oxidase type B (MAO-B) inhibitor. It has demonstrated efficacy as monotherapy for early Parkinson's disease (PD) patients in one large randomized, placebo-controlled trial (TVP-1012 in Early Monotherapy for Parkinson's Disease Outpatients), and has shown the ability to reduce off time in more advanced PD patients with motor fluctuations in two large placebo-controlled trials (Parkinson's Rasagiline: Efficacy and Safety in the Treatment of "Off", and Lasting Effect in Adjunct Therapy With Rasagiline Given Once Daily). Abundant preclinical data suggest the potential for neuroprotection by this compound against a variety of neurotoxic insults in cell cultures and in animals. The lack of amphetamine metabolites provides an advantage over the first-generation MAO-B inhibitor selegiline. One large trial has investigated the potential for disease modification in PD patients (Attenuation of Disease progression with Azilect Given Once-daily), and preliminary results suggest a possible advantage of earlier initiation of the 1 mg/day dose. The clinical significance of the difference detected remains a consideration.

  6. Global risk of big earthquakes has not recently increased.

    PubMed

    Shearer, Peter M; Stark, Philip B

    2012-01-17

    The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences--if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past.
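
    A minimal sketch of one such test on simulated event times: under a homogeneous Poisson process, inter-event times are exponential, which a Kolmogorov-Smirnov test can check. The p-value is only approximate because the rate is estimated from the same data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      rate = 15.0                               # events per year (illustrative)
      inter_event = rng.exponential(1 / rate, size=1200)

      d, p = stats.kstest(inter_event, "expon", args=(0, inter_event.mean()))
      print(f"KS statistic = {d:.4f}, p-value = {p:.3f}")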

  7. Global risk of big earthquakes has not recently increased

    PubMed Central

    Shearer, Peter M.; Stark, Philip B.

    2012-01-01

    The recent elevated rate of large earthquakes has fueled concern that the underlying global rate of earthquake activity has increased, which would have important implications for assessments of seismic hazard and our understanding of how faults interact. We examine the timing of large (magnitude M≥7) earthquakes from 1900 to the present, after removing local clustering related to aftershocks. The global rate of M≥8 earthquakes has been at a record high roughly since 2004, but rates have been almost as high before, and the rate of smaller earthquakes is close to its historical average. Some features of the global catalog are improbable in retrospect, but so are some features of most random sequences—if the features are selected after looking at the data. For a variety of magnitude cutoffs and three statistical tests, the global catalog, with local clusters removed, is not distinguishable from a homogeneous Poisson process. Moreover, no plausible physical mechanism predicts real changes in the underlying global rate of large events. Together these facts suggest that the global risk of large earthquakes is no higher today than it has been in the past. PMID:22184228

  8. RARtool: A MATLAB Software Package for Designing Response-Adaptive Randomized Clinical Trials with Time-to-Event Outcomes.

    PubMed

    Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee

    2015-08-01

    Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, user-interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
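
    A hedged illustration of response-adaptive randomization using the classic randomized play-the-winner urn for binary outcomes; this is a generic textbook scheme for intuition, not one of RARtool's optimal time-to-event designs.

      import random

      random.seed(12)
      urn = ["A", "B"]                     # one ball per treatment arm to start
      p_success = {"A": 0.7, "B": 0.5}     # hypothetical response rates

      assigned = {"A": 0, "B": 0}
      for _ in range(200):                 # 200 sequentially enrolled patients
          arm = random.choice(urn)         # draw a ball -> treatment assignment
          assigned[arm] += 1
          success = random.random() < p_success[arm]
          other = "B" if arm == "A" else "A"
          # Success adds a ball for the same arm; failure rewards the other arm.
          urn.append(arm if success else other)

      print(assigned)                      # allocation drifts toward the better arm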

  9. The Role of Visual Eccentricity on Preference for Abstract Symmetry

    PubMed Central

    O’ Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial participants changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not of random patterns. In Experiment 2 participants were presented with either symmetry or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity. PMID:27124081

  10. The Role of Visual Eccentricity on Preference for Abstract Symmetry.

    PubMed

    Rampone, Giulia; O' Sullivan, Noreen; Bertamini, Marco

    2016-01-01

    This study tested preference for abstract patterns, comparing random patterns to a two-fold bilateral symmetry. Stimuli were presented at random locations in the periphery. Preference for bilateral symmetry has been extensively studied in central vision, but evaluation at different locations had not been systematically investigated. Patterns were presented for 200 ms within a large circular region. On each trial participants changed fixation and were instructed to select any location. Eccentricity values were calculated a posteriori as the distance between ocular coordinates at pattern onset and coordinates for the centre of the pattern. Experiment 1 consisted of two tasks. In Task 1, participants detected pattern regularity as fast as possible. In Task 2 they evaluated their liking for the pattern on a Likert scale. Results from Task 1 revealed that with our parameters eccentricity did not affect symmetry detection. However, in Task 2, eccentricity predicted more negative evaluation of symmetry, but not of random patterns. In Experiment 2 participants were presented with either symmetry or random patterns. Regularity was task-irrelevant in this task. Participants discriminated the proportion of black/white dots within the pattern and then evaluated their liking for the pattern. Even when only one type of regularity was presented and regularity was task-irrelevant, preference evaluation for symmetry decreased with increasing eccentricity, whereas eccentricity did not affect the evaluation of random patterns. We conclude that symmetry appreciation is higher for foveal presentation in a way not fully accounted for by sensitivity.

  11. The Effect of CAI on Reading Achievement.

    ERIC Educational Resources Information Center

    Hardman, Regina

    A study determined whether computer assisted instruction (CAI) had an effect on students' reading achievement. Subjects were 21 randomly selected fourth-grade students at D. S. Wentworth Elementary School on the south side of Chicago in a low-income neighborhood who received a year's exposure to a CAI program, and 21 randomly selected students at…

  12. 78 FR 57033 - United States Standards for Condition of Food Containers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-17

    … containers during production. Stationary lot sampling is the process of randomly selecting sample units from… * * * * * Stationary lot sampling. The process of randomly selecting sample units from a lot whose production has been… less than 1/16-inch; stringy seal (excessive plastic threads showing at edge of seal area)…

  13. Access to Higher Education by the Luck of the Draw

    ERIC Educational Resources Information Center

    Stone, Peter

    2013-01-01

    Random selection is a fair way to break ties between applicants of equal merit seeking admission to institutions of higher education (with "merit" defined here in terms of the intrinsic contribution higher education would make to the applicant's life). Opponents of random selection commonly argue that differences in strength between…

  14. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  15. 1977 Survey of the American Professoriate. Technical Report.

    ERIC Educational Resources Information Center

    Ladd, Everett Carll, Jr.; And Others

    The development and data validation of the 1977 Ladd-Lipset national survey of the American professoriate are described. The respondents were selected from a random sample of colleges and universities and from a random sample of individual faculty members from the universities. The 158 institutions in the 1977 survey were selected from 2,406…

  16. Site Selection in Experiments: A Follow-Up Evaluation of Site Recruitment in Two Scale-Up Studies

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica

    2015-01-01

    Randomized experiments are commonly used to evaluate if particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, typically the participants are not randomly selected from a well-defined population and therefore the results do not readily generalize. Three…

  17. Real-time fast physical random number generator with a photonic integrated circuit.

    PubMed

    Ugajin, Kazusa; Terashima, Yuta; Iwakawa, Kento; Uchida, Atsushi; Harayama, Takahisa; Yoshimura, Kazuyuki; Inubushi, Masanobu

    2017-03-20

    Random number generators are essential for applications in information security and numerical simulations. Most optical-chaos-based random number generators produce random bit sequences by offline post-processing with large optical components. We demonstrate a real-time hardware implementation of a fast physical random number generator with a photonic integrated circuit and a field programmable gate array (FPGA) electronic board. We generate 1-Tbit random bit sequences and evaluate their statistical randomness using NIST Special Publication 800-22 and TestU01. All of the BigCrush tests in TestU01 are passed using 410-Gbit random bit sequences. A maximum real-time generation rate of 21.1 Gb/s is achieved for random bit sequences in binary format stored in a computer, which can be directly used for applications involving secret keys in cryptography and random seeds in large-scale numerical simulations.
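
    A minimal sketch of the simplest of the cited statistical checks, the NIST SP 800-22 monobit (frequency) test; the bits here come from a pseudorandom stand-in rather than a photonic source.

      import math
      import numpy as np

      rng = np.random.default_rng(8)
      bits = rng.integers(0, 2, size=1_000_000)

      s = np.sum(2 * bits - 1)                   # map {0,1} -> {-1,+1} and sum
      s_obs = abs(s) / math.sqrt(bits.size)
      p_value = math.erfc(s_obs / math.sqrt(2))  # NIST SP 800-22, Section 2.1
      print(f"monobit p-value = {p_value:.3f}")  # pass if p >= 0.01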

  18. Active learning for solving the incomplete data problem in facial age classification by the furthest nearest-neighbor criterion.

    PubMed

    Wang, Jian-Gang; Sung, Eric; Yau, Wei-Yun

    2011-07-01

    Facial age classification is an approach to classify face images into one of several predefined age groups. One of the difficulties in applying learning techniques to the age classification problem is the large amount of labeled training data required. Acquiring such training data is very costly in terms of age progress, privacy, human time, and effort. Although unlabeled face images can be obtained easily, manually labeling them on a large scale and obtaining the ground truth would be expensive. The frugal selection of unlabeled data for labeling, to quickly reach high classification performance with minimal labeling effort, is a challenging problem. In this paper, we present an active learning approach based on an online incremental bilateral two-dimensional linear discriminant analysis (IB2DLDA) which initially learns from a small pool of labeled data and then iteratively selects the most informative samples from the unlabeled set to increasingly improve the classifier. Specifically, we propose a novel data selection criterion called the furthest nearest-neighbor (FNN) that generalizes the margin-based uncertainty to the multiclass case and is easy to compute, so that the proposed active learning algorithm can handle a large number of classes and large data sizes efficiently. Empirical experiments on the FG-NET and Morph databases, together with a large unlabeled data set, for age categorization problems show that the proposed approach can achieve results comparable to, or even better than, those of a conventionally trained active classifier that requires much more labeling effort. Our IB2DLDA-FNN algorithm can achieve similar results much faster than random selection and with fewer samples for age categorization. It can also achieve results comparable with active SVM but is much faster in training because kernel methods are not needed. The results on the face recognition database and the palmprint/palm vein database showed that our approach can handle problems with a large number of classes. Our contributions in this paper are twofold. First, we proposed IB2DLDA-FNN, the FNN being our novel idea, as a generic online or active learning paradigm. Second, we showed that it can be another viable tool for active learning of facial age range classification.
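
    A minimal sketch of the furthest nearest-neighbor idea in its simplest form — query the unlabeled sample whose nearest labeled sample is furthest away — on toy data. This is a simplified reading of the paper's multiclass criterion.

      import numpy as np

      rng = np.random.default_rng(9)
      labeled = rng.normal(size=(20, 16))       # small initial labeled pool
      unlabeled = rng.normal(size=(500, 16))

      # Squared distance from every unlabeled point to its nearest labeled point.
      d2 = ((unlabeled[:, None, :] - labeled[None, :, :]) ** 2).sum(-1)
      nearest = d2.min(axis=1)
      query = int(np.argmax(nearest))           # the furthest nearest-neighbor
      print("next sample to label:", query)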

  19. Adverse selection and price sensitivity when low-income people have subsidies to purchase health insurance in the private market.

    PubMed

    Swartz, K; Garnick, D W

    2000-01-01

    Policymakers interested in subsidizing low-income people's purchase of private insurance face two major questions: will such subsidies lead to adverse selection, and how large do the subsidies have to be to induce large numbers of eligible people to purchase the insurance? This study examines New Jersey's short-lived experience with a premium subsidy program, Health Access New Jersey (Access Program). The program was for people in families with incomes below 250% of the poverty level who were not eligible for health insurance provided by an employer, or Medicaid or Medicare, and who wished to purchase policies in the state's individual health insurance market, the Individual Health Coverage Program. Surveying a random sample of Access Program policyholders, we compared their demographic and socioeconomic characteristics, as well as their health status, to those of other New Jersey residents who had family incomes below 250% of the poverty level to determine whether there was any evidence of adverse selection among the people who enrolled in the Access Program. The people who enrolled were not in worse health than uninsured people with incomes below 250% of the poverty level, but they were quite price sensitive. Most enrollees had incomes within the low end of the income eligibility distribution, reflecting the structure of rapidly declining subsidies as income increased.

  20. Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.

    PubMed

    Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide

    2011-01-01

    Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population for those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on library amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.

  1. Day-roost tree selection by northern long-eared bats—What do non-roost tree comparisons and one year of data really tell us?

    USGS Publications Warehouse

    Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.

    2015-01-01

    Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.

  2. Day-roost tree selection by northern long-eared bats - What do non-roost tree comparisons and one year of data really tell us?

    USGS Publications Warehouse

    Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.

    2015-01-01

    Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.

  3. Foundational errors in the Neutral and Nearly-Neutral theories of evolution in relation to the Synthetic Theory: is a new evolutionary paradigm necessary?

    PubMed

    Valenzuela, Carlos Y

    2013-01-01

    The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is the genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that does not contribute either to evolution or polymorphisms. Purifying selection is insufficient to account for this evolutionary feature and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone in a site make life organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens the biotic organization. Drift cannot drive evolution. In a site, the mutation rates among bases and selection coefficients determine the resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because drift effect is dependent upon N. Also, chromosome size and shape as well as protein size are far from random.

  4. Computerized stratified random site-selection approaches for design of a ground-water-quality sampling network

    USGS Publications Warehouse

    Scott, J.C.

    1990-01-01

    Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
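
    A minimal sketch of the stratification logic, assuming each potential site already carries the category of the areal subset containing it; the report's digital cartographic machinery is not reproduced.

      import numpy as np

      rng = np.random.default_rng(10)
      n_sites = 1000
      categories = rng.choice(["urban", "cropland", "forest"], size=n_sites)
      wanted = {"urban": 10, "cropland": 15, "forest": 5}  # sites per category

      selected = {
          cat: rng.choice(np.flatnonzero(categories == cat), size=k, replace=False)
          for cat, k in wanted.items()
      }
      for cat, idx in selected.items():
          print(cat, np.sort(idx)[:5], "...")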

  5. Cooperation and charity in spatial public goods game under different strategy update rules

    NASA Astrophysics Data System (ADS)

    Li, Yixiao; Jin, Xiaogang; Su, Xianchuang; Kong, Fansheng; Peng, Chengbin

    2010-03-01

    Human cooperation can be influenced by other human behaviors, and recent years have witnessed a flourishing of studies on the coevolution of cooperation and punishment, yet the common behavior of charity is seldom considered in game-theoretical models. In this article, we investigate the coevolution of altruistic cooperation and egalitarian charity in the spatial public goods game, by considering charity as the behavior of reducing inter-individual payoff differences. In our model, in each generation of the evolution, individuals first play games and accumulate payoff benefits, and then each egalitarian makes a charity donation by payoff transfer in its neighborhood. To study the individual-level evolutionary dynamics, we adopt different strategy update rules and investigate their effects on charity and cooperation. These rules can be classified into two global rules: the random selection rule, in which individuals randomly update strategies, and the threshold selection rule, where only those with payoffs below a threshold update strategies. Simulation results show that random selection enhances the cooperation level, while threshold selection lowers the threshold of the multiplication factor needed to maintain cooperation. When charity is considered, it fails to promote cooperation under random selection, whereas it promotes cooperation under threshold selection. Interestingly, the evolution of charity strongly depends on the dispersion of payoff acquisitions of the population, which agrees with previous results. Our work may shed light on understanding human egalitarianism.
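
    A toy sketch of threshold-based updating in a one-dimensional public goods game (1 = cooperator); it simplifies the paper's two-dimensional spatial model, omits the charity transfer, and uses the population mean payoff as a stand-in for the update threshold.

      import numpy as np

      rng = np.random.default_rng(13)
      N, r, gens = 200, 3.0, 400            # players, multiplication factor, generations
      strat = rng.integers(0, 2, size=N)

      def payoffs(s):
          pay = np.zeros(N)
          for i in range(N):                # each player hosts a 3-member group
              group = [(i - 1) % N, i, (i + 1) % N]
              pot = r * s[group].sum()      # contributions multiplied by r
              for j in group:
                  pay[j] += pot / 3 - s[j]  # equal share minus own contribution
          return pay

      for _ in range(gens):
          pay = payoffs(strat)
          # Threshold selection: only players below the threshold update,
          # here by imitating a random neighbor's strategy.
          for i in np.flatnonzero(pay < pay.mean()):
              strat[i] = strat[(i + rng.choice([-1, 1])) % N]
      print("cooperator fraction:", strat.mean())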

  6. How to derive biological information from the value of the normalization constant in allometric equations.

    PubMed

    Kaitaniemi, Pekka

    2008-04-09

    Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those that are frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
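
    A minimal sketch contrasting the two approaches on synthetic data: fitting Y = b*X^a with both parameters free versus fixing a at its theoretical value and estimating b alone. The exponent and noise levels are illustrative choices.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(11)
      a_true, n = 0.75, 40                  # e.g., a metabolic-scaling exponent
      x = rng.uniform(1, 10, n) * (1 + rng.normal(scale=0.05, size=n))
      b_env = 2.0                           # b carries the environmental signal
      y = b_env * x**a_true * (1 + rng.normal(scale=0.10, size=n))

      # Traditional approach: estimate both a and b.
      (b_free, a_free), _ = curve_fit(lambda X, b, a: b * X**a, x, y, p0=(1.0, 1.0))

      # Alternative approach: fix a at theory and estimate only b.
      (b_fixed,), _ = curve_fit(lambda X, b: b * X**a_true, x, y, p0=(1.0,))

      print(f"free fit:  a = {a_free:.3f}, b = {b_free:.3f}")
      print(f"fixed a:   b = {b_fixed:.3f}  (true b = {b_env})")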

  7. Qigong Exercises for the Management of Type 2 Diabetes Mellitus

    PubMed Central

    Close, Jacqueline R.; Lilly, Harold Ryan; Guillaume, Nathalie; Sun, Guan-Cheng

    2017-01-01

    Background: The purpose of this article is to clarify and define medical qigong and to identify an appropriate study design and methodology for a large-scale study looking at the effects of qigong in patients with type 2 diabetes mellitus (T2DM), specifically subject enrollment criteria, selection of the control group and study duration. Methods: A comprehensive literature review of English databases was used to locate articles from 1980–May 2017 involving qigong and T2DM. Control groups, subject criteria and the results of major diabetic markers were reviewed and compared within each study. Definitions of qigong and its differentiation from physical exercise were also considered. Results: After a thorough review, it was found that qigong shows positive effects on T2DM; however, there were inconsistencies in control groups, research subjects and diabetic markers analyzed. It was also discovered that there is a large variation in styles and definitions of qigong. Conclusions: Qigong exercise has shown promising results in clinical experience and in randomized, controlled pilot studies for affecting aspects of T2DM including blood glucose, triglycerides, total cholesterol, weight, BMI and insulin resistance. Due to the inconsistencies in study design and methods and the lack of large-scale studies, further well-designed randomized control trials (RCT) are needed to evaluate the ‘vital energy’ or qi aspect of internal medical qigong in people who have been diagnosed with T2DM. PMID:28930273

  8. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    PubMed

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

    Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forests for privacy-preserving classification while also preventing overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, the reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code available at http://insilico.utulsa.edu/software/privateEC . Supplementary data are available at Bioinformatics online.
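
    The evaporative-cooling idea can be sketched in a few lines. The toy below is our reading of the approach, not the published implementation: a simple mean-difference relevance score stands in for Relief-F, and features are removed with Maxwell-Boltzmann probabilities controlled by a temperature that is lowered each sweep. The score, the schedule, and all parameter values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 100, 50
      X = rng.normal(size=(n, p))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

      def relevance(X, y):
          # stand-in relevance score; the published method uses Relief-F here
          return np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / (X.std(0) + 1e-9)

      alive = np.arange(p)
      T = 1.0                                    # "temperature" = privacy threshold
      while alive.size > 10:
          s = relevance(X[:, alive], y)
          w = np.exp(-s / T)                     # Maxwell-Boltzmann weights: low-score
          w /= w.sum()                           # features are most likely to evaporate
          drop = rng.choice(alive.size, size=2, replace=False, p=w)
          alive = np.delete(alive, drop)
          T *= 0.9                               # cool the system each sweep (assumed schedule)
      print("surviving features:", sorted(alive.tolist()))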

  9. Vacuum selection on axionic landscapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Gaoyuan; Battefeld, Thorsten, E-mail: gaoyuan.wang@stud.uni-goettingen.de, E-mail: tbattefe@astro.physik.uni-goettingen.de

    2016-04-01

    We compute the distribution of minima that are reached dynamically on multi-field axionic landscapes, both numerically and analytically. Such landscapes are well suited for inflationary model building due to the presence of shift symmetries and possible alignment effects (the KNP mechanism). The resulting distribution of dynamically reached minima differs considerably from the naive expectation based on counting all vacua. These differences are more pronounced in the presence of many fields due to dynamical selection effects: while low lying minima are preferred as fields roll down the potential, trajectories are also more likely to get trapped by one of the many nearby minima. We show that common analytic arguments based on random matrix theory in the large D-limit to estimate the distribution of minima are insufficient for quantitative arguments pertaining to the dynamically reached ones. This discrepancy is not restricted to axionic potentials. We provide an empirical expression for the expectation value of the height of such dynamically reached minima and argue that the cosmological constant problem is not alleviated in the absence of anthropic arguments. We further comment on the likelihood of inflation on axionic landscapes in the large D-limit.

  10. Estimating occupancy in large landscapes: evaluation of amphibian monitoring in the greater Yellowstone ecosystem

    USGS Publications Warehouse

    Gould, William R.; Patla, Debra A.; Daley, Rob; Corn, Paul Stephen; Hossack, Blake R.; Bennetts, Robert E.; Peterson, Charles R.

    2012-01-01

    Monitoring of natural resources is crucial to ecosystem conservation, and yet it can pose many challenges. Annual surveys for amphibian breeding occupancy were conducted in Yellowstone and Grand Teton National Parks over a 4-year period (2006–2009) at two scales: catchments (portions of watersheds) and individual wetland sites. Catchments were selected in a stratified random sample with habitat quality and ease of access serving as strata. All known wetland sites with suitable habitat were surveyed within selected catchments. Changes in breeding occurrence of tiger salamanders, boreal chorus frogs, and Columbia-spotted frogs were assessed using multi-season occupancy estimation. Numerous a priori models were considered within an information theoretic framework including those with catchment and site-level covariates. Habitat quality was the most important predictor of occupancy. Boreal chorus frogs demonstrated the greatest increase in breeding occupancy at the catchment level. Larger changes for all 3 species were detected at the finer site-level scale. Connectivity of sites explained occupancy rates more than other covariates, and may improve understanding of the dynamic processes occurring among wetlands within this ecosystem. Our results suggest monitoring occupancy at two spatial scales within large study areas is feasible and informative.

  11. Transient evoked otoacoustic emissions in rock musicians.

    PubMed

    Høydal, Erik Harry; Lein Størmer, Carl Christian; Laukli, Einar; Stenklev, Niels Christian

    2017-09-01

    Our focus in this study was the assessment of transient evoked otoacoustic emissions (TEOAEs) in a large group of rock musicians. A further objective was to analyse tinnitus among rock musicians as related to TEOAEs. The study was a cross-sectional survey of rock musicians selected at random. A control group was included at random for comparison. We recruited 111 musicians and a control group of 40 non-musicians. Testing was conducted by using clinical examination, pure tone audiometry, TEOAEs and a questionnaire. TEOAE SNR in the half-octave frequency band centred on 4 kHz was significantly lower bilaterally in musicians than controls. This effect was strongly predicted by age and pure-tone hearing threshold levels in the 3-6 kHz range. Bilateral hearing thresholds were significantly higher at 6 kHz in musicians. Twenty percent of the musicians had permanent tinnitus. There was no association between the TEOAE parameters and permanent tinnitus. Our results suggest an incipient hearing loss at 6 kHz in rock musicians. Loss of TEOAE SNR in the 4 kHz half-octave frequency band was observed, but it was related to higher mean 3-6 kHz hearing thresholds and age. A large proportion of rock musicians have permanent tinnitus.

  12. Capturing the Flatness of a peer-to-peer lending network through random and selected perturbations

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis D.; Singh, Pramesh; Uparna, Jayaram; Horvat, Emoke-Agnes; Szymanski, Boleslaw K.; Korniss, Gyorgy; Bakdash, Jonathan Z.; Uzzi, Brian

    Null models are established tools that have been used in network analysis to uncover various structural patterns. They quantify the deviance of an observed network measure from that given by the null model. We construct a null model for weighted, directed networks to identify biased links (carrying significantly different weights than expected according to the null model) and thus quantify the flatness of the system. Using this model, we study the flatness of Kiva, a large international crowdfinancing network of borrowers and lenders, aggregated to the country level. The dataset spans the years from 2006 to 2013. Our longitudinal analysis shows that the flatness of the system is decreasing over time, meaning the proportion of biased inter-country links is growing. We extend our analysis by testing the robustness of the flatness of the network under perturbations of link weights or of the nodes themselves. Examples of such perturbations are event shocks (e.g. erecting walls) or regulatory shocks (e.g. Brexit). We find that flatness is unaffected by random shocks, but changes after shocks target links with a large weight or bias. The methods we use to capture the flatness are based on analytics, simulations, and numerical computations using Shannon's maximum entropy. Supported by ARL NS-CTA.
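
    A standard way to build such a null model is to fix each country's total out-lending and in-borrowing and distribute weights with maximum entropy, so that the expected weight of link i->j is s_i^out * s_j^in / W. The sketch below is our generic illustration of this construction; the paper's exact null model and bias test may differ, and the z-score cutoff is an arbitrary choice.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.poisson(5, (6, 6)).astype(float)   # toy country-to-country lending matrix
      np.fill_diagonal(W, 0)

      s_out, s_in, total = W.sum(1), W.sum(0), W.sum()
      expected = np.outer(s_out, s_in) / total   # max-entropy null fixing in/out strengths

      # flag "biased" links whose observed weight deviates strongly from the null
      z = (W - expected) / np.sqrt(expected + 1e-9)
      offdiag = ~np.eye(6, dtype=bool)
      share_biased = (np.abs(z[offdiag]) > 2).mean()   # the cutoff 2 is arbitrary
      print("share of biased links (a rough flatness proxy):", round(float(share_biased), 3))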

  13. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

    In many community-based surveys, multi-level sampling is inherent in the design. In the design of these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at the village and block levels. Between-cluster variation was evident at each level of clustering. ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
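
    For reference, the one-way random-effects ICC underlying such estimates can be computed directly from the between- and within-cluster mean squares, and the design effect used for sample size adjustment is then roughly 1 + (m - 1) x ICC for cluster size m. A minimal sketch follows (our illustration; the simulated village data are invented, not the study's data):

      import numpy as np

      def icc_oneway(groups):
          """ANOVA estimator of the intra-class correlation.
          groups: list of 1-D arrays, one per cluster (e.g. villages)."""
          k = len(groups)
          n = np.array([len(g) for g in groups], dtype=float)
          grand = np.concatenate(groups).mean()
          msb = sum(ni * (g.mean() - grand) ** 2 for ni, g in zip(n, groups)) / (k - 1)
          msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n.sum() - k)
          n0 = (n.sum() - (n ** 2).sum() / n.sum()) / (k - 1)   # effective cluster size
          return (msb - msw) / (msb + (n0 - 1) * msw)

      rng = np.random.default_rng(3)
      villages = [rng.normal(loc=rng.normal(0, 0.3), scale=1.0, size=16)
                  for _ in range(20)]             # 20 villages, 16 households each
      icc = float(icc_oneway(villages))
      print("estimated ICC:", round(icc, 3),
            " design effect for m = 16:", round(1 + 15 * icc, 2))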

  14. Refinement of the magnetic resonance diffusion-perfusion mismatch concept for thrombolytic patient selection: insights from the desmoteplase in acute stroke trials.

    PubMed

    Warach, Steven; Al-Rawi, Yasir; Furlan, Anthony J; Fiebach, Jochen B; Wintermark, Max; Lindstén, Annika; Smyej, Jamal; Bharucha, David B; Pedraza, Salvador; Rowley, Howard A

    2012-09-01

    The DIAS-2 study was the only large, randomized, intravenous, thrombolytic trial that selected patients based on the presence of ischemic penumbra. However, DIAS-2 did not confirm the positive findings of the smaller DEDAS and DIAS trials, which also used penumbral selection. Therefore, a reevaluation of the penumbra selection strategy is warranted. In post hoc analyses we assessed the relationships of magnetic resonance imaging-measured lesion volumes with clinical measures in DIAS-2, and the relationships of the presence and size of the diffusion-perfusion mismatch with the clinical effect of desmoteplase in DIAS-2 and in pooled data from DIAS, DEDAS, and DIAS-2. In DIAS-2, lesion volumes correlated with National Institutes of Health Stroke Scale (NIHSS) at both baseline and final time points (P<0.0001), and lesion growth was inversely related to good clinical outcome (P=0.004). In the pooled analysis, desmoteplase was associated with 47% clinical response rate (n=143) vs 34% in placebo (n=73; P=0.08). For both the pooled sample and for DIAS-2, increasing the minimum baseline mismatch volume (MMV) for inclusion increased the desmoteplase effect size. The odds ratio for good clinical response between desmoteplase and placebo treatment was 2.83 (95% confidence interval, 1.16-6.94; P=0.023) for MMV >60 mL. Increasing the minimum NIHSS score for inclusion did not affect treatment effect size. Pooled across all desmoteplase trials, desmoteplase appears beneficial in patients with large MMV and ineffective in patients with small MMV. These results support a modified diffusion-perfusion mismatch hypothesis for patient selection in later time-window thrombolytic trials. Clinical Trial Registration- URL: http://www.clinicaltrials.gov. Unique Identifiers: NCT00638781, NCT00638248, NCT00111852.

  15. Fuel management optimization using genetic algorithms and code independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeChaine, M.D.; Feltus, M.A.

    1994-12-31

    Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation and chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
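
    The three operators map directly onto code. A minimal, generic GA sketch follows (our illustration only; the fitness function is a toy stand-in, since a real loading-pattern evaluation would call a core-physics code):

      import random

      def fitness(x):                    # toy stand-in for a loading-pattern evaluation
          return -sum((xi - 3) ** 2 for xi in x)

      def select(pop):                   # "survival of the fittest": tournament of two
          a, b = random.sample(pop, 2)
          return a if fitness(a) > fitness(b) else b

      def crossover(p1, p2):             # children inherit a mixture of parental genes
          cut = random.randrange(1, len(p1))
          return p1[:cut] + p2[cut:]

      def mutate(x, rate=0.1):           # small random changes to the solutions
          return [xi + random.gauss(0, 1) if random.random() < rate else xi
                  for xi in x]

      random.seed(0)
      pop = [[random.uniform(0, 6) for _ in range(8)] for _ in range(40)]
      for _ in range(100):               # build the next generation of better solutions
          pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]
      print("best fitness found:", round(fitness(max(pop, key=fitness)), 3))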

  16. An RNA motif that binds ATP

    NASA Technical Reports Server (NTRS)

    Sassanfar, M.; Szostak, J. W.

    1993-01-01

    RNAs that contain specific high-affinity binding sites for small molecule ligands immobilized on a solid support are present at a frequency of roughly one in 10^10-10^11 in pools of random sequence RNA molecules. Here we describe a new in vitro selection procedure designed to ensure the isolation of RNAs that bind the ligand of interest in solution as well as on a solid support. We have used this method to isolate a remarkably small RNA motif that binds ATP, a substrate in numerous biological reactions and the universal biological high-energy intermediate. The selected ATP-binding RNAs contain a consensus sequence, embedded in a common secondary structure. The binding properties of ATP analogues and modified RNAs show that the binding interaction is characterized by a large number of close contacts between the ATP and RNA, and by a change in the conformation of the RNA.

  17. Gossip-Based Dissemination

    NASA Astrophysics Data System (ADS)

    Friedman, Roy; Kermarrec, Anne-Marie; Miranda, Hugo; Rodrigues, Luís

    Gossip-based networking has emerged as a viable approach to disseminate information reliably and efficiently in large-scale systems. Initially introduced for database replication [222], the applicability of the approach extends much further now. For example, it has been applied for data aggregation [415], peer sampling [416] and publish/subscribe systems [845]. Gossip-based protocols rely on a periodic peer-wise exchange of information in wired systems. By changing the way each peer is selected for the gossip communication, and which data are exchanged and processed [451], gossip systems can be used to perform different distributed tasks, such as, among others: overlay maintenance, distributed computation, and information dissemination (a collection of papers on gossip can be found in [451]). In a wired setting, the peer sampling service, allowing for a random or specific peer selection, is often provided as an independent service, able to operate independently from other gossip-based services [416].
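
    The dissemination mechanism itself is compact: each informed peer periodically pushes the update to a few peers obtained from the peer sampling service. Below is a minimal round-based sketch with uniform random peer sampling (our illustration; real deployments gossip asynchronously and sample from partial views rather than the full membership):

      import random

      def push_gossip(n=1000, fanout=2, seed=0):
          """Synchronous rounds of push gossip with uniform random peer sampling."""
          random.seed(seed)
          informed = {0}                       # node 0 holds the update initially
          rounds = 0
          while len(informed) < n:
              rounds += 1
              for node in list(informed):      # every informed peer pushes the update
                  for peer in random.sample(range(n), fanout):
                      informed.add(peer)       # duplicate deliveries are simply absorbed
          return rounds

      print("rounds needed to inform all peers:", push_gossip())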

  18. VARIABLE SELECTION FOR QUALITATIVE INTERACTIONS IN PERSONALIZED MEDICINE WHILE CONTROLLING THE FAMILY-WISE ERROR RATE

    PubMed Central

    Gunter, Lacey; Zhu, Ji; Murphy, Susan

    2012-01-01

    For many years, subset analysis has been a popular topic in the biostatistics and clinical trials literature. In more recent years, the discussion has focused on finding subsets of genomes which play a role in the effect of treatment, often referred to as stratified or personalized medicine. Though highly sought after, methods for detecting subsets with altered treatment effects are limited and lacking in power. In this article we discuss variable selection for qualitative interactions with the aim to discover these critical patient subsets. We propose a new technique designed specifically to find these interaction variables among a large set of variables while still controlling for the number of false discoveries. We compare this new method against standard qualitative interaction tests using simulations and give an example of its use on data from a randomized controlled trial for the treatment of depression. PMID:22023676

  19. Large-area fluidic assembly of single-walled carbon nanotubes through dip-coating and directional evaporation

    NASA Astrophysics Data System (ADS)

    Kim, Pilnam; Kang, Tae June

    2017-12-01

    We present a simple and scalable fluidic-assembly approach, in which bundles of single-walled carbon nanotubes (SWCNTs) are selectively aligned and deposited by directionally controlled dip-coating and solvent evaporation processes. The patterned surface with alternating regions of hydrophobic polydimethylsiloxane (PDMS) strips (height ~100 nm) and hydrophilic SiO2 substrate was withdrawn vertically at a constant speed (~3 mm/min) from a solution bath containing SWCNTs (~0.1 mg/ml), allowing for directional evaporation and subsequent selective deposition of nanotube bundles along the edges of horizontally aligned PDMS strips. In addition, the fluidic assembly was applied to fabricate a field effect transistor (FET) with highly oriented SWCNTs, which demonstrates significantly higher current density as well as a high turn-off ratio (T/O ratio ~100) compared to that with randomly distributed carbon nanotube bundles (T/O ratio <10).

  20. Hot rolling and annealing effects on the microstructure and mechanical properties of ODS austenitic steel fabricated by electron beam selective melting

    NASA Astrophysics Data System (ADS)

    Gao, Rui; Ge, Wen-jun; Miao, Shu; Zhang, Tao; Wang, Xian-ping; Fang, Qian-feng

    2016-03-01

    The grain morphology, nano-oxide particles and mechanical properties of oxide dispersion strengthened (ODS)-316L austenitic steel synthesized by the electron beam selective melting (EBSM) technique with different post-working processes were explored in this study. The ODS-316L austenitic steel with superfine nano-sized oxide particles of 30-40 nm exhibits good tensile strength (412 MPa) and large total elongation (about 51%) due to the pinning effect of uniformly distributed oxide particles on dislocations. After hot rolling, the specimen exhibits a higher tensile strength of 482 MPa, but the elongation decreases to 31.8% owing to the introduction of high-density dislocations. The subsequent heat treatment eliminates the grain defects induced by hot rolling and increases the proportion of randomly oriented grains, which further improves the strength and ductility of the EBSM ODS-316L steel.

  1. Bioinformatic Analysis of the Contribution of Primer Sequences to Aptamer Structures

    PubMed Central

    Ellington, Andrew D.

    2009-01-01

    Aptamers are nucleic acid molecules selected in vitro to bind a particular ligand. While numerous experimental studies have examined the sequences, structures, and functions of individual aptamers, considerably fewer studies have applied bioinformatics approaches to try to infer more general principles from these individual studies. We have used a large Aptamer Database to parse the contributions of both random and constant regions to the secondary structures of more than 2000 aptamers. We find that the constant, primer-binding regions do not, in general, contribute significantly to aptamer structures. These results suggest that (a) binding function is not contributed to nor constrained by constant regions; (b) in consequence, the landscape of functional binding sequences is sparse but robust, favoring scenarios for short, functional nucleic acid sequences near origins; and (c) many pool designs for the selection of aptamers are likely to prove robust. PMID:18594898

  2. Biochemical Profile of Heritage and Modern Apple Cultivars and Application of Machine Learning Methods To Predict Usage, Age, and Harvest Season.

    PubMed

    Anastasiadi, Maria; Mohareb, Fady; Redfern, Sally P; Berry, Mark; Simmonds, Monique S J; Terry, Leon A

    2017-07-05

    The present study represents the first major attempt to characterize the biochemical profile in different tissues of a large selection of apple cultivars sourced from the United Kingdom's National Fruit Collection comprising dessert, ornamental, cider, and culinary apples. Furthermore, advanced machine learning methods were applied with the objective to identify whether the phenolic and sugar composition of an apple cultivar could be used as a biomarker fingerprint to differentiate between heritage and mainstream commercial cultivars as well as govern the separation among primary usage groups and harvest season. A prediction accuracy of >90% was achieved with the random forest method for all three models. The results highlighted the extraordinary phytochemical potency and unique profile of some heritage, cider, and ornamental apple cultivars, especially in comparison to more mainstream apple cultivars. Therefore, these findings could guide future cultivar selection on the basis of health-promoting phytochemical content.

  3. Effects of flow sheet implementation on physician performance in the management of asthmatic patients.

    PubMed

    Ruoff, Gary

    2002-01-01

    This project focused on increasing compliance, in a large family practice group, with quality indicators for the management of asthma. The objective was to determine if use of a flow sheet incorporating the Global Initiative for Asthma (GINA) guidelines could improve compliance with those guidelines if the flow sheet was placed in patients' medical records. After review and selection of 14 clinical quality indicators, physicians in the practice implemented a flow sheet as an intervention. These flow sheets were inserted into the records of 122 randomly selected patients with asthma. Medical records were reviewed before the flow sheets were placed in the records, and again approximately 6 months later, to determine if there was a change in compliance with the quality indicators. Improvement of documentation was demonstrated in 13 of the 14 quality indicators. The results indicate that compliance with asthma management quality indicators can improve with the use of a flow sheet.

  4. Meta-structure correlation in protein space unveils different selection rules for folded and intrinsically disordered proteins.

    PubMed

    Naranjo, Yandi; Pons, Miquel; Konrat, Robert

    2012-01-01

    The number of existing protein sequences spans a very small fraction of sequence space. Natural proteins have overcome a strong negative selective pressure to avoid the formation of insoluble aggregates. Stably folded globular proteins and intrinsically disordered proteins (IDPs) use alternative solutions to the aggregation problem. While in globular proteins folding minimizes the access to aggregation prone regions, IDPs on average display large exposed contact areas. Here, we introduce the concept of average meta-structure correlation maps to analyze sequence space. Using this novel conceptual view we show that representative ensembles of folded and ID proteins show distinct characteristics and respond differently to sequence randomization. By studying the way evolutionary constraints act on IDPs to disable a negative function (aggregation) we might gain insight into the mechanisms by which function-enabling information is encoded in IDPs.

  5. Multivitamin/multimineral supplements for cancer prevention: implications for primary care practice.

    PubMed

    Hardy, Mary L; Duvall, Karen

    2015-01-01

    There is a popular belief that multivitamin and mineral (MVM) supplements can help prevent cancer and other chronic diseases. Studies evaluating the effects of MVM supplements on cancer risk have largely been observational, with considerable methodologic limitations, and with conflicting results. We review evidence from the few available randomized, controlled trials that assessed the effects of supplements containing individual vitamins, a combination of a few select vitamins, or complete MVM supplements, with a focus on the recent Physicians' Health Study II (PHS II). PHS II is a landmark trial that followed generally healthy middle-aged and older men (mean age 64 years) who were randomized to daily MVM supplementation for a mean duration of 11 years. Men taking MVMs experienced a statistically significant 8% reduction in incidence of total cancer (hazard ratio [HR]: 0.92; 95% confidence interval [CI]: 0.86-0.998; p = 0.04). Men with a history of cancer derived an even greater benefit: cancer incidence was 27% lower with MVM supplementation versus placebo in this subgroup (HR: 0.73; 95% CI: 0.56-0.96; p = 0.02). Positive results of PHS II contrast with randomized studies of individual vitamins or small combinations of vitamins, which have largely shown a neutral effect, and in some cases, an adverse effect, on cancer risk. The results of PHS II may have a considerable public health impact, potentially translating to prevention of approximately 68 000 cancers per year if all men were to use similar supplements, and to an even greater benefit with regard to secondary prevention of cancer.

  6. A Randomized Controlled Trial of COMPASS Web-Based and Face-to-Face Teacher Coaching in Autism

    PubMed Central

    Ruble, Lisa A.; McGrew, John H.; Toland, Michael D.; Dalrymple, Nancy J.; Jung, Lee Ann

    2013-01-01

    Objective Most children with autism rely on schools as their primary source of intervention, yet research has suggested that teachers rarely use evidence-based practices. To address the need for improved educational outcomes, a previously tested consultation intervention called the Collaborative Model for Promoting Competence and Success (COMPASS; Ruble, Dalrymple, & McGrew, 2010; Ruble, Dalrymple, & McGrew, 2012) was evaluated in a 2nd randomized controlled trial, with the addition of a web-based group. Method Forty-nine teacher–child dyads were randomized into 1 of 3 groups: (1) a placebo control (PBO) group, (2) COMPASS followed by face-to-face (FF) coaching sessions, and (3) COMPASS followed by web-based (WEB) coaching sessions. Three individualized goals (social, communication, and independence skills) were selected for intervention for each child. The primary outcome of independent ratings of child goal attainment and several process measures (e.g., consultant and teacher fidelity) were evaluated. Results Using an intent-to-treat approach, findings replicated earlier results with a very large effect size (d = 1.41) for the FF group and a large effect size (d = 1.12) for the WEB group relative to the PBO group. There were no differences in overall change across goal domains between the FF and WEB groups, suggesting the efficacy of videoconferencing technology. Conclusions COMPASS is effective and results in improved educational outcomes for young children with autism. Videoconferencing technology, as a scalable tool, has promise for facilitating access to autism specialists and bridging the research-to-practice gap. PMID:23438314

  7. Random bit generation at tunable rates using a chaotic semiconductor laser under distributed feedback.

    PubMed

    Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun

    2015-09-01

    A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
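
    The post-processing chain described here, periodic sampling, self-differencing, and keeping a few least-significant bits, is straightforward to express in code. The sketch below substitutes pseudo-random bytes for the digitized chaotic intensity and uses an arbitrary 50-sample delay, so it only illustrates the bit-extraction arithmetic, not the physical entropy source:

      import numpy as np

      rng = np.random.default_rng(7)
      # stand-in for 8-bit ADC samples of the chaotic intensity (real data would go here)
      samples = rng.integers(0, 256, 1_000_000, dtype=np.uint8)

      delayed = np.roll(samples, 50)            # self-differencing with an arbitrary
      diff = samples - delayed                  # 50-sample delay; uint8 wraps mod 256
      lsb5 = diff & 0b11111                     # keep the 5 least-significant bits

      bits = np.unpackbits(lsb5[:, None], axis=1)[:, 3:].ravel()
      print("bits generated:", bits.size, " mean bit value:", round(float(bits.mean()), 4))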

  8. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    PubMed

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Selection and identification of single-nucleotide polymorphisms (SNPs) are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) for identification of genetic variants that have relatively big effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS settings when selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates the SNPs into informative and irrelevant groups. The informative group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account, so the feature subspace used to split a node of a tree always contains highly informative SNPs. This approach enables one to generate more accurate trees with a lower prediction error, while possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the disease, and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets demonstrated that the proposed model significantly reduced prediction errors and outperformed most existing state-of-the-art random forest methods. The top 25 SNPs identified by the proposed model in the Parkinson data set included four interesting genes associated with neurological disorders. The presented approach has been shown to be effective in selecting informative sub-groups of SNPs potentially associated with disease that traditional statistical approaches might fail to detect. The new RF works well for data where the number of case-control subjects is much smaller than the number of SNPs, a typical problem in gene data and GWAS. The experimental results demonstrate the effectiveness of the proposed RF model, which outperformed state-of-the-art RFs including Breiman's RF, GRRF and wsRF.
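
    The two-stage idea, a p-value cut-off separating informative from irrelevant SNPs followed by subspace sampling restricted to the informative groups, can be sketched generically. The code below is our illustration on synthetic genotypes, not the ts-RF implementation; the 0.05 and 0.01 cut-offs, the group mix, and the toy phenotype model are all assumptions.

      import numpy as np
      from scipy.stats import chi2_contingency

      rng = np.random.default_rng(0)
      n, p = 200, 1000
      X = rng.integers(0, 3, (n, p))                  # toy SNP genotypes coded 0/1/2
      y = (X[:, :10].sum(1) + rng.normal(0, 2, n) > 10).astype(int)  # toy phenotype

      # stage 1: univariate p-value per SNP; cut-offs split the SNPs into groups
      pvals = np.array([chi2_contingency(
          np.histogram2d(X[:, j], y, bins=(3, 2))[0] + 1)[1] for j in range(p)])
      informative = np.where(pvals < 0.05)[0]
      strong = informative[pvals[informative] < 0.01]  # highly informative SNPs
      weak = np.setdiff1d(informative, strong)         # weakly informative SNPs

      # stage 2: each tree's feature subspace mixes only the two informative groups
      def sample_subspace(mtry=16, frac_strong=0.5):
          k = min(int(mtry * frac_strong), strong.size)
          m = min(mtry - k, weak.size)
          return np.concatenate([rng.choice(strong, k, replace=False),
                                 rng.choice(weak, m, replace=False)])

      print("example tree subspace:", sorted(sample_subspace().tolist()))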

  9. Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit

    2010-10-01

    The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness subject to a predefined maximum risk tolerance and minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.

  10. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    NASA Astrophysics Data System (ADS)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
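
    A generic version of this pipeline, rank attributes with a random forest, keep the top ones, and feed them to a neural network, fits in a few lines of scikit-learn. The sketch below uses synthetic regression data in place of the Tehran measurements; the number of retained features and the network size are arbitrary choices:

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      # synthetic stand-in for meteorological/traffic predictors and a pollutant target
      X, y = make_regression(n_samples=600, n_features=30, n_informative=8,
                             noise=10.0, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
      top = np.argsort(rf.feature_importances_)[-8:]   # RF-selected attributes

      mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0).fit(Xtr[:, top], ytr)
      print("R^2 with RF-selected features:",
            round(r2_score(yte, mlp.predict(Xte[:, top])), 3))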

  11. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    PubMed Central

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307

  12. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    PubMed

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods that are variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF) were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models.

  13. Unconventional therapies in asthma: an overview.

    PubMed

    Lewith, G T; Watkins, A D

    1996-11-01

    Acupuncture, homoeopathy, mind-body therapies, and nutritional, herbal, and environmental medicine have all been used in the management of patients with asthma. This paper reviews the evidence base for the use of these unconventional or complementary therapies. Although there is a paucity of large randomized, controlled trials in this area, there is sufficient evidence to suggest that many of these therapies can produce objective and subjective benefit in selected groups of patients. In view of the increasing popularity of complementary medicine among patients and general practitioners, there is now an urgent need for high-quality research to determine how, or whether, these therapies may be interwoven with the more orthodox treatments currently available.

  14. ATM QoS Experiments Using TCP Applications: Performance of TCP/IP Over ATM in a Variety of Errored Links

    NASA Technical Reports Server (NTRS)

    Frantz, Brian D.; Ivancic, William D.

    2001-01-01

    Asynchronous Transfer Mode (ATM) Quality of Service (QoS) experiments using the Transmission Control Protocol/Internet Protocol (TCP/IP) were performed for various link delays. The link delay was set to emulate a Wide Area Network (WAN) and a satellite link. The purpose of these experiments was to evaluate the ATM QoS requirements for applications that utilize advanced TCP/IP protocols implemented with large windows and Selective ACKnowledgements (SACK). The effects of cell error, cell loss, and random bit errors on throughput were reported. The detailed test plan and test results are presented herein.

  15. Spatial inventory integrating raster databases and point sample data. [Geographic Information System for timber inventory

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Woodcock, C. E.; Logan, T. L.

    1983-01-01

    A timber inventory of the Eldorado National Forest, located in east-central California, provides an example of the use of a Geographic Information System (GIS) to stratify large areas of land for sampling and the collection of statistical data. The raster-based GIS format of the VICAR/IBIS software system allows simple and rapid tabulation of areas, and facilitates the selection of random locations for ground sampling. Algorithms that simplify the complex spatial pattern of raster-based information, and convert raster format data to strings of coordinate vectors, provide a link to conventional vector-based geographic information systems.

  16. Ethnic Differences in Elders' Home Remedy Use: Sociostructural Explanations

    PubMed Central

    Grzywacz, Joseph G.; Arcury, Thomas A.; Bell, Ronny A.; Lang, Wei; Suerken, Cynthia K.; Smith, Shannon L.; Quandt, Sara A.

    2006-01-01

    Objective: To determine if ethnic differences in elders' use of home remedies are explained by structured inequalities. Method: Dichotomous indicators of “food” and “other” home remedies were obtained from a randomly selected cohort of older adults with diabetes (N=701). Analyses evaluated if differences in availability of care, economic hardship, and health status explained ethnic differences in home remedy use. Results: Differences in residential location, discretionary money, and health partially explained greater home remedy use among Black and Native American elders relative to whites. Conclusions: Ethnic differences in elders' use of home remedies are not largely attributed to socially structured inequalities. PMID:16430319

  17. The role of drop velocity in statistical spray description

    NASA Technical Reports Server (NTRS)

    Groeneweg, J. F.; El-Wakil, M. M.; Myers, P. S.; Uyehara, O. A.

    1978-01-01

    The justification for describing a spray by treating drop velocity as a random variable on an equal statistical basis with drop size was studied experimentally. A double exposure technique using fluorescent drop photography was used to make size and velocity measurements at selected locations in a steady ethanol spray formed by a swirl atomizer. The size velocity data were categorized to construct bivariate spray density functions to describe the spray immediately after formation and during downstream propagation. Bimodal density functions were formed by environmental interaction during downstream propagation. Large differences were also found between spatial mass density and mass flux size distribution at the same location.

  18. Patterns of family caregiving and support provided to older psychiatric patients in long-term care.

    PubMed

    Beeler, J; Rosenthal, A; Cohler, B

    1999-09-01

    Data on patterns of relationships and caregiving between older, institutionalized chronically mentally ill patients and their families were gathered in brief face-to-face interviews with 109 patients randomly selected from residents age 45 or older in a large intermediate care facility in Chicago. Three-fourths of the sample maintained some form of family contact. One-third had been married or had children. Siblings were the most frequently identified family contact and support. The results suggest that older, institutionalized psychiatric patients continue to have family contact and that siblings and offspring become increasingly important as patients age.

  19. Auger electron intensity variations in oxygen-exposed large grain polycrystalline silver

    NASA Technical Reports Server (NTRS)

    Lee, W. S.; Outlaw, R. A.; Hoflund, G. B.; Davidson, M. R.

    1989-01-01

    Auger electron spectroscopic studies of the grains in oxygen-charged polycrystalline silver show significant intensity variations as a function of crystallographic orientation. These intensity variations were observed by studies of the Auger images and line scans of the different grains (randomly selected) for each silver transition energy. The results can be attributed to the diffraction of the ejected Auger electrons and interpreted by corresponding changes in the electron mean-free path for inelastic scattering and by oxygen atom accumulation in the subsurface. The subsurface (second layer) octahedral sites increased in size because of surface relaxation and serve as a stable reservoir for the dissolved oxygen.

  20. [Pay attention to the complexity of cataract surgery of no vitreous eyes].

    PubMed

    Bao, Y Z

    2017-04-11

    With the widespread performance of pars plana vitrectomy, cataract surgery in vitrectomized (vitreous-free) eyes is becoming increasingly common. Such surgery differs greatly between individuals, and randomized large-sample clinical trials are lacking, so surgical strategy decisions rely largely on the surgeon's personal experience. We should be fully aware of both the individual and the common characteristics of cataract surgery in vitrectomized eyes. The timing of surgery should be decided carefully. Complete ocular examination and evaluation, careful design of the cataract surgical procedure, and appropriate intraocular lens selection are needed. Close attention must be paid to cataract surgery in vitrectomized eyes. (Chin J Ophthalmol, 2017, 53: 241-243).

  1. The Effects of Social Capital Levels in Elementary Schools on Organizational Information Sharing

    ERIC Educational Resources Information Center

    Ekinci, Abdurrahman

    2012-01-01

    This study aims to assess the effects of social capital levels at elementary schools on organizational information sharing as reported by teachers. Participants were 267 teachers selected randomly from 16 elementary schools; the schools were themselves selected randomly from among 42 elementary schools located in the city center of Batman. The data were analyzed by…

  2. The Selection and Prevalence of Natural and Fortified Calcium Food Sources in the Diets of Adolescent Girls

    ERIC Educational Resources Information Center

    Rafferty, Karen; Watson, Patrice; Lappe, Joan M.

    2011-01-01

    Objective: To assess the impact of calcium-fortified food and dairy food on selected nutrient intakes in the diets of adolescent girls. Design: Randomized controlled trial, secondary analysis. Setting and Participants: Adolescent girls (n = 149) from a midwestern metropolitan area participated in randomized controlled trials of bone physiology…

  3. A National Survey of Chief Student Personnel Officers at Randomly Selected Institutions of Postsecondary Education in the United States.

    ERIC Educational Resources Information Center

    Thomas, Henry B.; Kaplan, E. Joseph

    A national survey was conducted of randomly selected chief student personnel officers as listed in the 1979 "Education Directory of Colleges and Universities." The survey addressed specific institutional demographics, policy-making authority, reporting structure, and areas of responsibility of the administrators. Over 93 percent of the respondents…

  4. Nonmanufacturing Businesses. U.S. Metric Study Interim Report.

    ERIC Educational Resources Information Center

    Cornog, June R.; Bunten, Elaine D.

    This fifth interim report on the feasibility of a United States changeover to a metric system stems from the U.S. Metric Study. A primary stratified sample of 2,828 nonmanufacturing firms was randomly selected from 28,184 businesses taken from Social Security files, and a secondary sample of 2,258 firms was randomly selected for replacement…

  5. TOC: Table of Contents Practices of Primary Journals--Recommendations for Monolingual, Multilingual and International Journals.

    ERIC Educational Resources Information Center

    Juhasz, Stephen; And Others

    Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected, and the method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed; its purpose was to find uniformity and…

  6. Molecular selection in a unified evolutionary sequence

    NASA Technical Reports Server (NTRS)

    Fox, S. W.

    1986-01-01

    With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.

  7. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  8. The Relationship between Teachers Commitment and Female Students Academic Achievements in Some Selected Secondary School in Wolaita Zone, Southern Ethiopia

    ERIC Educational Resources Information Center

    Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin

    2017-01-01

    The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita Zone, Southern Ethiopia. The research method employed was a survey study, and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…

  9. The Social Security Administration's Youth Transition Demonstration Projects: Implementation Lessons from the Original Projects

    ERIC Educational Resources Information Center

    Martinez, John; Fraker, Thomas; Manno, Michelle; Baird, Peter; Mamun, Arif; O'Day, Bonnie; Rangarajan, Anu; Wittenburg, David

    2010-01-01

    This report focuses on the seven original Youth Transition Demonstration (YTD) projects selected for funding in 2003. Three of the original seven projects were selected for a national random assignment evaluation in 2005; however, this report only focuses on program operations prior to joining the random assignment evaluation for the three…

  10. Prediction of aquatic toxicity mode of action using linear discriminant and random forest models.

    PubMed

    Martin, Todd M; Grulke, Christopher M; Young, Douglas M; Russom, Christine L; Wang, Nina Y; Jackson, Crystal R; Barron, Mace G

    2013-09-23

    The ability to determine the mode of action (MOA) for a diverse group of chemicals is a critical part of ecological risk assessment and chemical regulation. However, existing MOA assignment approaches in ecotoxicology have been limited to relatively few MOAs, have high uncertainty, or rely on professional judgment. In this study, machine learning algorithms (linear discriminant analysis and random forest) were used to develop models for assigning aquatic toxicity MOA. These methods were selected because they have been shown to be able to correlate diverse data sets and provide an indication of the most important descriptors. A data set of MOA assignments for 924 chemicals was developed using a combination of high-confidence assignments, international consensus classifications, ASTER (ASsessment Tools for the Evaluation of Risk) predictions, and weight-of-evidence professional judgment based on an assessment of structure and literature information. The overall data set was randomly divided into a training set (75%) and a validation set (25%) and then used to develop linear discriminant analysis (LDA) and random forest (RF) MOA assignment models. The LDA and RF models had high internal concordance and specificity and produced overall prediction accuracies ranging from 84.5 to 87.7% for the validation set. These results demonstrate that computational chemistry approaches can be used to determine acute toxicity MOAs across a large range of structures and mechanisms.
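
    The modeling protocol, a 75/25 random split followed by fitting the two classifiers and comparing validation accuracy, is easy to mirror in scikit-learn. The sketch below runs on synthetic descriptors rather than the curated 924-chemical data set; the class and feature counts are only loosely modeled on the study:

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      # synthetic stand-in: molecular descriptors X, mode-of-action labels y (6 classes)
      X, y = make_classification(n_samples=924, n_features=40, n_informative=15,
                                 n_classes=6, n_clusters_per_class=1, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

      for model in (LinearDiscriminantAnalysis(),
                    RandomForestClassifier(n_estimators=500, random_state=0)):
          acc = model.fit(Xtr, ytr).score(Xte, yte)   # accuracy on the 25% validation set
          print(type(model).__name__, "validation accuracy:", round(acc, 3))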

  11. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR)

    PubMed Central

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-01-01

    Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100

  12. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    PubMed

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.

  13. A comparison of two sampling designs for fish assemblage assessment in a large river

    USGS Publications Warehouse

    Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel

    2014-01-01

    We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, the stratified random design is preferable to the fixed-station design because it has less potential for bias, whether from varying sampling effort (as occurred in the fall 2011 fixed-station sample) or from purposeful site selection.
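    To make the stratified random design concrete, here is a minimal sketch (not the study's actual procedure) that places non-overlapping 500-m transects at random within each stratum; the stratum names and shoreline lengths are invented for illustration.

```python
import random

TRANSECT_M = 500  # transect length used in the stratified design

def draw_transects(stratum_length_m, n_transects, seed=None):
    """Randomly place non-overlapping 500-m transect start points."""
    rng = random.Random(seed)
    starts = []
    while len(starts) < n_transects:
        s = rng.uniform(0, stratum_length_m - TRANSECT_M)
        if all(abs(s - t) >= TRANSECT_M for t in starts):  # reject overlaps
            starts.append(s)
    return sorted(starts)

# Hypothetical strata with shoreline lengths in meters.
strata = {"upper": 12000, "middle": 18000, "lower": 9000}
sample = {name: draw_transects(length, n_transects=3, seed=1)
          for name, length in strata.items()}
print(sample)
```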

  14. Random forest feature selection, fusion and ensemble strategy: Combining multiple morphological MRI measures to discriminate among healthy elderly, MCI, cMCI and Alzheimer's disease patients: From the Alzheimer's disease neuroimaging initiative (ADNI) database.

    PubMed

    Dimitriadis, S I; Liparas, Dimitris; Tsolaki, Magda N

    2018-05-15

    In the era of computer-assisted diagnostic tools for various brain diseases, Alzheimer's disease (AD) covers a large percentage of neuroimaging research, with the main scope being its use in daily practice. However, there has been no study attempting to simultaneously discriminate among Healthy Controls (HC), early mild cognitive impairment (MCI), late MCI (cMCI) and stable AD, using features derived from a single modality, namely MRI. Based on preprocessed MRI images from the organizers of a neuroimaging challenge, we attempted to quantify the prediction accuracy of multiple morphological MRI features to simultaneously discriminate among HC, MCI, cMCI and AD. We explored the efficacy of a novel scheme that includes multiple feature selections via Random Forest from subsets of the whole set of features (e.g. whole set, left/right hemisphere etc.), Random Forest classification using a fusion approach and ensemble classification via majority voting. From the ADNI database, 60 HC, 60 MCI, 60 cMCI and 60 AD were used as a training set with known labels. An extra dataset of 160 subjects (HC: 40, MCI: 40, cMCI: 40 and AD: 40) was used as an external blind validation dataset to evaluate the proposed machine learning scheme. On this external blind dataset, we achieved a four-class classification accuracy of 61.9% by combining MRI-based features with a Random Forest-based Ensemble Strategy. We achieved the best classification accuracy of all teams that participated in this neuroimaging competition. The results demonstrate the effectiveness of the proposed scheme to simultaneously discriminate among four groups using morphological MRI features for the very first time in the literature. Hence, the proposed machine learning scheme can be used to define single and multi-modal biomarkers for AD. Copyright © 2017 Elsevier B.V. All rights reserved.
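    A minimal sketch of the general scheme described here (one Random Forest per feature subset, fused by majority voting), using synthetic data and placeholder feature subsets rather than the authors' MRI features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the morphological MRI features; the subsets below
# are placeholders for e.g. the whole set and left/right hemispheres.
X, y = make_classification(n_samples=240, n_features=60, n_informative=20,
                           n_classes=4, random_state=0)
subsets = [slice(0, 60), slice(0, 30), slice(30, 60)]

# One Random Forest per feature subset.
forests = [RandomForestClassifier(n_estimators=200, random_state=i).fit(X[:, s], y)
           for i, s in enumerate(subsets)]

# Ensemble step: majority vote across the per-subset predictions.
X_new = X[:10]
votes = np.stack([f.predict(X_new[:, s]) for f, s in zip(forests, subsets)])
consensus = np.array([np.bincount(col).argmax() for col in votes.T])
print(consensus)
```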

  15. A principle of organization which facilitates broad Lamarckian-like adaptations by improvisation.

    PubMed

    Soen, Yoav; Knafo, Maor; Elgart, Michael

    2015-12-02

    During the lifetime of an organism, every individual encounters many combinations of diverse changes in the somatic genome, epigenome and microbiome. This gives rise to many novel combinations of internal failures which are unique to each individual. How any individual can tolerate this high load of new, individual-specific scenarios of failure is not clear. While stress-induced plasticity and hidden variation have been proposed as potential mechanisms of tolerance, the main conceptual problem remains unaddressed, namely: how largely non-beneficial random variation can be rapidly and safely organized into net benefits to every individual. We propose an organizational principle which explains how every individual can alleviate a high load of novel stressful scenarios using many random variations in flexible and inherently less harmful traits. Random changes which happen to reduce stress benefit the organism and decrease the drive for additional changes. This adaptation (termed 'Adaptive Improvisation') can be further enhanced, propagated, stabilized and memorized when beneficial changes reinforce themselves by auto-regulatory mechanisms. This principle implicates stress not only in driving diverse variations in cells, tissues and organs, but also in organizing these variations into adaptive outcomes. Specific (but not exclusive) examples include stress reduction by rapid exchange of mobile genetic elements (or exosomes) in unicellular organisms, and rapid changes in the symbiotic microorganisms of animals. In all cases, adaptive changes can be transmitted across generations, allowing rapid improvement and assimilation in a few generations. We provide testable predictions derived from the hypothesis. The hypothesis raises a critical, but thus far overlooked adaptation problem and explains how random variation can self-organize to confer a wide range of individual-specific adaptations beyond the existing outcomes of natural selection. It portrays gene regulation as an inseparable synergy between natural selection and adaptation by improvisation. The latter provides a basis for Lamarckian adaptation that is not limited to a specific mechanism and readily accounts for the remarkable resistance of tumors to treatment.

  16. Comparison of the safety and immunogenicity of live attenuated and inactivated hepatitis A vaccine in healthy Chinese children aged 18 months to 16 years: results from a randomized, parallel controlled, phase IV study.

    PubMed

    Ma, F; Yang, J; Kang, G; Sun, Q; Lu, P; Zhao, Y; Wang, Z; Luo, J; Wang, Z

    2016-09-01

    For large-scale immunization of children with hepatitis A (HA) vaccines in China, accurately designed studies comparing the safety and immunogenicity of the live attenuated HA vaccine (HA-L) and inactivated HA vaccine (HA-I) are necessary. A randomized, parallel controlled, phase IV clinical trial was conducted with 6000 healthy children aged 18 months to 16 years. HA-L or HA-I was administered at a 1:1 ratio to randomly selected participants. The safety and immunogenicity were evaluated. Both HA-L and HA-I were well tolerated by all participants. The immunogenicity results showed that the seroconversion rates (HA-L versus HA-I: 98.0% versus 100%, respectively, p >0.05) and geometric mean concentrations in participants negative for IgG antibodies against HA virus (anti-HAV IgG) before vaccination did not differ significantly between the two types of vaccines (HA-L versus HA-I first dose: 898.9 versus 886.2 mIU/mL, respectively, p >0.05). After administration of the booster dose of HA-I, the geometric mean concentration of anti-HAV IgG (2591.2 mIU/mL) was higher than that after the first dose (p <0.05) and that reported in participants administered HA-L (p <0.05). Additionally, 12 (25%) of the 48 randomly selected participants who received HA-L tested positive for HA antigen in stool samples. Hence, both HA-L and HA-I could provide acceptable immunogenicity in children. The effects of long-term immunogenicity after natural exposure to wild-type HA virus and the possibility of mutational shifts of the live vaccine virus in the field need to be studied in more detail. Copyright © 2016. Published by Elsevier Ltd.

  17. Genetic analyses of protein yield in dairy cows applying random regression models with time-dependent and temperature x humidity-dependent covariates.

    PubMed

    Brügemann, K; Gernand, E; von Borstel, U U; König, S

    2011-08-01

    Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (days in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
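    For readers unfamiliar with random regression covariates, the sketch below builds the order-3 Legendre design columns for DIM and THI after mapping each to [-1, 1]; the DIM and THI ranges are assumptions for illustration, not the paper's exact bounds.

```python
import numpy as np

def legendre_covariates(x, x_min, x_max, degree=3):
    """Map x to [-1, 1] and return the Legendre design matrix P0..P3."""
    z = 2.0 * (np.asarray(x, float) - x_min) / (x_max - x_min) - 1.0
    return np.polynomial.legendre.legvander(z, degree)

dim = np.array([5, 50, 150, 305])   # days in milk (assumed range 5-305)
thi = np.array([35, 55, 60, 78])    # temperature-humidity index (assumed 30-80)
Z = np.hstack([legendre_covariates(dim, 5, 305),
               legendre_covariates(thi, 30, 80)])
print(Z.shape)  # (4, 8): four records, two sets of P0..P3 covariates
```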

  18. Does sex induce a phase transition?

    NASA Astrophysics Data System (ADS)

    de Oliveira, P. M. C.; Moss de Oliveira, S.; Stauffer, D.; Cebrat, S.; Pękalski, A.

    2008-05-01

    We discovered a dynamic phase transition induced by sexual reproduction. The dynamics is a pure Darwinian rule applied to diploid bit-strings with both fundamental ingredients to drive Darwin's evolution: (1) random mutations and crossings which act in the sense of increasing the entropy (or diversity); and (2) selection which acts in the opposite sense by limiting the entropy explosion. Selection wins this competition if mutations performed at birth are few enough, and thus the wild genotype dominates the steady-state population. By slowly increasing the average number m of mutations, however, the population suddenly undergoes a mutational degradation precisely at a transition point m_c. Above this point, the “bad” alleles (represented by 1-bits) spread over the genetic pool of the population, overcoming the selection pressure. Individuals become selectively alike, and evolution stops. Only below this point, m < m_c, evolutionary life is possible. The finite-size-scaling behaviour of this transition is exhibited for large enough “chromosome” lengths L, through lengthy computer simulations. One important and surprising observation is the L-independence of the transition curves, for large L. They are also independent of the population size. Another is that m_c is near unity, i.e. life cannot be stable with much more than one mutation per diploid genome, independent of the chromosome length, in agreement with reality. One possible consequence is that an eventual evolutionary jump towards larger L enabling the storage of more genetic information would demand an improved DNA copying machinery in order to keep the same total number of mutations per offspring.
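    The flavor of such simulations can be reproduced in a few lines. The sketch below is a deliberately simplified haploid, asexual variant with an assumed selection coefficient per deleterious 1-bit; the paper's model is diploid, with mutation and crossing at birth.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, GENERATIONS = 64, 500, 200   # genome length, population size, time

def mean_load(m, s=0.05):
    """Evolve with ~m mutations per offspring; return mean count of 1-bits."""
    pop = np.zeros((N, L), dtype=np.uint8)          # start at the wild type
    for _ in range(GENERATIONS):
        fitness = (1.0 - s) ** pop.sum(axis=1)      # multiplicative selection
        parents = rng.choice(N, size=N, p=fitness / fitness.sum())
        pop = pop[parents]
        pop ^= (rng.random((N, L)) < m / L)         # random mutations at birth
    return pop.sum(axis=1).mean()

for m in (0.5, 1.0, 2.0):
    print(m, mean_load(m))   # the load of "bad" alleles rises sharply near m_c
```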

  19. BCH codes for large IC random-access memory systems

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.

    1983-01-01

    In this report some shortened BCH codes for possible applications to large IC random-access memory systems are presented. These codes are given by their parity-check matrices. Encoding and decoding of these codes are discussed.
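    As a generic illustration of how a memory system uses a parity-check matrix (with the small Hamming(7,4) matrix standing in for the report's shortened BCH codes), decoding reduces to a syndrome computation:

```python
import numpy as np

# A stored word r is error-free iff its syndrome H r mod 2 is zero; for a
# single-error-correcting code, a nonzero syndrome identifies the bad bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
r = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword (H r mod 2 == 0)
r_err = r.copy()
r_err[4] ^= 1                          # flip one memory bit
syndrome = H @ r_err % 2
print(syndrome)  # [1 0 1] reads as 5 (LSB first): the error is at index 4
```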

  20. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    NASA Astrophysics Data System (ADS)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
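    A minimal sketch of the correlation-based selection step described here; random weights stand in for trained sparse-autoencoder filters, and the 0.9 threshold is an assumption.

```python
import numpy as np

def select_uncorrelated_units(W, threshold=0.9):
    """Keep hidden units whose weight vectors are not highly correlated
    with any unit already kept (a greedy redundancy filter)."""
    kept = []
    for i, w in enumerate(W):
        corrs = [abs(np.corrcoef(w, W[j])[0, 1]) for j in kept]
        if all(c < threshold for c in corrs):
            kept.append(i)
    return kept

rng = np.random.default_rng(42)
W = rng.standard_normal((100, 8 * 8 * 3))   # 100 units, 8x8 RGB patches
print(len(select_uncorrelated_units(W)))    # number of retained filters
```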

  1. Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data.

    PubMed

    Sariyar, Murat; Hoffmann, Isabell; Binder, Harald

    2014-02-26

    Molecular data, e.g. arising from microarray technology, is often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful, when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
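    The random-forest building block for pre-selecting two-way interactions can be sketched as below; boosting, resampling, orthogonalization and the time-to-event outcome are all omitted, with a continuous response used instead.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
y = X[:, 0] * X[:, 1] + 0.5 * rng.standard_normal(n)  # true interaction (0, 1)

# Rank all pairwise product terms by random-forest importance.
pairs = list(combinations(range(p), 2))
Z = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(Z, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print([pairs[i] for i in top])   # (0, 1) should rank near the top
```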

  2. Phenotyping: Using Machine Learning for Improved Pairwise Genotype Classification Based on Root Traits

    PubMed Central

    Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris

    2016-01-01

    Phenotyping local crop cultivars is becoming more and more important, as they are an important genetic source for breeding – especially in regard to inherent root system architectures. Machine learning algorithms are promising tools to assist in the analysis of complex data sets; novel approaches are needed to apply them to root phenotyping data of mature plants. A greenhouse experiment was conducted in large, sand-filled columns to differentiate 16 European Pisum sativum cultivars based on 36 manually derived root traits. Through combining random forest and support vector machine models, machine learning algorithms were successfully used for unbiased identification of most distinguishing root traits and subsequent pairwise cultivar differentiation. Up to 86% of pea cultivar pairs could be distinguished based on the top five important root traits (Timp5) – Timp5 differed widely between cultivar pairs. Selecting top important root traits (Timp) provided a significantly improved classification compared to using all available traits or randomly selected trait sets. The most frequent Timp of mature pea cultivars was total surface area of lateral roots originating from tap root segments at 0–5 cm depth. The high classification rate implies that culturing did not lead to a major loss of variability in root system architecture in the studied pea cultivars. Our results illustrate the potential of machine learning approaches for unbiased (root) trait selection and cultivar classification based on rather small, complex phenotypic data sets derived from pot experiments. Powerful statistical approaches are essential to make use of the increasing amount of (root) phenotyping information, integrating the complex trait sets describing crop cultivars. PMID:27999587

  3. The Origins of Order: Self-Organization and Selection in Evolution

    NASA Astrophysics Data System (ADS)

    Kauffman, Stuart A.

    The following sections are included: * Introduction * Fitness Landscapes in Sequence Space * The NK Model of Rugged Fitness Landscapes * The NK Model of Random Epistatic Interactions * The Rank Order Statistics on K = N - 1 Random Landscapes * The number of local optima is very large * The expected fraction of fitter 1-mutant neighbors dwindles by 1/2 on each improvement step * Walks to local optima are short and vary as a logarithmic function of N * The expected time to reach an optimum is proportional to the dimensionality of the space * The ratio of accepted to tried mutations scales as lnN/N * Any genotype can only climb to a small fraction of the local optima * A small fraction of the genotypes can climb to any one optimum * Conflicting constraints cause a "complexity catastrophe": as complexity increases, accessible adaptive peaks fall toward the mean fitness * The "Tunable" NK Family of Correlated Landscapes * Other Combinatorial Optimization Problems and Their Landscapes * Summary * References
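    Several of the listed rank-order properties are easy to probe numerically. Below is a toy adaptive walk on the K = N - 1 landscape; in this fully random limit, genotype fitnesses are uncorrelated, so one memoized random fitness per genotype suffices.

```python
import random

random.seed(0)
N = 12
fitness_of = {}                      # memoized random fitness per genotype

def fitness(g):
    if g not in fitness_of:
        fitness_of[g] = random.random()
    return fitness_of[g]

g = tuple(random.randint(0, 1) for _ in range(N))
steps = 0
while True:
    neighbors = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(N)]
    better = [n for n in neighbors if fitness(n) > fitness(g)]
    if not better:                   # local optimum reached
        break
    g = random.choice(better)        # accept a random fitter 1-mutant
    steps += 1
print(steps)                          # walks are short, scaling roughly as ln(N)
```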

  4. Physical states and finite-size effects in Kitaev's honeycomb model: Bond disorder, spin excitations, and NMR line shape

    NASA Astrophysics Data System (ADS)

    Zschocke, Fabian; Vojta, Matthias

    2015-07-01

    Kitaev's compass model on the honeycomb lattice realizes a spin liquid whose emergent excitations are dispersive Majorana fermions and static Z2 gauge fluxes. We discuss the proper selection of physical states for finite-size simulations in the Majorana representation, based on a recent paper by F. L. Pedrocchi, S. Chesi, and D. Loss [Phys. Rev. B 84, 165414 (2011), 10.1103/PhysRevB.84.165414]. Certain physical observables acquire large finite-size effects, in particular if the ground state is not fermion-free, which we prove to generally apply to the system in the gapless phase and with periodic boundary conditions. To illustrate our findings, we compute the static and dynamic spin susceptibilities for finite-size systems. Specifically, we consider random-bond disorder (which preserves the solubility of the model), calculate the distribution of local flux gaps, and extract the NMR line shape. We also predict a transition to a random-flux state with increasing disorder.

  5. Columnar organization of orientation domains in V1

    NASA Astrophysics Data System (ADS)

    Liedtke, Joscha; Wolf, Fred

    In the primary visual cortex (V1) of primates and carnivores, the functional architecture of basic stimulus selectivities appears similar across cortical layers (Hubel & Wiesel, 1962) justifying the use of two-dimensional cortical models and disregarding organization in the third dimension. Here we show theoretically that already small deviations from an exact columnar organization lead to non-trivial three-dimensional functional structures. We extend two-dimensional random field models (Schnabel et al., 2007) to a three-dimensional cortex by keeping a typical scale in each layer and introducing a correlation length in the third, columnar dimension. We examine in detail the three-dimensional functional architecture for different cortical geometries with different columnar correlation lengths. We find that (i) topological defect lines are generally curved and (ii) for large cortical curvatures closed loops and reconnecting topological defect lines appear. This theory extends the class of random field models by introducing a columnar dimension and provides a systematic statistical assessment of the three-dimensional functional architecture of V1 (see also (Tanaka et al., 2011)).

  6. Hints of correlation between broad-line and radio variations for 3C 120

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H. T.; Bai, J. M.; Li, S. K.

    2014-01-01

    In this paper, we investigate the correlation between broad-line and radio variations for the broad-line radio galaxy 3C 120. By the z-transformed discrete correlation function method and the model-independent flux randomization/random subset selection (FR/RSS) Monte Carlo method, we find that broad Hβ line variations lead the 15 GHz variations. The FR/RSS method shows that the Hβ line variations lead the radio variations by τ_ob = 0.34 ± 0.01 yr. This time lag can be used to locate the position of the emitting region of radio outbursts in the jet, on the order of ∼5 lt-yr from the central engine. This distance is much larger than the size of the broad-line region. The large separation of the radio outburst emitting region from the broad-line region will observably influence the gamma-ray emission in 3C 120.
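    A schematic, much-simplified version of the FR/RSS lag estimate, with synthetic sinusoids standing in for the Hβ and 15 GHz light curves (the real analysis uses irregular sampling and measured flux errors):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 100.0, 1.0)
line = np.sin(t / 10)                      # stand-in "Hbeta" light curve
radio = np.sin((t - 5) / 10)               # same signal delayed by 5 units
err = 0.05 * np.ones_like(t)

def peak_lag(t, a, b, lags):
    """Lag by which b trails a at the cross-correlation peak."""
    ccf = [np.corrcoef(a, np.interp(t + lag, t, b))[0, 1] for lag in lags]
    return lags[int(np.argmax(ccf))]

lags = np.arange(-20.0, 20.0, 0.5)
samples = []
for _ in range(500):
    idx = np.unique(rng.integers(0, t.size, t.size))   # random subset selection
    a = line[idx] + rng.normal(0.0, err[idx])          # flux randomization
    b = radio[idx] + rng.normal(0.0, err[idx])
    samples.append(peak_lag(t[idx], a, b, lags))
print(np.median(samples), np.std(samples))   # lag estimate and its spread
```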

  7. Reducing DNA context dependence in bacterial promoters

    PubMed Central

    Carr, Swati B.; Densmore, Douglas M.

    2017-01-01

    Variation in the DNA sequence upstream of bacterial promoters is known to affect the expression levels of the products they regulate, sometimes dramatically. While neutral synthetic insulator sequences have been found to buffer promoters from upstream DNA context, there are no established methods for designing effective insulator sequences with predictable effects on expression levels. We address this problem with Degenerate Insulation Screening (DIS), a novel method based on a randomized 36-nucleotide insulator library and a simple, high-throughput, flow-cytometry-based screen that randomly samples from a library of 4^36 potential insulated promoters. The results of this screen can then be compared against a reference uninsulated device to select a set of insulated promoters providing a precise level of expression. We verify this method by insulating the constitutive, inducible, and repressible promoters of a four-transcriptional-unit inverter (NOT-gate) circuit, finding both that order dependence is largely eliminated by insulation and that circuit performance is also significantly improved, with a 5.8-fold mean improvement in on/off ratio. PMID:28422998

  8. The socioeconomic health gradient across the life cycle: what role for selective mortality and institutionalization?

    PubMed Central

    Baeten, Steef; Van Ourti, Tom; van Doorslaer, Eddy

    2013-01-01

    Several studies have documented the now fairly stylized fact that health inequalities by income differ across the age distribution: in cross-sections the health gap between rich and poor tends to widen until about age 50 and then declines at higher ages. It has been suggested that selective mortality and institutionalization could be important factors driving the convergence at higher ages. We use eight waves of a health survey linked to four registries (on mortality, hospitalizations, (municipal) residence status and taxable incomes) to test this hypothesis. We construct life cycle profiles of health for birth year/gender/income groups from the health surveys (based on 128,689 observations) and exploit the registries to obtain precise estimates of individual probabilities of mortality and institutionalization using a seven year observation period for 2,521,122 individuals. We generate selection corrected health profiles using an inverse probability weighting procedure and find that attrition is indeed not random: older, poorer and unhealthier individuals are significantly more likely not to survive the next year and to be admitted to an institution. While these selection effects are very significant, they are not very large. We therefore reject the hypothesis that selective dropout is an important determinant of the differential health trajectories by income over the life course in the Netherlands. PMID:24161090

  9. Application of stochastic processes in random growth and evolutionary dynamics

    NASA Astrophysics Data System (ADS)

    Oikonomou, Panagiotis

    We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes. Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.

  10. Using Field Data and GIS-Derived Variables to Model Occurrence of Williamson's Sapsucker Nesting Habitat at Multiple Spatial Scales.

    PubMed

    Drever, Mark C; Gyug, Les W; Nielsen, Jennifer; Stuart-Smith, A Kari; Ohanjanian, I Penny; Martin, Kathy

    2015-01-01

    Williamson's sapsucker (Sphyrapicus thyroideus) is a migratory woodpecker that breeds in mixed coniferous forests in western North America. In Canada, the range of this woodpecker is restricted to three small populations in southern British Columbia, precipitating a national listing as 'Endangered' in 2005, and the need to characterize critical habitat for its survival and recovery. We compared habitat attributes between Williamson's sapsucker nest territories and random points without nests or detections of this sapsucker as part of a resource selection analysis to identify the habitat features that best explain the probability of nest occurrence in two separate geographic regions in British Columbia. We compared the relative explanatory power of generalized linear models based on field-derived and Geographic Information System (GIS) data within both a 225 m and 800 m radius of a nest or random point. The model based on field-derived variables explained the most variation in nest occurrence in the Okanagan-East Kootenay Region, whereas nest occurrence was best explained by GIS information at the 800 m scale in the Western Region. Probability of nest occurrence was strongly tied to densities of potential nest trees, which included open forests with very large (diameter at breast height, DBH, ≥57.5 cm) western larch (Larix occidentalis) trees in the Okanagan-East Kootenay Region, and very large ponderosa pine (Pinus ponderosa) and large (DBH 17.5-57.5 cm) trembling aspen (Populus tremuloides) trees in the Western Region. Our results have the potential to guide identification and protection of critical habitat as required by the Species at Risk Act in Canada, and to better manage Williamson's sapsucker habitat overall in North America. In particular, management should focus on the maintenance and recruitment of very large western larch and ponderosa pine trees.

  11. Generation of Aptamers from A Primer-Free Randomized ssDNA Library Using Magnetic-Assisted Rapid Aptamer Selection

    NASA Astrophysics Data System (ADS)

    Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih

    2017-04-01

    Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra-sequences may cause non-specific bindings, which potentially interfere with good binding for random sequences. The Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers. No repeat selection cycle is required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive proteins (CRP) from a randomized ssDNA library containing no fixed sequences at 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced and binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that a primer-free aptamer has high specificity toward targets. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.

  12. Quantum interference experiments with large molecules

    NASA Astrophysics Data System (ADS)

    Nairz, Olaf; Arndt, Markus; Zeilinger, Anton

    2003-04-01

    Wave-particle duality is frequently the first topic students encounter in elementary quantum physics. Although this phenomenon has been demonstrated with photons, electrons, neutrons, and atoms, the dual quantum character of the famous double-slit experiment can be best explained with the largest and most classical objects, which are currently the fullerene molecules. The soccer-ball-shaped carbon cages C60 are large, massive, and appealing objects for which it is clear that they must behave like particles under ordinary circumstances. We present the results of a multislit diffraction experiment with such objects to demonstrate their wave nature. The experiment serves as the basis for a discussion of several quantum concepts such as coherence, randomness, complementarity, and wave-particle duality. In particular, the effect of longitudinal (spectral) coherence can be demonstrated by a direct comparison of interferograms obtained with a thermal beam and a velocity selected beam in close analogy to the usual two-slit experiments using light.

  13. Noteworthy Articles in 2015 for the Cardiothoracic Anesthesiologist.

    PubMed

    Varelmann, Dirk J; Muehlschlegel, J Daniel

    2016-03-01

    Large multicenter, randomized controlled trials published in reputable journals had a large impact on the world of cardiothoracic anesthesia in 2015. We as cardiac anesthesiologists pride ourselves on being experts in applied physiology, physics, ultrasonography, and pharmacology/pharmacotherapy. The selected studies added to our knowledge in the fields of echocardiography, pharmacology, molecular biology, and genetics. Outcome studies shine a light on important topics that are relevant to all cardiac anesthesiologists: does surgical atrial fibrillation ablation during mitral valve surgery reduce the recurrence of atrial fibrillation at 1 year after surgery? Does remote ischemic preconditioning live up to its promise to reduce postoperative major cardiac and cerebral events? Although we still do not have the answer to all the questions, the year 2015 has been a great step toward the goal of understanding molecular mechanisms of ischemic myocardial injury and toward providing evidence-based medicine for improving patient outcome. © The Author(s) 2016.

  14. The two-point correlation function for groups of galaxies in the Center for Astrophysics redshift survey

    NASA Technical Reports Server (NTRS)

    Ramella, Massimo; Geller, Margaret J.; Huchra, John P.

    1990-01-01

    The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.

  15. Changing friend selection in middle school: A social network analysis of a randomized intervention study designed to prevent adolescent problem behavior

    PubMed Central

    DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J.

    2015-01-01

    Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends five years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum—one level of the Family Check-up model—on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n=500) was randomly assigned to the intervention and the other half (n=498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within School 1, but not within Schools 2 or 3. The effects of friend selection in School 1 translated into reductions in observed deviancy training five years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance five years later. PMID:26377235

  16. Nitrates and bone turnover (NABT) - trial to select the best nitrate preparation: study protocol for a randomized controlled trial.

    PubMed

    Bucur, Roxana C; Reid, Lauren S; Hamilton, Celeste J; Cummings, Steven R; Jamal, Sophie A

    2013-09-08

    Organic nitrates uncouple bone turnover, improve bone mineral density, and improve trabecular and cortical components of bone. These changes in turnover, strength and geometry may translate into an important reduction in fractures. However, before proceeding with a large fracture trial, there is a need to identify the nitrate formulation that has both the greatest efficacy (with regards to bone turnover markers) and gives the fewest headaches. Ascertaining which nitrate formulation this may be is the purpose of the current study. This will be an open-label randomized, controlled trial conducted at Women's College Hospital comparing five formulations of nitrates for their effects on bone turnover markers and headache. We will recruit postmenopausal women age 50 years or older with no contraindications to nitroglycerin. Our trial will consist of a run-in phase and a treatment phase. We will enroll 420 women in the run-in phase, each to receive all of the 5 potential treatments in random order for 2 days, each with a 2-day washout period between treatments. Those who tolerate all formulations will enter the 12-week treatment phase and be randomly assigned to one of five groups: 0.3 mg sublingual nitroglycerin tablet, 0.6 mg of the sublingual tablet, a 20 mg tablet of isosorbide mononitrate, a 160 mg nitroglycerin transdermal patch (used for 8 h), and 15 mg of nitroglycerin ointment as used in a previous trial by our group. We will continue enrolment until we have randomized 210 women or 35 women per group. Concentrations of bone formation markers (bone-specific alkaline phosphatase and procollagen type I N-terminal propeptide) and bone resorption markers (C-telopeptides of collagen crosslinks and N-terminal crosslinks of collagen) will be measured in samples taken at study entry (the start of the run-in phase) and at 12 weeks. Subjects will record the frequency and severity of headaches daily during the run-in phase and then monthly after that. We will use the 'multiple comparisons with the best' approach for data analyses, as this strategy allows practical considerations of ease of use and tolerability to guide selection of the preparation for future studies. Data from this protocol will be used to develop a randomized, controlled trial of nitrates to prevent osteoporotic fractures. ClinicalTrials.gov Identifier: NCT01387672. Controlled-Trials.com: ISRCTN08860742.

  17. A management-oriented classification of pinyon-juniper woodlands of the Great Basin

    Treesearch

    Neil E. West; Robin J. Tausch; Paul T. Tueller

    1998-01-01

    A hierarchical framework for the classification of Great Basin pinyon-juniper woodlands was based on a systematic sample of 426 stands from a random selection of 66 of the 110 mountain ranges in the region. That is, mountain ranges were randomly selected, but stands were systematically located on mountain ranges. The National Hierarchical Framework of Ecological Units...

  18. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  19. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
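    Read as an algorithm, the grid-based rule amounts to drawing random (row, column) cells over the gridded surface; a minimal sketch with arbitrary grid size and sample count:

```python
import random

def select_cells(n_rows, n_cols, n_samples, seed=None):
    """Pick distinct grid cells by random number generation."""
    rng = random.Random(seed)
    cells = set()
    while len(cells) < n_samples:
        cells.add((rng.randrange(n_rows), rng.randrange(n_cols)))
    return sorted(cells)

print(select_cells(n_rows=10, n_cols=10, n_samples=5, seed=7))
```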

  20. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  1. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  2. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  3. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  4. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  5. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  6. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  7. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...

  8. Attitude and Motivation as Predictors of Academic Achievement of Students in Clothing and Textiles

    ERIC Educational Resources Information Center

    Uwameiye, B. E.; Osho, L. E.

    2011-01-01

    This study investigated attitude and motivation as predictors of academic achievement of students in clothing and textiles. Three colleges of education in Edo and Delta States were randomly selected for use in this study. From each school, 40 students were selected from Year III using a simple random sampling technique, yielding a total of 240 students. The…

  9. A morphologic analysis of 'naked' islets of Langerhans in lobular atrophy of the pancreas.

    PubMed

    Suda, K; Tsukahara, M; Miyake, T; Hirai, S

    1994-08-01

    The 'naked' islets of Langerhans (NIL) in randomly selected autopsy cases and in cases of chronic alcoholic pancreatitis, cystic fibrosis, and pancreatic carcinoma were studied histopathologically. The NIL were found in 55 of 164 randomly selected cases, with age-related frequency, in 21 of 30 cases of chronic alcoholic pancreatitis, in 2 of 2 cases of cystic fibrosis, and in 25 of 32 cases of pancreatic carcinoma. The NIL were frequently accompanied by ductal alterations: epithelial metaplasia and hyperplasia in randomly selected cases, protein plugs in chronic alcoholic pancreatitis, mucus plugs in cystic fibrosis, and obliterated ducts in pancreatic carcinoma. The NIL in randomly selected cases may have been formed by ductal alterations that caused stenosis of the lumen, those in chronic alcoholic pancreatitis and cystic fibrosis were the result of protein or mucus plugging, and those in pancreatic carcinoma were a result of neoplastic involvement of the distal pancreatic duct. Therefore, the common factor in the development of NIL is thought to be obstruction of the pancreatic duct system, and in cases of NIL that have a multilobular distribution and interinsular fibrosis, a diagnosis of chronic pancreatitis can usually be made.

  10. Freak waves in random oceanic sea states.

    PubMed

    Onorato, M; Osborne, A R; Serio, M; Bertone, S

    2001-06-18

    Freak waves are very large, rare events in a random ocean wave train. Here we study their generation in a random sea state characterized by the Joint North Sea Wave Project spectrum. We assume, to cubic order in nonlinearity, that the wave dynamics are governed by the nonlinear Schrödinger (NLS) equation. We show from extensive numerical simulations of the NLS equation how freak waves in a random sea state are more likely to occur for large values of the Phillips parameter alpha and the enhancement coefficient gamma. Comparison with linear simulations is also reported.
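    For reference, a standard deep-water form of the one-dimensional NLS equation for the complex wave envelope A(x, t) is shown below; sign and scaling conventions vary between authors (k_0 and ω_0 are the carrier wavenumber and frequency, and c_g = ω_0/2k_0 is the group velocity).

```latex
% Deep-water one-dimensional NLS for the envelope A(x, t); conventions vary.
i\left(\frac{\partial A}{\partial t} + c_g\,\frac{\partial A}{\partial x}\right)
  - \frac{\omega_0}{8k_0^{2}}\,\frac{\partial^{2}A}{\partial x^{2}}
  - \frac{\omega_0 k_0^{2}}{2}\,\lvert A\rvert^{2}A = 0
```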

  11. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    NASA Astrophysics Data System (ADS)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  12. Structure/permeability relationships of silicon-containing polyimides

    NASA Technical Reports Server (NTRS)

    Stern, S. A.; Vaidyanathan, R.; Pratt, J. R.

    1989-01-01

    The permeability to H2, O2, N2, CO2 and CH4 of three silicone-polyimide random copolymers and two polyimides containing silicon atoms in their backbone chains was determined at 35.0 °C and at pressures up to about 120 psig (approximately 8.2 atm). The copolymers contained different amounts of BPADA-m-PDA and amine-terminated poly(dimethyl siloxane) and also had different numbers of siloxane linkages in their silicone component. The polyimides containing silicon atoms (silicon-modified polyimides) were SiDA-4,4'-ODA and SiDA-p-PDA. The gas permeability and selectivity of the copolymers are more similar to those of their silicone component than of the polyimide component. By contrast, the permeability and selectivity of the silicon-modified polyimides are more similar to those of their parent polyimides, PMDA-4,4'-ODA and SiDA-p-PDA. The substitution of SiDA for the PMDA moiety in a polyimide appears to result in a significant increase in gas permeability, without a correspondingly large decrease in selectivity. The potential usefulness of the above polymers and copolymers as gas separation membranes is discussed.

  13. Is Mutation Random or Targeted?: No Evidence for Hypermutability in Snail Toxin Genes.

    PubMed

    Roy, Scott W

    2016-10-01

    Ever since Luria and Delbruck, the notion that mutation is random with respect to fitness has been foundational to modern biology. However, various studies have claimed striking exceptions to this rule. One influential case involves toxin-encoding genes in snails of the genus Conus, termed conotoxins, a large gene family that undergoes rapid diversification of their protein-coding sequences by positive selection. Previous reconstructions of the sequence evolution of conotoxin genes claimed striking patterns: (1) elevated synonymous change, interpreted as being due to targeted "hypermutation" in this region; (2) elevated transversion-to-transition ratios, interpreted as reflective of the particular mechanism of hypermutation; and (3) much lower rates of synonymous change in the codons encoding several highly conserved cysteine residues, interpreted as strong position-specific codon bias. This work has spawned a variety of studies on the potential mechanisms of hypermutation and on causes for cysteine codon bias, and has inspired hypermutation hypotheses for various other fast-evolving genes. Here, I show that all three findings are likely to be artifacts of statistical reconstruction. First, by simulating nonsynonymous change I show that high rates of dN can lead to overestimation of dS. Second, I show that there is no evidence for any of these three patterns in comparisons of closely related conotoxin sequences, suggesting that the reported findings are due to breakdown of statistical methods at high levels of sequence divergence. The current findings suggest that mutation and codon bias in conotoxin genes may not be atypical, and that random mutation and selection can explain the evolution of even these exceptional loci. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Nonrandom Distribution of Virulences Within Two Field Collections of Uromyces appendiculatus.

    PubMed

    Groth, J V; Ozmon, E A

    2002-07-01

    Two collections of urediniospores of Uromyces appendiculatus, each from a different commercial bean field, were characterized for associations of virulence among individuals within each collection. Four bean (Phaseolus vulgaris) lines with distinct, race-specific resistance to which virulence in each population was polymorphic were used to obtain measures of all six possible pairwise virulence associations for each collection. We inoculated one of the lines and collected urediniospores only from the segment of the population that was virulent on that line. This segment, when compared with nonselected collections from susceptible Pinto 111, gave a direct measure of degree of association as the change in frequency of virulence observed. Plants of the second bean line were inoculated in separate sets with both selected and unselected collections. Frequencies of virulence were estimated from the numbers of susceptible-type and resistant-type infections. Reciprocals of each pairing also were made. For collection P21, all virulences were significantly associated, either positively or negatively, except one pair (in one direction of selection only); whereas, for collection M5, all virulences were significantly associated. Virulence association in P21 was shown to be the result of predominance of phenotypes with certain combinations of virulence by inoculation of the four bean lines with 10 randomly chosen single-uredinial individuals. In support of this, a large random-mated F1 population derived from each collection showed much less virulence association, with the majority of pairs of virulences showing nonsignificant changes in virulence frequency after passage through the first line. Random mating also significantly changed virulence frequency from that of the original population in all instances. Changes were in both directions, suggesting either that virulences were not all recessive, or that heterozygote frequency was sometimes above and sometimes below the Hardy-Weinberg expectation in the field populations.

  15. Genomic Mapping of Direct and Correlated Responses to Long-Term Selection for Rapid Growth Rate in Mice

    PubMed Central

    Allan, Mark F.; Eisen, Eugene J.; Pomp, Daniel

    2005-01-01

    Understanding the genetic architecture of traits such as growth, body composition, and energy balance has become a primary focus for biomedical and agricultural research. The objective of this study was to map QTL in a large F2 (n = 1181) population resulting from an intercross between the M16 and ICR lines of mice. The M16 line, developed by long-term selection for 3- to 6-week weight gain, is larger, heavier, fatter, hyperphagic, and diabetic relative to its randomly selected control line of ICR origin. The F2 population was phenotyped for growth and energy intake at weekly intervals from 4 to 8 weeks of age and for body composition and plasma levels of insulin, leptin, TNFα, IL6, and glucose at 8 weeks and was genotyped for 80 microsatellite markers. Since the F2 was a cross between a selection line and its unselected control, the QTL identified likely represent genes that contributed to direct and correlated responses to long-term selection for rapid growth rate. Across all traits measured, 95 QTL were identified, likely representing 19 unique regions on 13 chromosomes. Four chromosomes (2, 6, 11, and 17) harbored loci contributing disproportionately to selection response. Several QTL demonstrating differential regulation of regional adipose deposition and age-dependent regulation of growth and energy consumption were identified. PMID:15944354

  16. Target selection biases from recent experience transfer across effectors.

    PubMed

    Moher, Jeff; Song, Joo-Hyun

    2016-02-01

    Target selection is often biased by an observer's recent experiences. However, not much is known about whether these selection biases influence behavior across different effectors. For example, does looking at a red object make it easier to subsequently reach towards another red object? In the current study, we asked observers to find the uniquely colored target object on each trial. Randomly intermixed pre-trial cues indicated the mode of action: either an eye movement or a visually guided reach movement to the target. In Experiment 1, we found that priming of popout, reflected in faster responses following repetition of the target color on consecutive trials, occurred regardless of whether the effector was repeated from the previous trial or not. In Experiment 2, we examined whether an inhibitory selection bias away from a feature could transfer across effectors. While priming of popout reflects both enhancement of the repeated target features and suppression of the repeated distractor features, the distractor previewing effect isolates a purely inhibitory component of target selection in which a previewed color is presented in a homogeneous display and subsequently inhibited. Much like priming of popout, intertrial suppression biases in the distractor previewing effect transferred across effectors. Together, these results suggest that biases for target selection driven by recent trial history transfer across effectors. This indicates that representations in memory that bias attention towards or away from specific features are largely independent from their associated actions.

  17. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
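
    As a concrete, hedged illustration of the decomposition step: a proper orthogonal decomposition basis can be obtained from the SVD of a snapshot matrix of responses, retaining the modes that capture a chosen fraction of signal energy. The sketch below is a generic POD reduction under that assumption, not the paper's solver or its smooth orthogonal decomposition variants.

      import numpy as np

      def pod_basis(snapshots, energy=0.999):
          # snapshots: (n_dof, n_samples) array of response time histories
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          frac = np.cumsum(s**2) / np.sum(s**2)       # cumulative modal energy
          r = int(np.searchsorted(frac, energy)) + 1  # smallest basis meeting the target
          return U[:, :r]                             # reduced modal basis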

  18. An investigation into the probabilistic combination of quasi-static and random accelerations

    NASA Technical Reports Server (NTRS)

    Schock, R. W.; Tuell, L. P.

    1984-01-01

    The development of design load factors for aerospace and aircraft components and experiment support structures, which are subject to simultaneous vehicle dynamic vibration (quasi-static) and acoustically generated random vibration, requires the selection of a combination methodology. Typically, the procedure is to define the quasi-static and the randomly generated responses separately, and then arithmetically add or root-sum-square them to obtain combined accelerations. Since the combination of a probabilistic and a deterministic function yields a probabilistic function, a viable alternate approach would be to determine the characteristics of the combined acceleration probability density function and select an appropriate percentile level for the combined acceleration. The following paper develops this mechanism and provides graphical data for selecting combined accelerations at the most popular percentile levels.
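
    If the random component is taken as zero-mean Gaussian, the combined acceleration at a given percentile is simply the quasi-static value plus the corresponding normal quantile of the random part. The sketch below illustrates that special case; the paper's development is more general.

      from scipy.stats import norm

      def combined_acceleration(a_qs, sigma_rand, percentile=99.87):
          # a_qs: deterministic quasi-static acceleration (g)
          # sigma_rand: RMS of the zero-mean Gaussian random component (g)
          z = norm.ppf(percentile / 100.0)   # standard normal quantile
          return a_qs + z * sigma_rand

      # example: 5 g quasi-static plus 3 g RMS random at the ~3-sigma percentile
      print(combined_acceleration(5.0, 3.0))   # about 14 g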

  19. On the apparent insignificance of the randomness of flexible joints on large space truss dynamics

    NASA Technical Reports Server (NTRS)

    Koch, R. M.; Klosner, J. M.

    1993-01-01

    Deployable periodic large space structures have been shown to exhibit high dynamic sensitivity to period-breaking imperfections and uncertainties. These can be brought on by manufacturing or assembly errors, structural imperfections, as well as nonlinear and/or nonconservative joint behavior. In addition, the necessity of precise pointing and position capability can require the consideration of these usually negligible and unknown parametric uncertainties and their effect on the overall dynamic response of large space structures. This work describes the use of a new design approach for the global dynamic solution of beam-like periodic space structures possessing parametric uncertainties. Specifically, the effect of random flexible joints on the free vibrations of simply-supported periodic large space trusses is considered. The formulation is a hybrid approach in terms of an extended Timoshenko beam continuum model, Monte Carlo simulation scheme, and first-order perturbation methods. The mean and mean-square response statistics for a variety of free random vibration problems are derived for various input random joint stiffness probability distributions. The results of this effort show that, although joint flexibility has a substantial effect on the modal dynamic response of periodic large space trusses, the effect of any reasonable uncertainty or randomness associated with these joint flexibilities is insignificant.

  20. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence.

    PubMed

    Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia

    2017-01-01

    Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the four models. Commonly used model performance metrics (area under the ROC curve (AUC) and the true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found that Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and is known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid assessments and decisions for efficient conservation.
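
    A minimal sketch of the Random Forest step, using hypothetical presence/background data and scikit-learn rather than the study's actual predictors and software, is:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # hypothetical data: rows = locations, columns = environmental predictors;
      # y = 1 for presence records, 0 for random background points
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      prob = rf.predict_proba(X_te)[:, 1]   # relative occurrence probability per location
      print("AUC:", roc_auc_score(y_te, prob))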

  2. Sequence Based Prediction of DNA-Binding Proteins Based on Hybrid Feature Selection Using Random Forest and Gaussian Naïve Bayes

    PubMed Central

    Lou, Wangchao; Wang, Xiaoqing; Chen, Fan; Chen, Yixiao; Jiang, Bo; Zhang, Hua

    2014-01-01

    Developing an efficient method for determination of DNA-binding proteins, due to their vital roles in gene regulation, is highly desirable, since it would be invaluable for advancing our understanding of protein functions. In this study, we proposed a new method for the prediction of DNA-binding proteins, performing feature ranking using random forest and wrapper-based feature selection using a forward best-first search strategy. The features comprise information from primary sequence, predicted secondary structure, predicted relative solvent accessibility, and position-specific scoring matrix. The proposed method, called DBPPred, used Gaussian naïve Bayes as the underlying classifier since it outperformed five other classifiers, including decision tree, logistic regression, k-nearest neighbor, support vector machine with polynomial kernel, and support vector machine with radial basis function. As a result, the proposed DBPPred yields the highest average accuracy of 0.791 and average MCC of 0.583 according to the five-fold cross validation with ten runs on the training benchmark dataset PDB594. Subsequently, blind tests on the independent dataset PDB186 were performed by the proposed model trained on the entire PDB594 dataset and by five other existing methods (including iDNA-Prot, DNA-Prot, DNAbinder, DNABIND and DBD-Threader), with the proposed DBPPred yielding the highest accuracy of 0.769, MCC of 0.538, and AUC of 0.790. Independent tests performed by the proposed DBPPred on a large non-DNA-binding protein dataset and two RNA-binding protein datasets also showed improved or comparable quality when compared with the relevant prediction methods. Moreover, we observed that the mean values of the majority of the selected features differ significantly between the DNA-binding and the non-DNA-binding proteins. All of the experimental results indicate that the proposed DBPPred can be an alternative predictor for large-scale determination of DNA-binding proteins. PMID:24475169
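
    The two-stage selection can be sketched as follows: rank features by random forest importance, then add them greedily while cross-validated Gaussian naïve Bayes accuracy improves. This is a simplification of the authors' forward best-first search, with placeholder settings rather than the PDB594 pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      def hybrid_select(X, y, cv=5):
          # stage 1: rank features by random forest importance
          rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
          order = np.argsort(rf.feature_importances_)[::-1]
          # stage 2: greedy forward search with a Gaussian naive Bayes wrapper
          chosen, best = [], 0.0
          for f in order:
              acc = cross_val_score(GaussianNB(), X[:, chosen + [f]], y, cv=cv).mean()
              if acc > best:          # keep the feature only if CV accuracy improves
                  chosen.append(f)
                  best = acc
          return chosen, best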

  3. Citation patterns of online and print journals in the digital age

    PubMed Central

    De Groote, Sandra L.

    2008-01-01

    Purpose: The research assesses the impact of online journals on citation patterns by examining whether researchers were more likely to limit the resources they cited to those journals available online rather than those only in print. Setting: Publications from a large urban university with a medical college at an urban location and at a smaller regional location were examined. The number of online journals available to authors on either campus was the same. The number of print journals available on the large campus was much greater than the print journals available at the small campus. Methodology: Searches by author affiliation from 1996 to 2005 were performed in the Web of Science to find all articles written by affiliated members in the college of medicine at the selected institution. Cited references from randomly selected articles were recorded, and the cited journals were coded into five categories based on their availability at the study institution: print only, print and online, online only, not owned, and dropped. Results were analyzed using SPSS. The age of articles cited for selected years as well as for 2006 and 2007 was also examined. Results: The number of journals cited each year continued to increase. On the large urban campus, researchers were not more likely to cite journals available online or less likely to cite journals only in print. At the regional location, at which the number of print-only journals was minimal, use of print-only journals significantly decreased. Conclusion/discussion: The citation of print-only journals by researchers with access to a library with a large print and electronic collection appeared to continue, despite the availability of potential alternatives in the online collection. Journals available in electronic format were cited more frequently in publications from the campus whose library had a small print collection, and the citation of journals available in both print and electronic formats generally increased over the years studied. PMID:18974814

  4. Designing a national soil erosion monitoring network for England and Wales

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Rawlins, Barry; Anderson, Karen; Evans, Martin; Farrow, Luke; Glendell, Miriam; James, Mike; Rickson, Jane; Quine, Timothy; Quinton, John; Brazier, Richard

    2014-05-01

    Although soil erosion is recognised as a significant threat to sustainable land use and may be a priority for action in any forthcoming EU Soil Framework Directive, those responsible for setting national policy with respect to erosion are constrained by a lack of robust, representative data at large spatial scales. This reflects the process-orientated nature of much soil erosion research. Recognising this limitation, the UK Department for Environment, Food and Rural Affairs (Defra) established a project to pilot a cost-effective framework for monitoring of soil erosion in England and Wales (E&W). The pilot will compare different soil erosion monitoring methods at a site scale and provide statistical information for the final design of the full national monitoring network that will: (i) provide unbiased estimates of the spatial mean of soil erosion rate across E&W (tonnes ha-1 yr-1) for each of three land-use classes (arable and horticultural; grassland; upland and semi-natural habitats); and (ii) quantify the uncertainty of these estimates with confidence intervals. Probability (design-based) sampling provides the most efficient unbiased estimates of spatial means. In this study, a 16 hectare area (a square of 400 x 400 m) positioned at the centre of a 1-km grid cell, selected at random from mapped land use across E&W, provided the sampling support for measurement of erosion rates, with at least 94% of the support area corresponding to the target land use classes. Very small or zero erosion rates likely to be encountered at many sites reduce the sampling efficiency and make it difficult to compare different methods of soil erosion monitoring. Therefore, to increase the proportion of samples with larger erosion rates without biasing our estimates, we increased the inclusion probability density in areas where the erosion rate is likely to be large by using stratified random sampling. First, each sampling domain (land-use class in E&W) was divided into strata, e.g. two sub-domains within which, respectively, small or no erosion rates and moderate or larger erosion rates are expected. Each stratum was then sampled independently and at random. The sample density need not be equal in the two strata, but is known and is accounted for in the estimation of the mean and its standard error. To divide the domains into strata we used information on slope angle, previous interpretation of erosion susceptibility of the soil associations that correspond to the soil map of E&W at 1:250 000 (Soil Survey of England and Wales, 1983), and visual interpretation of evidence of erosion from aerial photography. While each domain could be stratified on the basis of the first two criteria, air photo interpretation across the whole country was not feasible. For this reason we used a two-phase random sampling for stratification (TPRS) design (de Gruijter et al., 2006). First, we formed an initial random sample of 1-km grid cells from the target domain. Second, each cell was allocated to a stratum on the basis of the three criteria. A subset of the selected cells from each stratum was then selected for field survey at random, with a specified sampling density for each stratum so as to increase the proportion of cells where moderate or larger erosion rates were expected.
    Once measurements of erosion have been made, an estimate of the spatial mean of the erosion rate over the target domain, its standard error and associated uncertainty can be calculated by an expression which accounts for the estimated proportions of the two strata within the initial random sample. References: de Gruijter, J.J., Brus, D.J., Bierkens, M.F.P. & Knotters, M. 2006. Sampling for Natural Resource Monitoring. Springer, Berlin. Soil Survey of England and Wales. 1983. National Soil Map NATMAP Vector 1:250,000. National Soil Research Institute, Cranfield University.
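
    Under stratified random sampling, the estimator of the domain mean and its standard error take a standard form. The sketch below assumes known stratum proportions; in the TPRS design these proportions are themselves estimated from the first-phase sample, which adds a variance term omitted here.

      import numpy as np

      def stratified_mean_se(samples, weights):
          # samples: {stratum: array of measured erosion rates (t ha-1 yr-1)}
          # weights: {stratum: proportion W_h of the domain in that stratum}
          mean = sum(weights[h] * np.mean(x) for h, x in samples.items())
          var = sum(weights[h]**2 * np.var(x, ddof=1) / len(x)
                    for h, x in samples.items())
          return mean, np.sqrt(var)

      # example: a sparsely sampled low-erosion stratum and a densely sampled
      # high-erosion stratum (illustrative numbers only)
      print(stratified_mean_se(
          {"low": np.array([0.0, 0.1, 0.0, 0.2]), "high": np.array([1.4, 2.9, 0.8])},
          {"low": 0.8, "high": 0.2}))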

  5. Statistical auditing and randomness test of lotto k/N-type games

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.

    2008-11-01

    One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
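
    For indicator variables X_i marking whether number i is among the k drawn, the hypergeometric model gives E[X_i] = k/N and Cov(X_i, X_j) = -k(N-k)/(N^2(N-1)) for i != j. A simple frequency audit against the expected counts can be sketched as below; because numbers within one draw are negatively correlated, the chi-square reference distribution is only approximate.

      import numpy as np
      from scipy.stats import chi2

      def audit_frequencies(draws, N, k):
          # draws: (m, k) array of historical winning numbers in 1..N
          m = len(draws)
          counts = np.bincount(np.asarray(draws).ravel(), minlength=N + 1)[1:]
          expected = m * k / N                   # from E[X_i] = k/N per draw
          stat = ((counts - expected) ** 2 / expected).sum()
          return stat, chi2.sf(stat, df=N - 1)   # approximate p-value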

  6. Random variability explains apparent global clustering of large earthquakes

    USGS Publications Warehouse

    Michael, A.J.

    2011-01-01

    The occurrence of 5 Mw ≥ 8.5 earthquakes since 2004 has created a debate over whether or not we are in a global cluster of large earthquakes, temporarily raising risks above long-term levels. I use three classes of statistical tests to determine if the record of M ≥ 7 earthquakes since 1900 can reject a null hypothesis of independent random events with a constant rate plus localized aftershock sequences. The data cannot reject this null hypothesis. Thus, the temporal distribution of large global earthquakes is well-described by a random process, plus localized aftershocks, and apparent clustering is due to random variability. Therefore the risk of future events has not increased, except within ongoing aftershock sequences, and should be estimated from the longest possible record of events.
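
    One version of such a test can be sketched as a Monte Carlo comparison: conditioned on the number of events, a constant-rate process places event times i.i.d. uniform over the catalog span, so the observed maximum count in any fixed-length window can be ranked against simulated catalogs. The sketch below omits the aftershock handling used in the study.

      import numpy as np
      rng = np.random.default_rng(0)

      def max_in_window(times, w):
          # largest number of events in any window of length w (two-pointer scan)
          t = np.sort(times)
          best = j = 0
          for i in range(len(t)):
              while t[i] - t[j] > w:
                  j += 1
              best = max(best, i - j + 1)
          return best

      def cluster_p_value(times, span, window=10.0, n_sim=10_000):
          obs = max_in_window(times, window)
          sims = [max_in_window(rng.uniform(0.0, span, len(times)), window)
                  for _ in range(n_sim)]
          return float(np.mean([s >= obs for s in sims]))   # one-sided p-value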

  7. Evaluation of a School-Based Depression Prevention Program among Adolescents from Low-Income Areas: A Randomized Controlled Effectiveness Trial

    PubMed Central

    Kindt, Karlijn C. M.; Kleinjan, Marloes; Janssens, Jan M. A. M.; Scholte, Ron H. J.

    2014-01-01

    A randomized controlled trial was conducted among a potential high-risk group of 1,343 adolescents from low-income areas in The Netherlands to test the effectiveness of the depression prevention program Op Volle Kracht (OVK) as provided by teachers in a school setting. The results showed no main effect of the program on depressive symptoms at one-year follow-up. A moderation effect was found for parental psychopathology; adolescents who had parents with psychopathology and received the OVK program had fewer depressive symptoms than adolescents with parents with psychopathology in the control condition. No moderating effects on depressive symptoms were found for gender, ethnic background, or level of baseline depressive symptoms. An iatrogenic effect of the intervention was found on the secondary outcome of clinical depressive symptoms. Given the low level of reported depressive symptoms at baseline, our sample might not meet the characteristics of a high-risk selective group for depressive symptoms; therefore, no firm conclusions can be drawn about the selective potential of the OVK depression prevention program. In its current form, the OVK program should not be implemented on a large scale in the natural setting for non-high-risk adolescents. Future research should focus on high-risk participants, such as children of parents with psychopathology. PMID:24837666

  8. Exploring the parameter space of the coarse-grained UNRES force field by random search: selecting a transferable medium-resolution force field.

    PubMed

    He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A

    2009-10-01

    We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field.
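
    The screening loop itself is straightforward. The sketch below assumes a uniform sampling box for the weights and a user-supplied evaluate function standing in for the replica-exchange folding runs; both the range and the threshold are hypothetical.

      import numpy as np
      rng = np.random.default_rng(42)

      def screen_weight_sets(n_sets, n_terms, evaluate, threshold):
          # evaluate: returns a folding-quality score for one weight vector,
          # e.g. mean RMSD of predicted structures to the native folds (hypothetical)
          survivors = []
          for _ in range(n_sets):
              w = rng.uniform(0.0, 2.0, size=n_terms)   # assumed weight range
              if evaluate(w) < threshold:
                  survivors.append(w)
          return survivors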

  9. Ceiling effect of online user interests for the movies

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Zhang, Yi-Lu; Hu, Zhao-Long; Song, Wen-Jun; Hou, Lei; Guo, Qiang; Liu, Jian-Guo

    2014-05-01

    Online users' collective interests play an important role in analyzing online social networks and in personalized recommendation. In this paper, we introduce the information entropy to measure the diversity of user interests. We empirically analyze the information entropy of the objects selected by users with the same degree in both the MovieLens and Netflix datasets. The results show that as the user degree increases, the entropy first rises from its lowest value to its highest value and then begins to fall, which indicates that the interests of small-degree and large-degree users are more centralized, while the interests of normal users are more diverse. Furthermore, a null model is proposed for comparison with the empirical results. In the null model, we keep the number of users and objects as well as the user degrees unchanged, but the selection behaviors are totally random in both datasets. Results show that the diversity of the majority of users in the real datasets is higher than in the random case, with the exception of a fraction of small-degree users. That may be because new users simply like popular objects, while, as they gain experience, they quickly become users of broad interests. Therefore, small-degree users' interests are much easier to predict than those of other users, which may shed some light on the cold-start problem.
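
    The measurement can be sketched as follows: pool the objects chosen by all users of a given degree, compute the Shannon entropy of the resulting selection distribution, and compare against a degree-preserving null model in which each user selects objects uniformly at random. Dataset handling here is schematic.

      import numpy as np
      from collections import Counter

      def selection_entropy(object_ids):
          # Shannon entropy (bits) of the pooled selections of one degree class
          counts = np.array(list(Counter(object_ids).values()), dtype=float)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      def null_entropy(degrees, n_objects, rng):
          # same user degrees, but each user's selections are uniformly random
          pooled = np.concatenate([rng.choice(n_objects, size=d, replace=False)
                                   for d in degrees])
          return selection_entropy(pooled)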

  10. The Status, Quality, and Expansion of the NIH Full-Length cDNA Project: The Mammalian Gene Collection (MGC)

    PubMed Central

    2004-01-01

    The National Institutes of Health's Mammalian Gene Collection (MGC) project was designed to generate and sequence a publicly accessible cDNA resource containing a complete open reading frame (ORF) for every human and mouse gene. The project initially used a random strategy to select clones from a large number of cDNA libraries from diverse tissues. Candidate clones were chosen based on 5′-EST sequences, and then fully sequenced to high accuracy and analyzed by algorithms developed for this project. Currently, more than 11,000 human and 10,000 mouse genes are represented in MGC by at least one clone with a full ORF. The random selection approach is now reaching a saturation point, and a transition to protocols targeted at the missing transcripts is now required to complete the mouse and human collections. Comparison of the sequence of the MGC clones to reference genome sequences reveals that most cDNA clones are of very high sequence quality, although it is likely that some cDNAs may carry missense variants as a consequence of experimental artifact, such as PCR, cloning, or reverse transcriptase errors. Recently, a rat cDNA component was added to the project, and ongoing frog (Xenopus) and zebrafish (Danio) cDNA projects were expanded to take advantage of the high-throughput MGC pipeline. PMID:15489334

  11. Developing a reliable and valid instrument to assess health-affecting aspects of neighborhoods in Tehran

    PubMed Central

    Ghalichi, Leila; Mohammad, Kazem; Majdzadeh, Reza; Hoseini, Mostafa; Pournik, Omid; Nedjat, Saharnaz

    2012-01-01

    Background: Residence characteristics can affect the health of residents. This paper reports the development of an instrument assessing these aspects of neighborhoods. Materials and Methods: A literature search and focus group discussions with residents were carried out and relevant items were extracted. Five experts reviewed and commented on the items. An observation instrument with 54 items was composed and completed by two independent observers in 20 randomly selected locations. Due to lack of acceptable reliability in some items, the checklist was revised. The new 22-item checklist in four categories (general characteristics, public green area characteristics, access to services and undesirable features) was completed by two independent trained observers in 28 randomly selected locations. Results: The items in the final checklist had kappa statistics ranging from 0.63 to 1, with the exception of the item assessing "presence of beggars, homeless or working/street children", with kappa as low as 0.27 due to variability of their presence at different times. The average kappa statistic was 0.78 for general characteristics, 0.79 for public green area characteristics, 0.84 for access to services, and 0.54 for undesirable features. Conclusion: The neighborhood and health observation instrument seems to have good reliability in the city of Tehran. It can probably be used in other large cities of Iran and similar cities elsewhere. PMID:23626633
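
    The agreement statistic used here is Cohen's kappa, which discounts observed agreement by the agreement expected under chance. A generic computation for two observers' categorical ratings (not the authors' exact software) is:

      from collections import Counter

      def cohens_kappa(ratings_a, ratings_b):
          # ratings_a, ratings_b: equal-length lists of categorical item scores
          n = len(ratings_a)
          p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
          ca, cb = Counter(ratings_a), Counter(ratings_b)
          p_exp = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
          return (p_obs - p_exp) / (1 - p_exp)

      # kappa of 1.0 means perfect agreement; 0.0 means chance-level agreement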

  12. Encounter success of free-ranging marine predator movements across a dynamic prey landscape.

    PubMed

    Sims, David W; Witt, Matthew J; Richardson, Anthony J; Southall, Emily J; Metcalfe, Julian D

    2006-05-22

    Movements of wide-ranging top predators can now be studied effectively using satellite and archival telemetry. However, the motivations underlying movements remain difficult to determine because trajectories are seldom related to key biological gradients, such as changing prey distributions. Here, we use a dynamic prey landscape of zooplankton biomass in the north-east Atlantic Ocean to examine active habitat selection in the plankton-feeding basking shark Cetorhinus maximus. The relative success of shark searches across this landscape was examined by comparing prey biomass encountered by sharks with encounters by random-walk simulations of 'model' sharks. Movements of transmitter-tagged sharks monitored for 964 days (16754 km estimated minimum distance) were concentrated on the European continental shelf in areas characterized by high seasonal productivity and complex prey distributions. We show movements by adult and sub-adult sharks yielded consistently higher prey encounter rates than 90% of random-walk simulations. Behavioural patterns were consistent with basking sharks using search tactics structured across multiple scales to exploit the richest prey areas available in preferred habitats. Simple behavioural rules based on learned responses to previously encountered prey distributions may explain the high performances. This study highlights how dynamic prey landscapes enable active habitat selection in large predators to be investigated from a trophic perspective, an approach that may inform conservation by identifying critical habitat of vulnerable species.
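
    The comparison logic can be sketched with a simple lattice random walk over a prey-biomass grid; the study's simulations are more elaborate, and the grid and step rule below are assumptions.

      import numpy as np
      rng = np.random.default_rng(1)

      def walk_encounter(prey, start, n_steps):
          # prey: 2-D grid of zooplankton biomass; one random king-move per step
          pos = np.array(start)
          total = 0.0
          for _ in range(n_steps):
              pos = np.clip(pos + rng.integers(-1, 2, size=2),
                            0, np.array(prey.shape) - 1)
              total += prey[tuple(pos)]
          return total

      # rank a real track's encountered biomass against many model sharks: the
      # fraction of simulated totals below the observed total gives the
      # "beats X% of random walks" statistic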

  13. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features or suppressing noise in the original features. Moreover, conventional optimization methods often converge slowly when solving the training procedure of CRFs, and degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) that simultaneously select relevant features and suppress noisy ones. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
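
    The paper's OGM is not reproduced here, but the flavor of l1-regularized training with an accelerated first-order method attaining the same O(1/k^2) rate can be sketched with FISTA-style proximal updates over a generic smooth loss:

      import numpy as np

      def soft_threshold(x, t):
          # proximal operator of the l1 norm
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def accelerated_l1_train(grad, w0, L, lam, n_iter=200):
          # grad: gradient of the smooth loss; L: its Lipschitz constant
          # lam: weight of the l1 penalty on the model parameters
          w, z, t = w0.copy(), w0.copy(), 1.0
          for _ in range(n_iter):
              w_next = soft_threshold(z - grad(z) / L, lam / L)
              t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
              z = w_next + ((t - 1.0) / t_next) * (w_next - w)   # momentum step
              w, t = w_next, t_next
          return w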

  14. Analysis of access to hypertensive and diabetic drugs in the Family Health Strategy, State of Pernambuco, Brazil.

    PubMed

    Barreto, Maria Nelly Sobreira de Carvalho; Cesse, Eduarda Ângela Pessoa; Lima, Rodrigo Fonseca; Marinho, Michelly Geórgia da Silva; Specht, Yuri da Silva; de Carvalho, Eduardo Maia Freese; Fontbonne, Annick

    2015-01-01

    To evaluate the access to drugs for hypertension and diabetes, and the direct cost of buying them, among users of the Family Health Strategy (FHS) in the state of Pernambuco, Brazil. Population-based, cross-sectional study of a systematic random sample of 785 patients with hypertension and 823 patients with diabetes mellitus who were registered in 208 randomly selected FHS teams in 35 municipalities of the state of Pernambuco. The selected municipalities were classified into three levels with probability proportional to municipality size (LS, large-sized; MS, medium-sized; SS, small-sized). To verify differences between the cities, we used the χ2 test. Pharmacological treatment was used by 91.2% of patients with hypertension, whereas 85.6% of patients with diabetes mellitus used oral antidiabetic drugs (OADs) and 15.4% used insulin. The FHS team itself provided antihypertensive medications to 69.0% of patients with hypertension, OADs to 75.0% of patients with diabetes mellitus, and insulin treatment to 65.4%. The 36.9% of patients with hypertension and 29.8% of patients with diabetes mellitus who had to buy all or part of their medications reported median monthly costs of R$ 18.30, R$ 14.00, and R$ 27.61 for antihypertensive drugs, OADs, and insulin, respectively. It is necessary to increase efforts to ensure access to these drugs in the primary health care network.

  15. Potential of Using Mobile Phone Data to Assist in Mission Analysis and Area of Operations Planning

    DTIC Science & Technology

    2015-08-01

    …tremendously beneficial, especially since a sizeable portion of the population are nomads, changing location based on season. A proper AO… The selected data provided: (a) User_id: the selected user's random ID; (b) Timestamp: 24 h format YYYY-MM-DD-HH:M0:00 (the second digit of the minutes and all of the seconds are set to zero).

  16. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
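
    The row-selection idea can be sketched as scoring each row of the random projection matrix by the Fisher criterion, computed on a user's genuine samples versus those of other users, and keeping the top-scoring rows. Enrollment data, dimensions, and the two-class framing below are assumptions.

      import numpy as np

      def fisher_score(v, genuine):
          # v: one projection's values over all samples; genuine: boolean mask
          a, b = v[genuine], v[~genuine]
          return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

      def select_rows(R, X, genuine, m):
          # R: (d, n_features) random projection matrix; X: (n_samples, n_features)
          Y = X @ R.T                                    # all d projections
          scores = np.array([fisher_score(Y[:, i], genuine)
                             for i in range(R.shape[0])])
          return R[np.argsort(scores)[::-1][:m]]         # m most discriminative rows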

  17. Topology-selective jamming of fully-connected, code-division random-access networks

    NASA Technical Reports Server (NTRS)

    Polydoros, Andreas; Cheng, Unjeng

    1990-01-01

    The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.

  18. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that incorporates an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem involves both randomness and subjectivity represented by fuzzy numbers, it is formulated as a random fuzzy programming problem and is not well-defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using sensitivity analysis for the fuzziness, an analytical optimal portfolio with a sensitivity factor is obtained.
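
    Stripped of the fuzzy-number machinery, the role of the Sharpe ratio can be illustrated with a crisp stand-in: score candidate portfolios by (E[r_p] - r_f)/sigma_p and keep the best of many random long-only candidates. Inputs below are placeholders, not the paper's formulation.

      import numpy as np
      rng = np.random.default_rng(0)

      def sharpe_ratio(w, mu, cov, rf=0.0):
          # w: portfolio weights; mu: mean returns; cov: return covariance
          return (w @ mu - rf) / np.sqrt(w @ cov @ w)

      def best_random_portfolio(mu, cov, n=10_000):
          best_w, best_s = None, -np.inf
          for _ in range(n):
              w = rng.dirichlet(np.ones(len(mu)))   # random long-only weights
              s = sharpe_ratio(w, mu, cov)
              if s > best_s:
                  best_w, best_s = w, s
          return best_w, best_s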

  19. Unsupervised Feature Selection Based on the Morisita Index for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Golay, Jean; Kanevski, Mikhail

    2017-04-01

    Hyperspectral sensors are capable of acquiring images with hundreds of narrow and contiguous spectral bands. Compared with traditional multispectral imagery, the use of hyperspectral images allows better performance in discriminating between land-cover classes, but it also results in large redundancy and high computational data processing. To alleviate such issues, unsupervised feature selection techniques for redundancy minimization can be implemented. Their goal is to select the smallest subset of features (or bands) in such a way that all the information content of a data set is preserved as much as possible. The present research deals with the application to hyperspectral images of a recently introduced technique of unsupervised feature selection: the Morisita-Based filter for Redundancy Minimization (MBRM). MBRM is based on the (multipoint) Morisita index of clustering and on the Morisita estimator of Intrinsic Dimension (ID). The fundamental idea of the technique is to retain only the bands which contribute to increasing the ID of an image. In this way, redundant bands are disregarded, since they have no impact on the ID. Besides, MBRM has several advantages over benchmark techniques: in addition to its ability to deal with large data sets, it can capture highly nonlinear dependences and its implementation is straightforward in any programming environment. Experimental results on freely available hyperspectral images show the effectiveness of MBRM in remote sensing data processing. Comparisons with benchmark techniques are carried out, and random forests are used to assess the performance of MBRM in reducing the data dimensionality without loss of relevant information.

  20. Limited Effects of a 2-Year School-Based Physical Activity Intervention on Body Composition and Cardiorespiratory Fitness in 7-Year-Old Children

    ERIC Educational Resources Information Center

    Magnusson, Kristjan Thor; Hrafnkelsson, Hannes; Sigurgeirsson, Ingvar; Johannsson, Erlingur; Sveinsson, Thorarinn

    2012-01-01

    The aim of this study was to assess the effects of a 2-year cluster-randomized physical activity and dietary intervention program among 7-year-old (at baseline) elementary school participants on body composition and objectively measured cardiorespiratory fitness. Three pairs of schools were selected and matched, then randomly selected as either an…
