The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
78 FR 57033 - United States Standards for Condition of Food Containers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
... containers during production. Stationary lot sampling is the process of randomly selecting sample units from.... * * * * * Stationary lot sampling. The process of randomly selecting sample units from a lot whose production has been... less than 1/16-inch Stringy seal (excessive plastic threads showing at edge of seal 222 area...
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh does not match the predictions of this simple model very well.
Bridging Emergent Attributes and Darwinian Principles in Teaching Natural Selection
ERIC Educational Resources Information Center
Xu, Dongchen; Chi, Michelene T. H.
2016-01-01
Students often have misconceptions about natural selection as they misuse a direct causal schema to explain the process. Natural selection is in fact an emergent process where random interactions lead to changes in a population. The misconceptions stem from students' lack of emergent schema for natural selection. In order to help students…
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and, more generally, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extracting algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
Goal selection versus process control while learning to use a brain-computer interface
NASA Astrophysics Data System (ADS)
Royer, Audrey S.; Rose, Minn L.; He, Bin
2011-06-01
A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.
Natural Selection as an Emergent Process: Instructional Implications
ERIC Educational Resources Information Center
Cooper, Robert A.
2017-01-01
Student reasoning about cases of natural selection is often plagued by errors that stem from miscategorising selection as a direct, causal process, misunderstanding the role of randomness, and from the intuitive ideas of intentionality, teleology and essentialism. The common thread throughout many of these reasoning errors is a failure to apply…
Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore
2011-12-01
Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation sequence, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology and presents the main randomization schemes applicable to clinical trials in orthodontics.
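The restricted randomization mentioned above is most often implemented as permuted blocks. A minimal sketch in Python (the function name, block size, and arm labels are illustrative, not from the article): within every block each arm appears equally often, so group sizes can never drift apart by more than half a block.

```python
import random

def permuted_block_schedule(n_subjects, arms=("A", "B"), block_size=4, seed=None):
    """Restricted randomization via permuted blocks.

    Each block contains every arm equally often; shuffling the block
    keeps individual allocations unpredictable while bounding imbalance."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    block = list(arms) * (block_size // len(arms))
    schedule = []
    while len(schedule) < n_subjects:
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]
```

Stratified randomization would simply maintain one such schedule per stratum.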
ERIC Educational Resources Information Center
Felce, David; Perry, Jonathan
2004-01-01
Background: The aims were to: (i) explore the association between age and size of setting and staffing per resident; and (ii) report resident and setting characteristics, and indicators of service process and resident activity for a national random sample of staffed housing provision. Methods: Sixty settings were selected randomly from those…
SELECTING SITES FOR COMPARISON WITH CREATED WETLANDS
The paper describes the method used for selecting natural wetlands to compare with created wetlands. The results of the selection process and the advantages and disadvantages of the method are discussed. The random site selection method required extensive field work and may have ...
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
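The Bernoulli-sampling step described in the abstract has a simple computational form: in a coalescent point process the depth separating two consecutive *selected* tips is the maximum of the original node depths between them, which is why the sampled tree is again a CPP. A sketch (function and variable names are illustrative, not the paper's notation):

```python
def bernoulli_sampled_depths(depths, keep):
    """Reduced-tree node depths after sampling the tips of a CPP.

    depths[i] is the node depth separating tip i and tip i+1 (iid in a
    CPP).  keep[i] says whether tip i was retained by the Bernoulli
    sample.  The depth between consecutive retained tips is the maximum
    of the original depths in the gap."""
    n = len(depths) + 1
    assert len(keep) == n
    selected = [i for i in range(n) if keep[i]]
    return [max(depths[a:b]) for a, b in zip(selected, selected[1:])]
```

The paper's k-sample result says a uniform sample of exactly k tips behaves like this construction with a randomized retention probability Y.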
An Overview of Randomization and Minimization Programs for Randomized Clinical Trials
Saghaei, Mahmoud
2011-01-01
Randomization is an essential component of sound clinical trials, which prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A possible serious consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which may invalidate the trial results or necessitate complex and usually unreliable secondary analysis to eradicate the source of imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is completely deterministic: one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the characteristics of the next subject. To eliminate the predictability of pure minimization, it is necessary to include some element of randomness in the minimization algorithm. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
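The minimization-with-randomness idea above can be sketched in a few lines. This is an illustrative simplification (the scoring rule, `p_random`, and all names are assumptions, not the article's algorithm): each arm is scored by how many already-enrolled subjects in that arm share factor levels with the new subject, the lowest-scoring arm wins, and a small random element breaks the determinism.

```python
import random

def minimization_allocate(history, new_factors, arms=("A", "B"),
                          p_random=0.1, rng=None):
    """Assign the next subject by minimization with a random element.

    history: list of (arm, factor_dict) for enrolled subjects.
    With probability p_random the assignment is purely random, which
    removes the predictability of pure minimization."""
    rng = rng or random.Random()
    if rng.random() < p_random:
        return rng.choice(arms)
    # imbalance score: matches of the new subject's factor levels per arm
    scores = {}
    for arm in arms:
        scores[arm] = sum(
            1
            for prev_arm, f in history
            if prev_arm == arm
            for k, v in new_factors.items()
            if f.get(k) == v
        )
    best = min(scores.values())
    return rng.choice([a for a in arms if scores[a] == best])
```

With `p_random=0` the rule is deterministic whenever the scores differ, which is exactly the predictability problem the abstract describes.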
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
Lemasters, John J
2005-01-01
In autophagy, portions of cytoplasm are sequestered into autophagosomes and delivered to lysosomes for degradation. Long assumed to be a random process, increasing evidence suggests that autophagy of mitochondria, peroxisomes, and possibly other organelles is selective. A recent paper (Kissova et al., J. Biol. Chem. 2004;279:39068-39074) shows in yeast that a specific outer membrane protein, Uth1p, is required for efficient mitochondrial autophagy. For this selective autophagy of mitochondria, we propose the term "mitophagy" to emphasize the non-random nature of the process. Mitophagy may play a key role in retarding accumulation of somatic mutations of mtDNA with aging.
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
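The two updating rules contrasted above can be made concrete in a few lines. A minimal sketch for a well-mixed population (function names and the inverse-fitness death bias are my reading of "biased death-birth", stated here as an assumption):

```python
import random

def birth_death_step(pop, fitness, rng):
    """Moran birth-death: a parent is chosen proportional to fitness,
    then a uniformly random individual dies and is replaced."""
    parent = rng.choices(range(len(pop)),
                         weights=[fitness[s] for s in pop])[0]
    pop[rng.randrange(len(pop))] = pop[parent]

def death_birth_step(pop, fitness, rng):
    """Biased death-birth: the dying individual is chosen with
    probability inversely proportional to fitness, then a uniformly
    random individual reproduces into the vacancy."""
    dead = rng.choices(range(len(pop)),
                       weights=[1.0 / fitness[s] for s in pop])[0]
    pop[dead] = pop[rng.randrange(len(pop))]
```

The paper's point is that with more than two strategies and high mutation rates, swapping one step function for the other can change which strategies dominate.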
Fixation probabilities on superstars, revisited and revised.
Jamieson-Lane, Alastair; Hauert, Christoph
2015-10-07
Population structures can be crucial determinants of evolutionary processes. For the Moran process on graphs certain structures suppress selective pressure, while others amplify it (Lieberman et al., 2005). Evolutionary amplifiers suppress random drift and enhance selection. Recently, some results for the most powerful known evolutionary amplifier, the superstar, have been invalidated by a counter example (Díaz et al., 2013). Here we correct the original proof and derive improved upper and lower bounds, which indicate that the fixation probability remains close to 1 - 1/(r^4 H) for population size N → ∞ and structural parameter H ≫ 1. This correction resolves the differences between the two aforementioned papers. We also confirm that in the limit N, H → ∞ superstars remain capable of eliminating random drift and hence of providing arbitrarily strong selective advantages to any beneficial mutation. In addition, we investigate the robustness of amplification in superstars and find that it appears to be a fragile phenomenon with respect to changes in the selection or mutation processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Engen, Steinar; Saether, Bernt-Erik
2014-03-01
We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
In Darwinian evolution, feedback from natural selection leads to biased mutations.
Caporale, Lynn Helena; Doyle, John
2013-12-01
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.
ERIC Educational Resources Information Center
Zoblotsky, Todd; Ransford-Kaldon, Carolyn; Morrison, Donald M.
2011-01-01
The present paper describes the recruitment and site selection process that has been underway since January 2011, with particular emphasis on the use of Mahalanobis distance score to determine matched pairs of sites prior to randomization to treatment and control groups. Through a systematic winnowing process, the authors found that they could…
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting. This makes RFs have poor accuracy when working with high-dimensional data. Besides that, RFs have bias in the feature selection process where multivalued features are favored. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets. A feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees, while allowing one to reduce dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets including image datasets. The experimental results have shown that RFs with the proposed approach outperformed the existing random forests in increasing the accuracy and the AUC measures.
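The feature-weighting sampling step described above can be sketched simply: candidate features for each node split are drawn mostly from the informative ("strong") subset, in proportion to their statistical weights. This is an illustrative simplification, not the exact xRF recipe; the two-subset split fraction, sampling with replacement, and all names are assumptions.

```python
import random

def weighted_feature_sample(strong, weak, weights, mtry,
                            strong_frac=0.8, rng=None):
    """Sample mtry candidate features for one node split.

    Most draws come from the strong subset, weighted by the statistical
    measure in `weights`; the remainder come uniformly from the weak
    subset.  Draws are with replacement for simplicity."""
    rng = rng or random.Random()
    n_strong = min(len(strong), round(mtry * strong_frac))
    picked = rng.choices(strong, weights=[weights[f] for f in strong],
                         k=n_strong)
    picked += rng.choices(weak, k=mtry - n_strong)
    return picked
```

Biasing the candidate pool this way is what lets the trees avoid splitting on uninformative, high-cardinality features.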
49 CFR 382.401 - Retention of records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... substances collection process (except calibration of evidential breath testing devices). (3) One year... record is required to be prepared, it must be maintained. (1) Records related to the collection process: (i) Collection logbooks, if used; (ii) Documents relating to the random selection process; (iii...
Silicon solar cell process development, fabrication and analysis
NASA Technical Reports Server (NTRS)
Iles, P. A.; Leung, D. C.
1982-01-01
For UCP Si, randomly selected wafers and wafers cut from two specific ingots were studied. For the randomly selected wafers, a moderate gettering diffusion had little effect; moreover, an efficiency up to 14% AM1 was achieved with advanced processes. For the two specific UCP ingots, ingot #5848-13C displayed severe impurity effects, as shown by lower Jsc in the middle of the ingot and low CFF in the top of the ingot. The middle portions of this ingot also responded to a series of progressively more severe gettering diffusions. Unexplained was the fact that severely gettered samples of this ingot displayed a negative light-biased effect on the minority carrier diffusion length, while the nongettered or moderately gettered ones had the more conventional positive light-biased effect on diffusion length. On the other hand, ingot C-4-21A did not have the problems of ingot 5848-13C and behaved like the randomly selected wafers. The top half of the ingot was shown to be slightly superior to the bottom half, but moderate gettering helped to narrow the gap.
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, the hybrid particle swarm-random forests algorithm and the hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, this hybrid algorithm provides a new way to perform feature selection and parameter optimization.
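The SMOTE step that CURE-SMOTE builds on is a simple interpolation scheme: each synthetic minority sample lies on the segment between two existing minority samples. A minimal sketch (real SMOTE restricts the second point to the k nearest neighbors of the first; here it is drawn uniformly, an assumption made for brevity):

```python
import random

def smote_synthetic(minority, n_new, rng=None):
    """SMOTE-style oversampling: pick a minority point, pick a second
    minority point as its 'neighbor', and interpolate at a random
    fraction between them."""
    rng = rng or random.Random()
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        b = rng.choice(minority)
        lam = rng.random()
        out.append(tuple(x + lam * (y - x) for x, y in zip(a, b)))
    return out
```

CURE-SMOTE's contribution is choosing *which* points participate (cluster representatives rather than noise or outliers), so the synthetic set stays close to the true minority distribution.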
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
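The Birth-death updating on a graph described above is straightforward to simulate, which is how fixation probabilities of such structures are usually estimated numerically. A minimal sketch (fitness r for mutants, 1 for residents; names and the adjacency-list representation are my choices):

```python
import random

def moran_fixation(adj, initial_mutant, r, rng, max_steps=10**6):
    """Moran Birth-death updating on a graph, run until fixation.

    adj: adjacency list (list of neighbor lists).  Each step, a
    reproducer is chosen proportional to fitness and its offspring
    replaces a uniformly random neighbor.  Returns True if the mutant
    lineage takes over, False if it dies out."""
    n = len(adj)
    mutant = [False] * n
    mutant[initial_mutant] = True
    count = 1
    for _ in range(max_steps):
        if count == 0:
            return False
        if count == n:
            return True
        weights = [r if mutant[i] else 1.0 for i in range(n)]
        parent = rng.choices(range(n), weights=weights)[0]
        child = rng.choice(adj[parent])
        if mutant[child] != mutant[parent]:
            count += 1 if mutant[parent] else -1
            mutant[child] = mutant[parent]
    raise RuntimeError("no fixation within max_steps")
```

Averaging the boolean outcome over many runs estimates the fixation probability; comparing that estimate against the well-mixed baseline classifies a graph as amplifier or suppressor.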
Garvin-Doxas, Kathy
2008-01-01
While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500 open-ended (primarily) college student responses, submitted online and analyzed through our Ed's Tools system, together with 28 thematic and think-aloud interviews with students, and the responses of students in introductory and advanced courses to questions on the BCI. Students believe that random processes are inefficient, whereas biological systems are very efficient. They are therefore quick to propose their own rational explanations for various processes, from diffusion to evolution. These rational explanations almost always make recourse to a driver, e.g., natural selection in evolution or concentration gradients in molecular biology, with the process taking place only when the driver is present, and ceasing when the driver is absent. For example, most students believe that diffusion only takes place when there is a concentration gradient, and that the mutational processes that change organisms occur only in response to natural selection pressures. An understanding that random processes take place all the time and can give rise to complex and often counterintuitive behaviors is almost totally absent. Even students who have had advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes, and passing through multiple conventional biology courses appears to have little effect on their underlying beliefs. PMID:18519614
47 CFR 1.926 - Application processing; initial procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 1 2013-10-01 2013-10-01 false Application processing; initial procedures. 1.926 Section 1.926 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Wireless Radio Services Applications and Proceedings Application Requirements...
Ray tracing method for simulation of laser beam interaction with random packings of powders
NASA Astrophysics Data System (ADS)
Kovalev, O. B.; Kovaleva, I. O.; Belyaev, V. V.
2018-03-01
Selective laser sintering is a rapid-manufacturing technology in which a free-form solid object is created by selectively fusing successive layers of powder with a laser. This study is motivated by the currently insufficient understanding of the processes and phenomena of selective laser melting of powders, whose time scales differ by orders of magnitude. To construct random packings from mono- and polydispersed solid spheres, a generation algorithm based on the discrete element method is used. A numerical ray-tracing method is proposed to simulate the interaction of laser radiation with a random bulk packing of spherical particles and to predict the optical properties of the granular layer, the extinction and absorption coefficients, depending on the optical properties of the powder material.
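The core primitive of any such ray tracer is the ray-sphere intersection test applied to every particle in the packing. A self-contained sketch (a brute-force nearest-hit search; a production code would use a spatial acceleration structure, and the function name is my own):

```python
import math

def first_sphere_hit(origin, direction, spheres):
    """Nearest intersection of a ray with a set of spheres.

    direction must be a unit vector; spheres is a list of
    (center, radius) pairs.  Returns (t, sphere_index) for the closest
    hit with t > 0, or None if the ray escapes the packing.
    Solves |origin + t*direction - center|^2 = radius^2 for t."""
    best = None
    for i, (center, radius) in enumerate(spheres):
        oc = tuple(o - c for o, c in zip(origin, center))
        b = sum(d * o for d, o in zip(direction, oc))
        disc = b * b - (sum(o * o for o in oc) - radius * radius)
        if disc < 0.0:
            continue  # ray misses this sphere
        t = -b - math.sqrt(disc)
        if t > 1e-9 and (best is None or t < best[0]):
            best = (t, i)
    return best
```

Tracing many such rays through the random packing, with reflection/absorption at each hit, yields the effective extinction and absorption coefficients of the layer.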
Emotional Intelligence and Life Adjustment for Nigerian Secondary Students
ERIC Educational Resources Information Center
Ogoemeka, Obioma Helen
2013-01-01
In the process of educating adolescents, good emotional development and life adjustment are two significant factors for teachers to know. This study employed random cluster sampling of senior secondary school students in Ondo and Oyo States in south-western Nigeria. Random sampling was employed to select 1,070 students. The data collected were…
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses and the random Poisson-distributed pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital value to a digital-to-analog converter (DAC), and pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
Random covering of the circle: the configuration-space of the free deposition process
NASA Astrophysics Data System (ADS)
Huillet, Thierry
2003-12-01
Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than the rod length s (the packing gas), those for which hard-rod and packing constraints are both fulfilled (parking configurations), and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
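A classical spacings result states that the probability that n rods of length s, dropped at uniform random points on the unit circle, avoid overlap (the hard-rod configuration) is (1 − ns)^(n−1) for ns ≤ 1. A minimal Monte Carlo sketch (function names are illustrative, not from the paper) checks this:

```python
import random

def hard_rod_probability(n, s, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that n rods of length s,
    appended clockwise at uniform random points on a unit circle, do not overlap."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        pts = sorted(rng.random() for _ in range(n))
        # the n circular gaps between consecutive drop points
        gaps = [pts[i + 1] - pts[i] for i in range(n - 1)]
        gaps.append(1.0 - pts[-1] + pts[0])
        if min(gaps) >= s:     # every clockwise gap must fit a whole rod
            ok += 1
    return ok / trials
```

For n = 3 and s = 0.1 the exact value is (1 − 0.3)² = 0.49, which the estimate reproduces closely; for ns > 1 the configuration is impossible and the estimate is 0.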
Inference from habitat-selection analysis depends on foraging strategies.
Bastille-Rousseau, Guillaume; Fortin, Daniel; Dussault, Christian
2010-11-01
1. Several methods have been developed to assess habitat selection, most of which are based on a comparison between habitat attributes in used vs. unused or random locations, such as the popular resource selection functions (RSFs). Spatial evaluation of residency time has been recently proposed as a promising avenue for studying habitat selection. Residency-time analyses assume a positive relationship between residency time within habitat patches and selection. We demonstrate that RSF and residency-time analyses provide different information about the process of habitat selection. Further, we show how the consideration of switching rate between habitat patches (interpatch movements) together with residency-time analysis can reveal habitat-selection strategies. 2. Spatially explicit, individual-based modelling was used to simulate foragers displaying one of six foraging strategies in a heterogeneous environment. The strategies combined one of three patch-departure rules (fixed-quitting-harvest-rate, fixed-time and fixed-amount strategy), together with one of two interpatch-movement rules (random or biased). Habitat selection of simulated foragers was then assessed using RSF, residency-time and interpatch-movement analyses. 3. Our simulations showed that RSFs and residency times are not always equivalent. When foragers move in a non-random manner and do not increase residency time in richer patches, residency-time analysis can provide misleading assessments of habitat selection. This is because the overall time spent in the various patch types not only depends on residency times, but also on interpatch-movement decisions. 4. We suggest that RSFs provide the outcome of the entire selection process, whereas residency-time and interpatch-movement analyses can be used in combination to reveal the mechanisms behind the selection process. 5. We showed that there is a risk in using residency-time analysis alone to infer habitat selection. 
Residency-time analyses, however, may shed light on the mechanisms of habitat selection by revealing central components of resource-use strategies. Given that management decisions are often based on resource-selection analyses, the evaluation of resource-use strategies can be key information for the development of efficient habitat-management strategies. Combining RSF, residency-time and interpatch-movement analyses is a simple and efficient way to gain a more comprehensive understanding of habitat selection. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
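For the neutral baseline mentioned above, Kingman's coalescent predicts an expected time to the most recent common ancestor of n lineages of 2(1 − 1/n), in units of N generations. A minimal simulation sketch (names are illustrative, not the paper's code) reproduces this:

```python
import random

def sample_tmrca(n, rng):
    """Time to most recent common ancestor of n lineages under Kingman's
    coalescent (time measured in units of N generations)."""
    t = 0.0
    k = n
    while k > 1:
        rate = k * (k - 1) / 2.0      # each of the k-choose-2 pairs coalesces at rate 1
        t += rng.expovariate(rate)    # exponential waiting time until the next merger
        k -= 1
    return t

def mean_tmrca(n, reps=50_000, seed=0):
    rng = random.Random(seed)
    return sum(sample_tmrca(n, rng) for _ in range(reps)) / reps
```

For n = 10 the theoretical mean is 2(1 − 1/10) = 1.8; the ancestral selection graph modifies this picture by adding branching events at a rate proportional to the selection intensity.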
Selecting materialized views using random algorithm
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse: they are stored for the purpose of efficiently implementing on-line analytical processing queries, and query response time is the first issue the user considers. In this paper we therefore develop algorithms to select a set of views to materialize in the data warehouse so as to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, a cost graph and cost model for the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. A genetic algorithm is applied to the materialized-view selection problem, but as the genetic process proceeds, producing legal solutions becomes increasingly difficult: many candidate solutions are eliminated and the time needed to generate solutions grows. We therefore present an improved algorithm that combines simulated annealing with the genetic algorithm to solve the query-cost view-selection problem. Finally, simulation experiments were carried out to test the effectiveness and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
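A minimal sketch of the simulated-annealing component, under a deliberately simplified cost model: maximize the total query benefit of the chosen views subject to a maintenance-cost budget. The paper's cost graph is richer than this, and all names and parameters here are illustrative assumptions.

```python
import math
import random

def anneal_view_selection(n_views, query_benefit, maint_cost, budget,
                          steps=20_000, t0=1.0, cooling=0.9995, seed=0):
    """Toy simulated annealing for view selection: flip one view in or out per
    step, accept worse states with Boltzmann probability, track the best seen."""
    rng = random.Random(seed)
    state = [False] * n_views

    def score(s):
        cost = sum(c for c, sel in zip(maint_cost, s) if sel)
        if cost > budget:
            return -float("inf")          # infeasible states are always rejected
        return sum(b for b, sel in zip(query_benefit, s) if sel)

    best, best_score, cur_score, t = list(state), score(state), score(state), t0
    for _ in range(steps):
        i = rng.randrange(n_views)
        state[i] = not state[i]           # propose flipping one view
        new_score = score(state)
        if new_score >= cur_score or rng.random() < math.exp((new_score - cur_score) / t):
            cur_score = new_score
            if cur_score > best_score:
                best, best_score = list(state), cur_score
        else:
            state[i] = not state[i]       # undo the rejected flip
        t *= cooling
    return best, best_score
```

The same move/score structure can be embedded inside a genetic algorithm's mutation step, which is roughly the hybridization the abstract describes.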
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. 
Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith
2017-01-01
Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it requires participants to comply with a randomly assigned intervention regardless of their preference. The randomized clinical trial may therefore impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design" (PORD). In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed-consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to have no alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) to collectively guide decision making between interventions.
Statistical power functions for testing all these effects are derived, and simulations verified the validity of the power functions under normal and binomial distributions.
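For the comparative effect alone, the familiar two-sided two-sample z-test power function gives the flavor of such calculations. This is a generic sketch under a normal-distribution assumption, not the paper's PORD-specific derivation; the function name and parameters are illustrative.

```python
from statistics import NormalDist

def two_arm_power(delta, sigma, n_per_arm, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a mean
    difference delta with common standard deviation sigma."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = delta / (sigma * (2 / n_per_arm) ** 0.5)   # noncentrality parameter
    # probability of rejecting in either tail under the alternative
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)
```

For example, detecting a standardized difference of 0.5 with 64 participants per arm gives power of roughly 0.80, and power reduces to the significance level alpha when delta = 0.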
Hancock, Laura M; Bruce, Jared M; Bruce, Amanda S; Lynch, Sharon G
2015-01-01
Between 40% and 65% of patients with multiple sclerosis experience cognitive deficits, with processing speed and working memory most commonly affected. This pilot study investigated the effect of computerized cognitive training focused on improving processing speed and working memory. Participants were randomized into either an active or a sham training group and engaged in six weeks of training. The active training group improved on a measure of processing speed and attention following cognitive training, and data trended toward significance on measures of other domains. Results provide preliminary evidence that cognitive training with multiple sclerosis patients may produce moderate improvement in select areas of cognitive functioning.
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Lisonek, Petr; Soukal, David
2005-03-01
In this paper, we show that the communication channel known as writing in memory with defective cells is a relevant information-theoretical model for a specific case of passive warden steganography when the sender embeds a secret message into a subset C of the cover object X without sharing the selection channel C with the recipient. The set C could be arbitrary, determined by the sender from the cover object using a deterministic, pseudo-random, or a truly random process. We call this steganography "writing on wet paper" and realize it using low-density random linear codes with the encoding step based on the LT process. The importance of writing on wet paper for covert communication is discussed within the context of adaptive steganography and perturbed quantization steganography. Heuristic arguments supported by tests using blind steganalysis indicate that the wet paper steganography provides improved steganographic security for embedding in JPEG images and is less vulnerable to attacks when compared to existing methods with shared selection channels.
Listeners modulate temporally selective attention during natural speech processing
Astheimer, Lori B.; Sanders, Lisa D.
2009-01-01
Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception. PMID:18395316
Different hunting strategies select for different weights in red deer.
Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; San Miguel, Alfonso
2005-09-22
Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represent a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data. PMID:17148205
Ensemble Feature Learning of Genomic Data Using Support Vector Machine
Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.
2016-01-01
The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, and mostly for classification, not gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward-elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models on randomly drawn bootstrap samples from the training set produces different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate a feature is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing a nearly balanced bootstrap sample. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that an average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over the random forest based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters within the selected data. PMID:27304923
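The ensemble-RFE idea can be sketched in pure Python. As an assumption for self-containedness, a small logistic-regression model trained by gradient descent stands in for the SVM; all function names are illustrative, not the authors' implementation.

```python
import math
import random

def train_linear(X, y, epochs=200, lr=0.1):
    """Logistic regression fitted by plain gradient descent (stand-in for the SVM)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            for j in range(d):
                grad[j] += (p - yi) * xi[j]
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

def rfe_ranking(X, y):
    """Backward elimination: repeatedly train and drop the feature with the
    smallest |weight|; rank 1 = eliminated last = most important."""
    remaining = list(range(len(X[0])))
    rank = [0] * len(remaining)
    while remaining:
        Xs = [[row[j] for j in remaining] for row in X]
        w = train_linear(Xs, y)
        k = min(range(len(remaining)), key=lambda j: abs(w[j]))
        feat = remaining.pop(k)
        rank[feat] = len(remaining) + 1
    return rank

def ensemble_rfe(X, y, n_models=10, seed=0):
    """Average RFE ranks over bootstrap resamples (the ESVM-RFE aggregation idea)."""
    rng = random.Random(seed)
    n = len(X)
    total = [0.0] * len(X[0])
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]
        r = rfe_ranking([X[i] for i in idx], [y[i] for i in idx])
        total = [t + ri for t, ri in zip(total, r)]
    return [t / n_models for t in total]
```

On data where only the first feature carries class signal, the averaged rank of that feature should be the smallest (best), mirroring the aggregation the abstract describes.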
NASA Technical Reports Server (NTRS)
Holmquist, R.
1978-01-01
The random evolutionary hits (REH) theory of evolutionary divergence, originally proposed in 1972, is restated with attention to certain aspects of the theory that have caused confusion. The theory assumes that natural selection and stochastic processes interact and that natural selection restricts those codon sites which may fix mutations. The predicted total number of fixed nucleotide replacements agrees with data for cytochrome c, alpha-hemoglobin, beta-hemoglobin, and myoglobin. The restatement analyzes the magnitude of possible sources of errors and simplifies calculational methodology by supplying polynomial expressions to replace tables and graphs.
Coevolutionary dynamics in large, but finite populations
NASA Astrophysics Data System (ADS)
Traulsen, Arne; Claussen, Jens Christian; Hauert, Christoph
2006-07-01
Coevolving and competing species or game-theoretic strategies exhibit rich and complex dynamics for which a general theoretical framework based on finite populations is still lacking. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection with two strategies in finite populations based on microscopic processes [A. Traulsen, J. C. Claussen, and C. Hauert, Phys. Rev. Lett. 95, 238701 (2005)]. Here we generalize this approach in a twofold way: First, we extend the framework to an arbitrary number of strategies and second, we allow for mutations in the evolutionary process. The deterministic limit of infinite population size of the frequency-dependent Moran process yields the adjusted replicator-mutator equation, which describes the combined effect of selection and mutation. For finite populations, we provide an extension taking random drift into account. In the limit of neutral selection, i.e., whenever the process is determined by random drift and mutations, the stationary strategy distribution is derived. This distribution forms the background for the coevolutionary process. In particular, a critical mutation rate u_c is obtained separating two scenarios: above u_c the population predominantly consists of a mixture of strategies whereas below u_c the population tends to be in homogeneous states. For one of the fundamental problems in evolutionary biology, the evolution of cooperation under Darwinian selection, we demonstrate that the analytical framework provides excellent approximations to individual based simulations even for rather small population sizes. This approach complements simulation results and provides a deeper, systematic understanding of coevolutionary dynamics.
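As a concrete check of the finite-population framework, the Moran process with constant selection (a special case of frequency-dependent selection, without mutation) has the well-known fixation probability ρ = (1 − 1/r)/(1 − r^(−N)) for a single mutant of relative fitness r. A simulation sketch with illustrative names:

```python
import random

def moran_fixation(N, r, runs=10_000, seed=0):
    """Fraction of runs in which a single mutant of relative fitness r
    fixes in a Moran process of population size N (no mutation)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(runs):
        i = 1                                        # current number of mutants
        while 0 < i < N:
            # reproduction is fitness-weighted, death is uniform
            birth_mutant = rng.random() < r * i / (r * i + N - i)
            death_mutant = rng.random() < i / N
            i += (1 if birth_mutant else 0) - (1 if death_mutant else 0)
        fixed += (i == N)
    return fixed / runs
```

For N = 10 and r = 2 the formula gives about 0.50, and the neutral case r = 1 recovers the drift result 1/N.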
Valenzuela, Carlos Y
2013-01-01
The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is genomic stability during paleontological eras and the lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that contributes neither to evolution nor to polymorphism. Purifying selection is insufficient to account for this evolutionary feature, and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as the mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation, together with random fluctuations of base frequencies at a site, would alone make the organization of life and fixation impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens biotic organization. Drift cannot drive evolution. At a site, the mutation rates among bases and the selection coefficients determine a resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because the effect of drift depends upon N. Also, chromosome size and shape, as well as protein size, are far from random.
Exploring the repetition bias in voluntary task switching.
Mittelstädt, Victor; Dignath, David; Schmidt-Ott, Magdalena; Kiesel, Andrea
2018-01-01
In the voluntary task-switching paradigm, participants are required to randomly select tasks. We reasoned that the consistent finding of a repetition bias (i.e., participants repeat tasks more often than expected by chance) reflects reasonable adaptive task-selection behavior that balances the goal of random task selection with the goals of minimizing the time and effort of task performance. We conducted two experiments in which participants were provided with a variable amount of preview of the non-chosen task stimuli (i.e., potential switch stimuli). We assumed that switch stimuli would initiate some pre-processing, resulting in improved performance on switch trials. Results showed that reduced switch costs due to extra preview in advance of each trial were accompanied by more task switches. This finding is in line with the characteristics of rational adaptive behavior. However, participants were not biased to switch tasks more often than chance despite large switch benefits. We suggest that participants might avoid effortful additional control processes that modulate the effects of preview on task performance and task choice.
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences in population responses recorded by optical imaging can be revealed using the vector-maximum but not the vector-summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
In vitro selection using a dual RNA library that allows primerless selection
Jarosch, Florian; Buchner, Klaus; Klussmann, Sven
2006-01-01
High-affinity target-binding aptamers are identified from random oligonucleotide libraries by an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX). Since the SELEX process includes a PCR amplification step, the randomized region of an oligonucleotide library needs to be flanked by two fixed primer-binding sequences. These primer-binding sites are often difficult to truncate because they may be necessary to maintain the structure of the aptamer or may even be part of the target-binding motif. We designed a novel type of RNA library that carries fixed sequences which constrain the oligonucleotides into a partly double-stranded structure, thereby minimizing the risk that the primer-binding sequences become part of the target-binding motif. Moreover, the specific design of the library, including the use of tandem RNA polymerase promoters, allows the selection of oligonucleotides without any primer-binding sequences. The library was used to select aptamers to the mirror-image peptide of ghrelin. Ghrelin is a potent stimulator of growth-hormone release and food intake. After selection, the identified aptamer sequences were directly synthesized in their mirror-image configuration. The final 44 nt Spiegelmer, named NOX-B11-3, blocks ghrelin action in a cell culture assay, displaying an IC50 of 4.5 nM at 37°C. PMID:16855281
van Rossum, Joris
2006-01-01
In its essence, the explanatory potential of the theory of natural selection is based on the iterative process of random production and variation, and subsequent non-random, directive selection. It is shown that within this explanatory framework, there is no place for the explanation of sexual reproduction. Thus in Darwinistic literature, sexual reproduction - one of nature's most salient characteristics - is often either assumed or ignored, but not explained. This fundamental and challenging gap within a complete naturalistic understanding of living beings calls for the need of a cybernetic account for sexual reproduction, meaning an understanding of the dynamic and creative potential of living beings to continuously and autonomously produce new organisms with unique and specific constellations.
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise-suppression algorithm based on wave-field separation, which exploits the scale differences between effective signal and noise. However, because the complexity of the real seismic wave field produces serious mode aliasing, denoising with this method alone is not effective. Building on the multi-scale decomposition of the EMD algorithm and combining it with Hausdorff-dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data into a series of intrinsic mode functions (IMFs) of different scales. Based on the difference in Hausdorff dimension between effective signal and random noise, we identify the IMF components mixed with random noise. We then use a threshold correlation filtering process to separate the effective signal from the random noise. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. The implementation process: The EMD algorithm decomposes the seismic signals into IMF sets, whose spectra are then analyzed. Since most of the random noise is high-frequency, the IMF sets can be divided into three categories: the first comprises larger-scale components of the effective wave; the second comprises smaller-scale components that are noise; the third comprises IMF components containing a mixture of signal and random noise. The third kind of IMF component is then processed with the Hausdorff dimension algorithm: an appropriate time-window size, initial step, and increment are selected to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise lies between 1.0 and 1.05, while the dimension of the effective wave lies between 1.05 and 2.0.
On this basis, using the dimension difference between random noise and effective signal, we extract from each IMF component the sample points whose fractal dimension is less than or equal to 1.05, thereby separating out the residual noise. Reconstructing from the IMF components after this dimension filtering, together with the effective-wave IMF components retained in the first selection, yields the denoised result.
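The dimension-based discrimination at the heart of this method can be sketched numerically. The fragment below is an illustrative sketch, not the authors' code: a Higuchi-type fractal-dimension estimator stands in for their Hausdorff instantaneous dimension (absolute values and thresholds depend on the estimator and windowing convention, so the 1.05 cutoff above does not carry over directly), and in practice the components being scored would be IMFs produced by an EMD implementation such as the third-party PyEMD package.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension: slope of log L(k) versus log(1/k)."""
    N = len(x)
    lk = []
    for k in range(1, kmax + 1):
        Lm = []
        for m in range(k):
            idx = np.arange(m, N, k)          # coarse-grained subsequence
            diffs = np.abs(np.diff(x[idx]))
            norm = (N - 1) / (len(diffs) * k) # Higuchi length normalization
            Lm.append(diffs.sum() * norm / k)
        lk.append(np.mean(Lm))
    k_arr = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_arr), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048)
smooth = np.sin(2 * np.pi * 5 * t)       # stand-in for an effective-wave component
noise = rng.standard_normal(2048)        # stand-in for a random-noise component

fd_smooth = higuchi_fd(smooth)
fd_noise = higuchi_fd(noise)
# The two classes separate cleanly, so a fixed dimension threshold between
# them can route components to the "keep" or "filter" branch.
```

With this estimator the smooth signal scores near 1 and white noise near 2; any threshold between the two populations implements the routing step described in the abstract.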
Different hunting strategies select for different weights in red deer
Martínez, María; Rodríguez-Vigal, Carlos; Jones, Owen R; Coulson, Tim; Miguel, Alfonso San
2005-01-01
Much insight can be derived from records of shot animals. Most researchers using such data assume that their data represents a random sample of a particular demographic class. However, hunters typically select a non-random subset of the population and hunting is, therefore, not a random process. Here, with red deer (Cervus elaphus) hunting data from a ranch in Toledo, Spain, we demonstrate that data collection methods have a significant influence upon the apparent relationship between age and weight. We argue that a failure to correct for such methodological bias may have significant consequences for the interpretation of analyses involving weight or correlated traits such as breeding success, and urge researchers to explore methods to identify and correct for such bias in their data. PMID:17148205
Jeyasingh, Suganthi; Veluchamy, Malathi
2017-05-01
Early diagnosis of breast cancer is essential to save patients' lives. Medical datasets typically include a wide variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency; it requires eliminating inappropriate and repeated data from the dataset before final diagnosis, which can be done with any of the feature selection algorithms available in data mining. Feature selection is considered a vital step for increasing classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection that eliminates irrelevant features from the original dataset. The Bat algorithm was modified using simple random sampling to select random instances from the dataset; ranking against the global best features identifies the predominant features in the dataset. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection algorithm enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used to evaluate the performance of the proposed MBA feature selection algorithm, which achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE), and Root Relative Squared Error (RRSE).
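The wrapper idea underlying such methods — scoring candidate feature subsets by the accuracy of a downstream classifier — can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: plain random-mask search stands in for the bat-algorithm dynamics, a nearest-centroid classifier stands in for Random Forest, and the data are synthetic rather than WDBC.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: 10 features, only features 0-2 carry class signal.
n, d = 400, 10
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d))
X[:, :3] += 2.0 * y[:, None]            # informative shift on features 0-2

train, test = np.arange(0, 300), np.arange(300, n)

def score(mask):
    """Holdout accuracy of a nearest-centroid classifier on the masked features."""
    if not mask.any():
        return 0.0
    cols = np.flatnonzero(mask)
    Xt, Xv = X[np.ix_(train, cols)], X[np.ix_(test, cols)]
    c0 = Xt[y[train] == 0].mean(axis=0)
    c1 = Xt[y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(Xv - c1, axis=1)
            < np.linalg.norm(Xv - c0, axis=1)).astype(int)
    return (pred == y[test]).mean()

# Random-mask search: each candidate mask plays the role of one "bat" position.
best_mask, best_acc = None, -1.0
for _ in range(60):
    mask = rng.random(d) < 0.5
    acc = score(mask)
    if acc > best_acc:
        best_mask, best_acc = mask, acc
```

The search reliably recovers masks dominated by the informative features; the actual MBA replaces the blind random proposals with frequency/velocity updates pulled toward the global best.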
People's Need for Additional Job Training: Development and Evaluation of an Assessment Procedure.
ERIC Educational Resources Information Center
Copa, George H.; Maurice, Clyde F.
A procedure was developed and evaluated for assessing the self-perceived educational needs of people as one input to the process of planning, approving, and implementing relevant educational programs. The method of data collection involved selecting samples of people by randomly selecting households in a given geographic area, and then contacting…
What Every Public School Physical Educator Should Know about the Hiring Process
ERIC Educational Resources Information Center
Stier, William F., Jr.; Schneider, Robert C.
2007-01-01
A national survey of high school principals was conducted to determine whether they agreed or disagreed with selected practices and procedures used to hire high school physical education teachers. A survey instrument, developed with the help of experts in the field and consisting of 29 items, was sent to 400 randomly selected principals. Useable…
ERIC Educational Resources Information Center
Wentling, Rose Mary; Palma-Rivas, Nilda
The current status of diversity initiatives in eight U.S.-based multinational corporations was examined through a process involving semistructured interviews of diversity managers and analysis of their annual reports for fiscal 1996 and related documents. The 8 corporations were randomly selected from the 30 multinational corporations in Illinois.…
USDA-ARS?s Scientific Manuscript database
The concentration of mercury, cadmium, lead, and arsenic along with glyphosate and an extensive array of pesticides in the U.S. peanut crop was assessed for crop years 2013-2015. Samples were randomly selected from various buying points during the grading process. Samples were selected from the thre...
Results from the Biology Concept Inventory (BCI), and what they mean for biogeoscience literacy.
NASA Astrophysics Data System (ADS)
Garvin-Doxas, K.; Klymkowsky, M.
2008-12-01
While researching the Biology Concept Inventory (BCI) we found that a wide class of student difficulties in genetics and molecular biology can be traced to deep-seated misconceptions about random processes and molecular interactions. Students believe that random processes are inefficient, while biological systems are very efficient, and are therefore quick to propose their own rational explanations for various processes (from diffusion to evolution). These rational explanations almost always make recourse to a driver (natural selection in genetics, or density gradients in molecular biology) with the process only taking place when the driver is present. The concept of underlying random processes that are taking place all the time giving rise to emergent behaviour is almost totally absent. Even students who have advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes. Furthermore, their understanding of molecular interactions is purely geometric, with a lock-and-key model (rather than an energy minimization model) that does not allow for the survival of slight variations of the "correct" molecule. Together with the dominant misconception about random processes, this results in a strong conceptual barrier in understanding evolutionary processes, and can frustrate the success of education programs.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.
Information Selection in Intelligence Processing
2011-12-01
given. Edges connecting nodes representing irrelevant persons with either relevant or irrelevant persons are added randomly, as in an Erdos-Renyi graph (Erdos & Renyi, 1959): For each irrelevant node i, and another node j (either relevant or irrelevant), there is a predetermined probability that...statistics for engineering and the sciences (7th ed.). Boston: Duxbury Press. Erdos, P., & Renyi, A. (1959). "On Random Graphs," Publicationes
Primer-Free Aptamer Selection Using A Random DNA Library
Pan, Weihua; Xin, Ping; Patrick, Susan; Dean, Stacey; Keating, Christine; Clawson, Gary
2010-01-01
Aptamers are highly structured oligonucleotides (DNA or RNA) that can bind to targets with affinities comparable to antibodies [1]. They are identified through an in vitro selection process called Systematic Evolution of Ligands by EXponential enrichment (SELEX) to recognize a wide variety of targets, from small molecules to proteins and other macromolecules [2-4]. Aptamers have properties that are well suited for in vivo diagnostic and/or therapeutic applications: besides good specificity and affinity, they are easily synthesized, survive more rigorous processing conditions, are poorly immunogenic, and their relatively small size can result in facile penetration of tissues. Aptamers identified through the standard SELEX process usually comprise ~80 nucleotides (nt), since they are typically selected from nucleic acid libraries with ~40-nt randomized regions plus fixed primer sites of ~20 nt on each side. The fixed primer sequences can thus comprise nearly ~50% of the library sequences, and may therefore positively or negatively compromise identification of aptamers in the selection process [3], although bioinformatics approaches suggest that the fixed sequences do not contribute significantly to aptamer structure after selection [5]. To address these potential problems, primer sequences have been blocked by complementary oligonucleotides or switched to different sequences midway through the rounds of SELEX [6], or they have been trimmed to 6-9 nt [7,8]. Wen and Gray [9] designed a primer-free genomic SELEX method, in which the primer sequences were completely removed from the library before selection and then regenerated to allow amplification of the selected genomic fragments. However, to employ the technique, a unique genomic library has to be constructed, which possesses limited diversity, and regeneration after rounds of selection relies on a linear reamplification step.
Alternatively, efforts to circumvent problems caused by fixed primer sequences using high-efficiency partitioning are met with problems regarding PCR amplification [10]. We have developed a primer-free (PF) selection method that significantly simplifies SELEX procedures and effectively eliminates primer-interference problems [11,12]. The protocols work in a straightforward manner: the central random region of the library is purified without extraneous flanking sequences and is bound to a suitable target (for example, a purified protein or a complex mixture such as a cell line). The bound sequences are then obtained, reunited with flanking sequences, and re-amplified to generate selected sub-libraries. As an example, here we selected aptamers to S100B, a protein marker for melanoma. Binding assays showed Kd values in the 10^-7 to 10^-8 M range after a few rounds of selection, and we demonstrate that the aptamers function effectively in a sandwich binding format. PMID:20689511
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One well-known and widely applied clustering method is K-Means. In its application, the choice of the initial cluster centers greatly affects the results of the K-Means algorithm. This research compares the results of K-Means clustering when the starting centroids are determined randomly versus with a KD-Tree method. On a dataset of 1000 students' academic records used to classify potential dropouts, random initial centroids yield an SSE of 952972 for the quality variable and 232.48 for the GPA variable, whereas KD-Tree initial centroids yield an SSE of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with KD-Tree initial centroid selection is more accurate than K-Means clustering with random initial centroid selection.
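The effect of centroid initialization on the final SSE can be reproduced in a few lines. The sketch below is illustrative only: the paper's KD-Tree initializer is replaced by a simple farthest-point heuristic, which likewise spreads the initial centroids out, and the data are synthetic well-separated clusters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three well-separated synthetic clusters, 50 points each.
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X = np.vstack([c + 0.5 * rng.standard_normal((50, 2)) for c in centers])

def lloyd(X, C, iters=50):
    """Plain Lloyd iterations from initial centroids C; returns (labels, SSE)."""
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(len(C)):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(axis=0)
    sse = ((X - C[labels]) ** 2).sum()
    return labels, sse

# Random initialization: three data points chosen at random.
C_rand = X[rng.choice(len(X), 3, replace=False)].copy()
_, sse_rand = lloyd(X, C_rand)

# Spread-out initialization: start at one point, then repeatedly take the
# point farthest from all centroids chosen so far.
C_fp = [X[0]]
for _ in range(2):
    d = np.min(np.linalg.norm(X[:, None, :] - np.array(C_fp)[None, :, :],
                              axis=2), axis=1)
    C_fp.append(X[d.argmax()])
labels_fp, sse_fp = lloyd(X, np.array(C_fp))
```

On separated clusters the spread-out initialization places one centroid per true cluster and Lloyd's algorithm reaches the global SSE minimum, whereas a random draw can land two centroids in one cluster and converge to a worse local optimum — the same qualitative gap the paper reports for its KD-Tree initializer.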
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...
ERIC Educational Resources Information Center
Hashemnezhad, Hossein; Zangalani, Sanaz Khalili
2012-01-01
The aim of the present paper was to investigate the effects of processing instruction and traditional instruction on Iranian EFL learners' writing ability. Thirty participants who were non-randomly selected out of 63 Intermediate EFL learners, taking English courses in a language institute in Khoy-Iran, participated in this quasi-experimental…
Vickers, Andrew J; Young-Afat, Danny A; Ehdaie, Behfar; Kim, Scott Yh
2018-02-01
Informed consent for randomized trials often causes significant and persistent anxiety, distress and confusion to patients. Where an experimental treatment is compared to a standard care control, much of this burden is potentially avoidable in the control group. We propose a "just-in-time" consent in which consent discussions take place in two stages: an initial consent to research from all participants and a later specific consent to randomized treatment only from those assigned to the experimental intervention. All patients are first approached and informed about research procedures, such as questionnaires or tests. They are also informed that they might be randomly selected to receive an experimental treatment and that, if selected, they can learn more about the treatment and decide whether or not to accept it at that time. After randomization, control patients undergo standard clinical consent whereas patients randomized to the experimental procedure undergo a second consent discussion. Analysis would be by intent-to-treat, which protects the trial from selection bias, although not from poor acceptance of experimental treatment. The advantages of just-in-time consent stem from the fact that only patients randomized to the experimental treatment are subject to a discussion of that intervention. We hypothesize that this will reduce much of the patient's burden associated with the consent process, such as decisional anxiety, confusion and information overload. We recommend well-controlled studies to compare just-in-time and traditional consent, with endpoints to include characteristics of participants, distress and anxiety and participants' understanding of research procedures.
A Metacommunity Framework for Enhancing the Effectiveness of Biological Monitoring Strategies
Roque, Fabio O.; Cottenie, Karl
2012-01-01
Because of inadequate knowledge and funding, the use of biodiversity indicators is often suggested as a way to support management decisions. Consequently, many studies have analyzed the performance of certain groups as indicator taxa. However, in addition to knowing whether certain groups can adequately represent the biodiversity as a whole, we must also know whether they show similar responses to the main structuring processes affecting biodiversity. Here we present an application of the metacommunity framework for evaluating the effectiveness of biodiversity indicators. Although the metacommunity framework has contributed to a better understanding of biodiversity patterns, there is still limited discussion about its implications for conservation and biomonitoring. We evaluated the effectiveness of indicator taxa in representing spatial variation in macroinvertebrate community composition in Atlantic Forest streams, and the processes that drive this variation. We focused on analyzing whether some groups conform to environmental processes and other groups are more influenced by spatial processes, and on how this can help in deciding which indicator group or groups should be used. We showed that a relatively small subset of taxa from the metacommunity would represent 80% of the variation in community composition shown by the entire metacommunity. Moreover, this subset does not have to be composed of predetermined taxonomic groups, but rather can be defined based on random subsets. We also found that some random subsets composed of a small number of genera performed better in responding to major environmental gradients. There were also random subsets that seemed to be affected by spatial processes, which could indicate important historical processes. 
We were able to integrate in the same theoretical and practical framework, the selection of biodiversity surrogates, indicators of environmental conditions, and more importantly, an explicit integration of environmental and spatial processes into the selection approach. PMID:22937068
Dual-state modulation of the contextual cueing effect: Evidence from eye movement recordings.
Zhao, Guang; Liu, Qiang; Jiao, Jun; Zhou, Peiling; Li, Hong; Sun, Hong-jin
2012-06-08
Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the RT × Set Size function. There are divergent views on whether the contextual cueing effect is driven by attentional guidance, by facilitation of initial perceptual processing, or by response selection. To explore this question, we recorded eye movements in this study, which offer information about the sub-stages of the search task. The results suggest that the contextual cueing effect is driven mainly by attentional guidance, although facilitation of response selection also plays a role.
Duijts, Saskia FA; Kant, IJmert; Swaen, Gerard MH
2007-01-01
Background It is unclear if objective selection of employees, for an intervention to prevent sickness absence, is more effective than subjective 'personal enlistment'. We hypothesize that objectively selected employees are 'at risk' for sickness absence and eligible to participate in the intervention program. Methods The dispatch of 8603 screening instruments forms the starting point of the objective selection process. Different stages of this process, throughout which employees either dropped out or were excluded, were described and compared with the subjective selection process. Characteristics of ineligible and ultimately selected employees, for a randomized trial, were described and quantified using sickness absence data. Results Overall response rate on the screening instrument was 42.0%. Response bias was found for the parameters sex and age, but not for sickness absence. Sickness absence was higher in the 'at risk' (N = 212) group (42%) compared to the 'not at risk' (N = 2503) group (25%) (OR 2.17 CI 1.63–2.89; p = 0.000). The selection process ended with the successful inclusion of 151 eligible, i.e. 2% of the approached employees in the trial. Conclusion The study shows that objective selection of employees for early intervention is effective. Despite methodological and practical problems, selected employees are actually those at risk for sickness absence, who will probably benefit more from the intervention program than others. PMID:17474980
Estimation and classification by sigmoids based on mutual information
NASA Technical Reports Server (NTRS)
Baram, Yoram
1994-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
A Randomized Controlled Trial of an Electronic Informed Consent Process
Rothwell, Erin; Wong, Bob; Rose, Nancy C.; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A.; Botkin, Jeffrey R.
2018-01-01
A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. PMID:25747685
Comparative study of feature selection with ensemble learning using SOM variants
NASA Astrophysics Data System (ADS)
Filali, Ameni; Jlassi, Chiraz; Arous, Najet
2017-03-01
Ensemble learning has improved the stability and accuracy of clustering, but its runtime prohibits scaling up to real-world applications. This study addresses the problem of selecting, for every cluster, a subset of the most pertinent features from a dataset. The proposed method is an extension of the Random Forests approach, using self-organizing map (SOM) variants on unlabeled data to estimate out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We show that the internal estimates used to measure variable importance in Random Forests are also applicable to feature selection in unsupervised learning. The approach aims at dimensionality reduction, visualization, and cluster characterization at the same time. We provide empirical results on nineteen benchmark data sets indicating that RFS can lead to significant improvements in clustering accuracy, over several state-of-the-art unsupervised methods, with a very limited subset of features. The approach shows promise for very broad domains.
Is Attention Shared Between the Ears?
Shiffrin, Richard M.; Pisoni, David B.; Castaneda-Mendez, Kicab
2012-01-01
This study tests the locus of attention during selective listening for speech-like stimuli. Can processing be differentially allocated to the two ears? Two conditions were used. The simultaneous condition involved one of four randomly chosen stop-consonants being presented to one of the ears chosen at random. The sequential condition involved two intervals; in the first S listened to the right ear; in the second S listened to the left ear. One of the four consonants was presented to an attended ear during one of these intervals. Experiment I used no distracting stimuli. Experiment II utilized a distracting consonant not confusable with any of the four target consonants. This distractor was always presented to any ear not containing a target. In both experiments, simultaneous and sequential performance were essentially identical, despite the need for attention sharing between the two ears during the simultaneous condition. We conclude that selective attention does not occur during perceptual processing of speech sounds presented to the two ears. We suggest that attentive effects arise in short-term memory following processing. PMID:23226838
ERIC Educational Resources Information Center
Tang, Eunice Lai-Yiu; Lee, John Chi-Kin; Chun, Cecilia Ka-Wai
2012-01-01
This study sets out to investigate how pre-service ESL teachers shape their beliefs in the process of experimenting with new teaching methods introduced in the teacher education programme. A 4-year longitudinal study was conducted with four randomly selected ESL pre-service teachers. Their theoretical orientations of ESL instruction were tracked…
ERIC Educational Resources Information Center
Buium, Nissan; Turnure, James E.
In a replication of a similar study with American children, 56 normal native Israeli children (5-years-old) were studied to determine the universality of self-generated verbal mediators as a means of enhancing memory processes. Eight Ss, randomly selected, were assigned in each of the following conditions: labeling, sentence generation, listening…
Evolutionary games on cycles with strong selection
NASA Astrophysics Data System (ADS)
Altrock, P. M.; Traulsen, A.; Nowak, M. A.
2017-02-01
Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast in a dominance game, that fixation takes very long in a coexistence game, and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection and show that there are coexistence games in which fixation occurs in time polynomial in population size. Depending on the underlying game, demographic noise persists even under strong selection if the process is driven by random death before selection for birth of an offspring (death-birth update). In contrast, if selection for an offspring occurs before random removal (birth-death update), then strong selection can remove demographic noise almost entirely.
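The basic object of study here — the fixation probability of a single mutant — can be estimated by direct simulation. The sketch below is a minimal illustration, not the authors' cycle-graph analysis: it runs a well-mixed Moran (birth-death) process with constant selection and checks the estimate against the classical closed form, confirming that a beneficial mutant (relative fitness r > 1) fixes far more often than the neutral baseline 1/N.

```python
import numpy as np

rng = np.random.default_rng(0)

def moran_fixation(N, r, runs):
    """Fraction of runs in which a single mutant of relative fitness r
    fixes in a well-mixed Moran birth-death process of size N."""
    fixed = 0
    for _ in range(runs):
        i = 1                                         # current mutant count
        while 0 < i < N:
            # Reproducer chosen proportional to fitness, then a uniformly
            # random individual dies and is replaced by the offspring.
            p_mut_reproduces = r * i / (r * i + (N - i))
            if rng.random() < p_mut_reproduces:
                if rng.random() < (N - i) / N:        # a resident dies
                    i += 1
            else:
                if rng.random() < i / N:              # a mutant dies
                    i -= 1
        fixed += (i == N)
    return fixed / runs

N, r = 10, 2.0
p_hat = moran_fixation(N, r, runs=2000)
p_theory = (1 - 1 / r) / (1 - 1 / r ** N)   # classical fixation probability
```

For N = 10 and r = 2 the theory gives roughly 0.50, five times the neutral value 1/N = 0.1; graph structure then modulates exactly this quantity, and the paper's contribution concerns the corresponding fixation *times* under strong selection.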
NASA Astrophysics Data System (ADS)
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables in the original spectral representation (OSR) formula are reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set for the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Combined with the probability density evolution method (PDEM), the approach can therefore be used for dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulent wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions, demonstrating the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure are conducted to illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
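The classical spectral-representation step this paper builds on can be sketched independently (this is an illustration with an invented target spectrum, not the authors' reduced-variable scheme): a stationary process is synthesized as a sum of cosines with random phases, and its sample variance is checked against the area under the one-sided target power spectral density.

```python
import numpy as np

rng = np.random.default_rng(7)

# One-sided target PSD (invented for illustration): a low-pass bump.
def S(w):
    return 1.0 / (1.0 + (w / 2.0) ** 4)

M = 256                                   # number of spectral terms
w_max = 10.0
dw = w_max / M
w = (np.arange(M) + 0.5) * dw             # midpoint frequencies
phi = rng.uniform(0.0, 2.0 * np.pi, M)    # independent random phases

t = np.arange(0.0, 2000.0, 0.1)
# Spectral representation: x(t) = sum_k sqrt(2 S(w_k) dw) cos(w_k t + phi_k)
x = (np.sqrt(2.0 * S(w) * dw) * np.cos(np.outer(t, w) + phi)).sum(axis=1)

var_target = (S(w) * dw).sum()            # integral of the PSD
var_sample = x.var()                      # long-record sample variance
```

Each cosine term contributes S(w_k) dw to the variance on average, so a long record reproduces the PSD area; the paper's contribution is to replace the M independent phases with random functions of just two elementary random variables while preserving this probability information.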
47 CFR 1.227 - Consolidations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... application. (ii) Domestic public fixed and public mobile. See Rule § 21.31 of this chapter for the... as to amendments of applications. (iii) Public coast stations (Maritime mobile service). See... issues, or (2) Any applications which present conflicting claims, except where a random selection process...
Random walk, diffusion and mixing in simulations of scalar transport in fluid flows
NASA Astrophysics Data System (ADS)
Klimenko, A. Y.
2008-12-01
Physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. In many applied models used in simulation of turbulent transport and turbulent combustion, mixing between particles is used to reflect the influence of the continuous diffusion terms in the transport equations. We show that the continuous scalar transport and diffusion can be accurately specified by means of mixing between randomly walking Lagrangian particles with scalar properties and assess errors associated with this scheme. This gives an alternative formulation for the stochastic process which is selected to represent the continuous diffusion. This paper focuses on statistical errors and deals with relatively simple cases, where one-particle distributions are sufficient for a complete description of the problem.
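The walk-diffusion equivalence this paper relies on is easy to verify numerically. In the sketch below (an illustration, not the authors' particle-mixing scheme) each particle takes independent Gaussian steps of variance 2 D dt, and the spread of the particle cloud after time T is compared with the diffusion prediction var = 2 D T.

```python
import numpy as np

rng = np.random.default_rng(3)

D = 0.5          # diffusivity
dt = 0.01        # time step
steps = 200      # total time T = steps * dt = 2.0
n = 20000        # number of Lagrangian particles

# Gaussian steps of variance 2*D*dt are the random-walk counterpart of the
# diffusion equation dc/dt = D * d2c/dx2.
x = np.zeros(n)
for _ in range(steps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)

T = steps * dt
var_walk = x.var()            # empirical spread of the cloud
var_diff = 2.0 * D * T        # continuous-diffusion prediction
```

With 20,000 particles the statistical error of the variance estimate is about one percent, the kind of sampling error the paper assesses when mixing between randomly walking particles is used to represent continuous diffusion terms.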
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local-error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local-error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.
Hybrid feature selection for supporting lightweight intrusion detection systems
NASA Astrophysics Data System (ADS)
Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin
2017-08-01
Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The set of features selected in the first phase is further refined in the second phase in a wrapper manner, in which a Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on the NSL-KDD datasets show that our approach results in higher detection accuracy as well as faster training and testing processes.
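As a rough illustration of the filter phase described above (a toy dataset of our own, not the authors' NSL-KDD pipeline), each categorical feature can be scored with the chi-square statistic against the class labels and the top-k features retained; the wrapper phase would then refine this subset with a Random Forest:

```python
from collections import Counter

def chi2_score(xs, ys):
    """Chi-square statistic between one categorical feature and the
    class labels: sum over contingency cells of (obs - exp)^2 / exp."""
    n = len(xs)
    fx, fy = Counter(xs), Counter(ys)
    joint = Counter(zip(xs, ys))
    score = 0.0
    for (v, c), obs in joint.items():
        exp = fx[v] * fy[c] / n
        score += (obs - exp) ** 2 / exp
    # empty cells (obs = 0) still contribute their expected count
    for v in fx:
        for c in fy:
            if (v, c) not in joint:
                score += fx[v] * fy[c] / n
    return score

def filter_phase(features, labels, k):
    """Rank features by chi-square and keep the top k (the filter
    phase); a wrapper phase would then refine this subset."""
    ranked = sorted(range(len(features)),
                    key=lambda i: chi2_score(features[i], labels),
                    reverse=True)
    return ranked[:k]

# toy data: feature 0 tracks the label perfectly, feature 1 is noise
labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = [
    [0, 0, 0, 0, 1, 1, 1, 1],   # informative
    [0, 1, 0, 1, 0, 1, 0, 1],   # irrelevant
]
print(filter_phase(features, labels, 1))  # -> [0]
```

Discarding the irrelevant feature before any classifier is trained is what makes the resulting IDS "lightweight": the expensive wrapper search only runs over the pre-filtered subset.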
Temporally selective attention supports speech processing in 3- to 5-year-old children.
Astheimer, Lori B; Sanders, Lisa D
2012-01-01
Recent event-related potential (ERP) evidence demonstrates that adults employ temporally selective attention to preferentially process the initial portions of words in continuous speech. Doing so is an effective listening strategy since word-initial segments are highly informative. Although the development of this process remains unexplored, directing attention to word onsets may be important for speech processing in young children who would otherwise be overwhelmed by the rapidly changing acoustic signals that constitute speech. We examined the use of temporally selective attention in 3- to 5-year-old children listening to stories by comparing ERPs elicited by attention probes presented at four acoustically matched times relative to word onsets: concurrently with a word onset, 100 ms before, 100 ms after, and at random control times. By 80 ms, probes presented at and after word onsets elicited a larger negativity than probes presented before word onsets or at control times. The latency and distribution of this effect is similar to temporally and spatially selective attention effects measured in adults and, despite differences in polarity, spatially selective attention effects measured in children. These results indicate that, like adults, preschool aged children modulate temporally selective attention to preferentially process the initial portions of words in continuous speech. Copyright © 2011 Elsevier Ltd. All rights reserved.
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaicking is an indispensable link in surface measurement and digital terrain map generation. To address the mosaic problem of locally unorganized point clouds with rough registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of this method proceeds sequentially through random sampling with an additional shape constraint, point-cloud data normalization, absolute orientation, data denormalization, inlier counting, etc. After N random sample trials the largest consensus set is selected, and finally the model is re-estimated using all the points in the selected subset. The minimal subset consists of three non-collinear points forming a triangle. The shape of the triangle is taken into account during random sampling to keep the sample selection reasonable. A new coordinate system transformation algorithm presented in this paper is used to avoid singularities. The whole rotation between the two coordinate systems is solved by two successive rotations expressed as Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to prove the correctness and validity of this mosaic method. The method has good noise immunity due to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization to the absolute orientation. It is applicable to high-precision measurement of three-dimensional surfaces as well as 3-D terrain mosaicking.
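The sample-fit-count-re-estimate loop described above is the generic RANSAC skeleton. The sketch below illustrates that skeleton on a deliberately simpler model (a 2D line fitted from two sampled points) rather than the paper's three-point absolute orientation; the degenerate-sample check stands in for the non-collinearity/shape constraint on the sampled triangle:

```python
import random

def ransac_line(points, n_trials=200, tol=0.1, seed=0):
    """Generic RANSAC loop: sample a minimal subset, fit a model,
    count inliers, keep the largest consensus set, then re-estimate
    the model by least squares from all points in that set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                     # degenerate sample: reject it
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # least-squares re-estimate from the full consensus set
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers); sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b, n

pts = [(x * 0.1, 2.0 * x * 0.1 + 1.0) for x in range(20)]   # inliers on y = 2x + 1
pts += [(0.5, 9.0), (1.1, -4.0), (0.3, 7.5)]                # gross outliers
a, b, k = ransac_line(pts)
print(a, b, k)  # recovers slope 2, intercept 1 from 20 inliers
```

The robustness the abstract claims comes from the consensus step: the gross outliers never enter the final least-squares estimate, because no line through an outlier can attract a consensus set as large as the true surface overlap.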
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
NASA Astrophysics Data System (ADS)
Helmy*, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm based on a randomized selection policy has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turnaround Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the least AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
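The AWT/ATT comparison can be reproduced in miniature. The sketch below uses made-up CPU burst times, not PicOS workloads: it runs a non-preemptive job order and reports Average Waiting Time and Average Turnaround Time for a first-come-first-served order and for one draw of a randomized selection policy:

```python
import random

def schedule_metrics(bursts, order):
    """Run jobs non-preemptively (all arriving at t = 0) in the given
    order; return (average waiting time, average turnaround time)."""
    t, wait, turn = 0, 0, 0
    for i in order:
        wait += t          # job i waits until the CPU frees up
        t += bursts[i]
        turn += t          # completion time = turnaround, since arrival is 0
    n = len(bursts)
    return wait / n, turn / n

bursts = [8, 3, 12, 5]                  # hypothetical CPU bursts, not PicOS data
fcfs = list(range(len(bursts)))         # first-come, first-served order
rng = random.Random(42)
rnd = rng.sample(fcfs, len(fcfs))       # one draw of the randomized policy
print(schedule_metrics(bursts, fcfs))   # -> (10.5, 17.5)
print(schedule_metrics(bursts, rnd))
```

Averaging such runs over many random draws is exactly how a simulation study would estimate the expected AWT and ATT of the randomized policy against fixed-order baselines.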
A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges
Wang, Xu; Sun, Baitao
2014-01-01
Load combination of earthquakes and heavy trucks is an important issue in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with distinct characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which accounts for load duration and occurrence probability, describes well how random processes are converted to random variables and combined, but it imposes strict constraints on time-interval selection to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with the other load at its instantaneous (or mean) value, which looks more rational, but its results are generally unconservative. Therefore, a modified model is presented here that combines the advantages of the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability, which converts random processes to random variables relatively easily and accounts for the non-maximum load in combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
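Turkstra-style unconservatism can be illustrated with a toy Monte Carlo. The pulse processes below are illustrative Exp(1) draws, not earthquake or truck load models: the rule pairs one load's lifetime maximum with the other load's mean, and its estimate falls below the simulated maximum of the combined process:

```python
import random

def compare_rules(n_intervals=500, trials=500, seed=11):
    """Toy pulse-process comparison: two loads take fresh Exp(1)
    values in each of n_intervals equal intervals of the service life.
    Turkstra's rule pairs one load's lifetime maximum with the other
    load's mean value (1.0 for Exp(1)); the true combined maximum is
    taken over the interval-wise sums."""
    rng = random.Random(seed)
    true_sum, turkstra_sum = 0.0, 0.0
    for _ in range(trials):
        l1 = [rng.expovariate(1.0) for _ in range(n_intervals)]
        l2 = [rng.expovariate(1.0) for _ in range(n_intervals)]
        true_sum += max(a + b for a, b in zip(l1, l2))
        turkstra_sum += max(max(l1) + 1.0, max(l2) + 1.0)
    return true_sum / trials, turkstra_sum / trials

t_true, t_turk = compare_rules()
print(t_turk < t_true)  # -> True: Turkstra's estimate is unconservative here
```

The gap arises because the true maximum can occur in an interval where neither load is individually at its lifetime maximum, which is precisely the non-maximum effect the modified conditional-probability model is built to capture.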
Random-access scanning microscopy for 3D imaging in awake behaving animals
Nadella, K. M. Naga Srinivas; Roš, Hana; Baragli, Chiara; Griffiths, Victoria A.; Konstantinou, George; Koimtzis, Theo; Evans, Geoffrey J.; Kirkby, Paul A.; Silver, R. Angus
2018-01-01
Understanding how neural circuits process information requires rapid measurements from identified neurons distributed in 3D space. Here we describe an acousto-optic lens two-photon microscope that performs high-speed focussing and line-scanning within a volume spanning hundreds of micrometres. We demonstrate its random access functionality by selectively imaging cerebellar interneurons sparsely distributed in 3D and by simultaneously recording from the soma, proximal and distal dendrites of neocortical pyramidal cells in behaving mice. PMID:27749836
School Counselors and Child Abuse Reporting: A National Survey
ERIC Educational Resources Information Center
Bryant, Jill K.
2009-01-01
A study was done to investigate school counselors' child abuse reporting behaviors and perceptions regarding the child abuse reporting process. Participants were randomly selected from the American School Counselor Association membership database with 193 school counselors returning questionnaires. Overall, school counselors indicated that they…
A randomized controlled trial of an electronic informed consent process.
Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R
2014-12-01
A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants recruited for the parent RCT were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). The electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and whom to contact with questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding of some aspects of a research study. © The Author(s) 2014.
10 CFR 431.383 - Enforcement process for electric motors.
Code of Federal Regulations, 2014 CFR
2014-01-01
... general purpose electric motor of equivalent electrical design and enclosure rather than replacing the... equivalent electrical design and enclosure rather than machining and attaching an endshield. ... sample of up to 20 units will then be randomly selected from one or more subdivided groups within the...
ERIC Educational Resources Information Center
Reha, Rose K.
To determine what interviewers perceived to be important factors in the interviewing process, and whether the sex of the interviewer or the type of organization he or she worked for influenced these perceptions, a questionnaire was administered to 42 personnel managers from randomly selected places of business and government offices.…
Constructing high complexity synthetic libraries of long ORFs using in vitro selection
NASA Technical Reports Server (NTRS)
Cho, G.; Keefe, A. D.; Liu, R.; Wilson, D. S.; Szostak, J. W.
2000-01-01
We present a method that can significantly increase the complexity of protein libraries used for in vitro or in vivo protein selection experiments. Protein libraries are often encoded by chemically synthesized DNA, in which part of the open reading frame is randomized. There are, however, major obstacles associated with the chemical synthesis of long open reading frames, especially those containing random segments. Insertions and deletions that occur during chemical synthesis cause frameshifts, and stop codons in the random region will cause premature termination. These problems can together greatly reduce the number of full-length synthetic genes in the library. We describe a strategy in which smaller segments of the synthetic open reading frame are selected in vitro using mRNA display for the absence of frameshifts and stop codons. These smaller segments are then ligated together to form combinatorial libraries of long uninterrupted open reading frames. This process can increase the number of full-length open reading frames in libraries by up to two orders of magnitude, resulting in protein libraries with complexities of greater than 10^13. We have used this methodology to generate three types of displayed protein library: a completely random sequence library, a library of concatemerized oligopeptide cassettes with a propensity for forming amphipathic alpha-helical or beta-strand structures, and a library based on one of the most common enzymatic scaffolds, the alpha/beta (TIM) barrel. Copyright 2000 Academic Press.
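A back-of-the-envelope calculation shows why stop codons alone make long random open reading frames rare, and why pre-selecting short segments before ligation helps so much. This toy estimate ignores frameshifts from synthesis errors and is not the authors' analysis:

```python
def stop_free_fraction(n_codons):
    """Fraction of fully random coding sequences carrying no in-frame
    stop codon: 3 of the 64 codons terminate translation, so each
    codon position is stop-free with probability 61/64 (frameshifts
    from synthesis errors are ignored in this toy estimate)."""
    return (61 / 64) ** n_codons

# A 100-codon random region is stop-free under 1% of the time, while a
# 25-codon segment survives about 30% of the time -- so selecting short
# segments first, and ligating only the survivors, yields assembled ORFs
# that are all full length.
print(round(stop_free_fraction(100), 4))  # -> 0.0082
print(round(stop_free_fraction(25), 2))   # -> 0.3
```

The ratio between the two regimes (every ligated ORF full length versus under 1% full length) is consistent in spirit with the up-to-two-orders-of-magnitude gain reported above, before frameshifts are even counted.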
Random walks on activity-driven networks with attractiveness
NASA Astrophysics Data System (ADS)
Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola
2017-05-01
Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.
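A minimal sketch of the kind of process studied above can be written under strong simplifying assumptions (uniform activity, and a walker that only follows links wired by its own node, which is a simplification of the paper's model). It shows how heterogeneous attractiveness shapes where the walker spends its time:

```python
import random

def walk_visits(activity, attract, steps=4000, seed=7):
    """Random walk on a sketch of an activity-driven network with
    attractiveness: each step the walker's node activates with
    probability `activity` and wires a link to a node drawn with
    probability proportional to attract[j]; the walker follows it."""
    rng = random.Random(seed)
    n = len(attract)
    visits = [0] * n
    walker = 0
    for _ in range(steps):
        if rng.random() < activity:
            j = rng.choices(range(n), weights=attract)[0]
            if j != walker:
                walker = j
        visits[walker] += 1
    return visits

v = walk_visits(0.5, [1, 1, 1, 7])
print(v.index(max(v)))  # the most attractive node hosts the walker most often
```

Even in this stripped-down variant the stationary occupation is proportional to attractiveness, which is the qualitative effect the paper derives analytically for the full time-varying model.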
Process to Selectively Distinguish Viable from Non-Viable Bacterial Cells
NASA Technical Reports Server (NTRS)
LaDuc, Myron T.; Bernardini, Jame N.; Stam, Christina N.
2010-01-01
The combination of ethidium monoazide (EMA) and post-fragmentation, randomly primed DNA amplification technologies will enhance the analytical capability to discern viable from non-viable bacterial cells in spacecraft-related samples. Intercalating agents have been widely used since the inception of molecular biology to stain and visualize nucleic acids. Only recently have intercalating agents such as EMA been exploited to selectively distinguish viable from dead bacterial cells. Intercalating dyes can only penetrate the membranes of dead cells. Once through the membrane and inside the cell, they intercalate DNA and, upon photolysis with visible light, produce stable DNA monoadducts. Once the DNA is crosslinked, it becomes insoluble and unable to be fragmented for post-fragmentation, randomly primed DNA library formation. Viable organisms' DNA remains unaffected by the intercalating agents, allowing for amplification via post-fragmentation, randomly primed technologies. This results in the ability to carry out downstream nucleic acid-based analyses on viable microbes to the exclusion of all non-viable cells.
The impact of innovation intermediary on knowledge transfer
NASA Astrophysics Data System (ADS)
Lin, Min; Wei, Jun
2018-07-01
Many firms have opened up their innovation processes and actively transfer knowledge with external partners in the market for technology. To reduce some of the market inefficiencies, more and more firms collaborate with innovation intermediaries. In light of the increasing importance of intermediaries in the context of open innovation, in this paper we systematically investigate the effect of an innovation intermediary on knowledge transfer and the innovation process in networked systems. We find that the existence of an innovation intermediary is conducive to knowledge diffusion and facilitates knowledge growth at the system level. Interestingly, the scale of the innovation intermediary has little effect on the growth of knowledge. We further investigate the selection of intermediary members by comparing four selection strategies: random selection, initial-knowledge-level-based selection, absorptive-capability-based selection, and innovative-ability-based selection. It is found that the strategy based on innovative ability outperforms all the others in promoting system knowledge growth. Our study provides a theoretical understanding of the impact of innovation intermediaries on knowledge transfer and sheds light on the design and selection of innovation intermediaries in open innovation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardin, John D; Baca, Allen G
This paper presents the mechanical design, fabrication, and dynamic testing of an electrostatic analyzer spacecraft instrument. The functional and environmental requirements, combined with limited spacecraft accommodations, resulted in complex component geometries, unique material selections, and difficult fabrication processes. The challenging aspects of the mechanical design and several of the more difficult production processes are discussed. In addition, the successes, failures, and lessons learned from acoustic and random vibration testing of a full-scale prototype instrument are presented.
ERIC Educational Resources Information Center
Moser, Gene W.
Reported is one of a series of investigations of the Project on an Information Memory Model. This study was done to test an information memory model for identifying the unit of information structure involved in task cognitions by humans. Four groups of 30 randomly selected subjects (ages 7, 9, 11 and 15 years) performed a sorting task of 14…
ERIC Educational Resources Information Center
Fessehatsion, Petros Woldu
2017-01-01
The research examined the role of the school principal in facilitating change in the teaching-learning process. Moreover, it focused on the main roles of the principal in implementing LCIP. The research employed both quantitative and qualitative methods. The study used a random sample of 62 teachers from five purposefully selected junior schools…
Context-specific control and the Stroop negative priming effect.
Milliken, Bruce; Thomson, David R; Bleile, Karmen; MacLellan, Ellen; Giammarco, Maria
2012-01-01
The present study highlights the utility of context-specific learning for different probe types in accounting for the commonly observed dependence of negative priming on probe selection. Using a Stroop priming procedure, Experiments 1a and 1b offered a demonstration that Stroop priming effects can differ qualitatively for selection and no-selection probes when probe selection is manipulated between subjects, but not when it is manipulated randomly from trial to trial within subject (see also Moore, 1994). In Experiments 2 and 3, selection and no-selection probes served as two contexts that varied randomly from trial to trial, but for which proportion repeated was manipulated separately. A context-specific proportion repeated effect was observed in Experiment 2, characterized by modest quantitative shifts in the repetition effects as a function of the context-specific proportion repeated manipulation. However, with a longer intertrial interval in Experiment 3, a context-specific proportion repeated manipulation that focused on the no-selection probes changed the repetition effect qualitatively, from negative priming when the proportion repeated was .25 to positive priming when the proportion repeated was .75. The results are discussed with reference to the role of rapid, context-specific learning processes in the integration of prior experiences with current perception and action.
47 CFR 1.227 - Consolidations.
Code of Federal Regulations, 2014 CFR
2014-10-01
...(a), (b), and (d), and § 80.374 of this chapter) mutual exclusivity will occur if the later... and Issues § 1.227 Consolidations. (a) The Commission, upon motion or upon its own motion, will, where... issues, or (2) Any applications which present conflicting claims, except where a random selection process...
47 CFR 1.227 - Consolidations.
Code of Federal Regulations, 2013 CFR
2013-10-01
...(a), (b), and (d), and § 80.374 of this chapter) mutual exclusivity will occur if the later... and Issues § 1.227 Consolidations. (a) The Commission, upon motion or upon its own motion, will, where... issues, or (2) Any applications which present conflicting claims, except where a random selection process...
47 CFR 1.227 - Consolidations.
Code of Federal Regulations, 2012 CFR
2012-10-01
...(a), (b), and (d), and § 80.374 of this chapter) mutual exclusivity will occur if the later... and Issues § 1.227 Consolidations. (a) The Commission, upon motion or upon its own motion, will, where... issues, or (2) Any applications which present conflicting claims, except where a random selection process...
Mindful Listening Instruction: Does It Make a Difference
ERIC Educational Resources Information Center
Anderson, William Todd
2013-01-01
This study examines the effect of mindfulness on student listening. Mindfulness is defined as "the process of noticing novel distinctions." Fifth grade students (N = 38) at a single school participated in this study, which used a posttest-only, random selection experimental design. The Independent Variable was exposure to mindful…
Homework Motivation and Preferences of Turkish Students
ERIC Educational Resources Information Center
Iflazoglu, Ayten; Hong, Eunsook
2012-01-01
Turkish students' motivation sources, organisational approaches, physical needs and environmental and interpersonal preferences during the homework process were examined in 1776 students in Grades 5-8 from 10 randomly selected schools in two districts of a major urban city in Turkey. These constructs were examined to determine grade, gender,…
Moving a randomized clinical trial into an observational cohort.
Goodman, Phyllis J; Hartline, Jo Ann; Tangen, Catherine M; Crowley, John J; Minasian, Lori M; Klein, Eric A; Cook, Elise D; Darke, Amy K; Arnold, Kathryn B; Anderson, Karen; Yee, Monica; Meyskens, Frank L; Baker, Laurence H
2013-02-01
The Selenium and Vitamin E Cancer Prevention Trial (SELECT) was a randomized, double-blind, placebo-controlled prostate cancer prevention study funded by the National Cancer Institute (NCI) and conducted by the Southwest Oncology Group (SWOG). A total of 35,533 men were assigned randomly to one of the four treatment groups (vitamin E + placebo, selenium + placebo, vitamin E + selenium, and placebo + placebo). The independent Data and Safety Monitoring Committee (DSMC) recommended the discontinuation of study supplements because of the lack of efficacy for risk reduction and because futility analyses demonstrated no possibility of benefit of the supplements to the anticipated degree (25% reduction in prostate cancer incidence) with additional follow-up. Study leadership agreed that the randomized trial should be terminated but believed that the cohort should be maintained and followed as the additional follow-up would contribute important information to the understanding of the biologic consequences of the intervention. Since the participants no longer needed to be seen in person to assess acute toxicities or to be given study supplements, it was determined that the most efficient and cost-effective way to follow them was via a central coordinated effort. A number of changes were necessary at the local Study Sites and SELECT Statistical Center to transition to following participants via a Central Coordinating Center. We describe the transition process from a randomized clinical trial to the observational Centralized Follow-Up (CFU) study. 
The process of transitioning SELECT, implemented at more than 400 Study Sites across the United States, Canada, and Puerto Rico, entailed many critical decisions and actions including updates to online documents such as the SELECT Workbench and Study Manual, a protocol amendment, reorganization of the Statistical Center, creation of a Transition Committee, development of materials for SELECT Study Sites, development of procedures to close Study Sites, and revision of data collection procedures and the process by which to contact participants. At the time of the publication of the primary SELECT results in December 2008, there were 32,569 men alive and currently active in the trial. As of 31 December 2011, 17,761 participants had been registered to the CFU study. This number is less than had been anticipated due to unforeseen difficulties with local Study Site institutional review boards (IRBs). However, from this cohort, we estimate that an additional 580 prostate cancer cases and 215 Gleason 7 or higher grade cancers will be identified. Over 109,000 individual items have been mailed to participants. Active SELECT ancillary studies have continued. The substantial SELECT biorepository is available to researchers; requests to use the specimens are reviewed for feasibility and scientific merit. As of April 2012, 12 proposals had been approved. The accrual goal of the follow-up study was not met, limiting our power to address the study objectives satisfactorily. The CFU study is also dependent on a number of factors including continued funding, continued interest of investigators in the biorepository, and the continued contribution of the participants. Our experience may be less pertinent to investigators who wish to follow participants in a treatment trial or participants in prevention trials in other medical areas. Extended follow-up of participants in prevention research is important to study the long-term effects of the interventions, such as those used in SELECT. 
The approach taken by SELECT investigators was to continue to follow participants centrally via an annual questionnaire and with a web-based option. The participants enrolled in the CFU study represent a large, well-characterized, generally healthy cohort. The CFU has enabled us to collect additional prostate and other cancer endpoints and longer follow-up on the almost 18,000 participants enrolled. The utility of the extensive biorepository that was developed during the course of the SELECT is enhanced by longer follow-up.
Multiple filters affect tree species assembly in mid-latitude forest communities.
Kubota, Y; Kusumoto, B; Shiono, T; Ulrich, W
2018-05-01
Species assembly patterns of local communities are shaped by the balance between multiple abiotic/biotic filters and dispersal, both of which select individuals from regional-scale species pools. Knowledge of functional assembly can provide insight into the relative importance of the deterministic and stochastic processes that shape species assembly. We evaluated the hierarchical roles of α and β niches by analyzing the influence of environmental filtering on functional traits across geographical patterns of tree species assembly in mid-latitude forests. Using forest plot datasets, we examined the α niche traits (leaf and wood traits) and β niche properties (cold/drought tolerance) of tree species, and tested the non-randomness (clustering/overdispersion) of trait assembly with null models that assumed two types of species pools related to biogeographical regions. For most plots, species assembly patterns fell within the range of random expectation. However, particularly for the cold/drought tolerance-related β niche properties, deviations from randomness were frequently found; non-random clustering was predominant at higher latitudes with harsh climates. Our findings demonstrate that both randomness and non-randomness in trait assembly emerged as a result of the α and β niches, although we suggest a potential role for dispersal processes and/or species equalization through trait similarities in generating the prevalence of randomness. Clustering of β niche traits along latitudinal climatic gradients provides clear evidence of species sorting by the filtering of particular traits. Our results reveal that multiple filters acting through functional niches, together with stochastic processes, jointly shape geographical patterns of species assembly across mid-latitude forests.
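The null-model logic above can be sketched as a simple permutation test. The traits and pool below are invented for illustration, not the paper's forest-plot data: an observed community's trait variance is compared with variances of equally sized random draws from the regional species pool, and a small p-value flags clustering beyond random expectation:

```python
import random
from statistics import pvariance

def clustering_p(community_traits, pool_traits, n_null=999, seed=3):
    """Null-model test for trait clustering: compare the observed
    community's trait variance with variances of random, equally
    sized draws from the species pool; small p means the community
    is more tightly clustered than random assembly would produce."""
    rng = random.Random(seed)
    k = len(community_traits)
    obs = pvariance(community_traits)
    null = [pvariance(rng.sample(pool_traits, k)) for _ in range(n_null)]
    # p = fraction of null assemblages at least as clustered as observed
    return (1 + sum(v <= obs for v in null)) / (n_null + 1)

pool = [float(t) for t in range(40)]       # hypothetical pool of cold tolerances
clustered = [10.0, 11.0, 12.0, 13.0]       # narrow trait range: filtered community
random_comm = [2.0, 15.0, 27.0, 38.0]      # spread across the pool
print(clustering_p(clustered, pool) < 0.05)   # -> True
print(clustering_p(random_comm, pool) < 0.05) # -> False
```

Choosing the pool (biogeographical region) changes the null distribution and hence the verdict, which is why the paper tests two species-pool definitions.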
The statistics of Pearce element diagrams and the Chayes closure problem
NASA Astrophysics Data System (ADS)
Nicholls, J.
1988-05-01
Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of the significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones; they are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near-zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to those of a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array.
Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.
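The closure effect at the heart of the Chayes problem can be demonstrated numerically: ratios that share a denominator are correlated even when the underlying elements are statistically independent. A minimal sketch, with synthetic, uniformly distributed "element" concentrations (all distributions and values here are arbitrary illustrations, not the paper's data):

```python
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(1)
n = 2000
# Three independent, positive "element" concentrations A, B, Z.
A = [random.uniform(5, 10) for _ in range(n)]
B = [random.uniform(5, 10) for _ in range(n)]
Z = [random.uniform(1, 2) for _ in range(n)]

# Ratios sharing the denominator Z correlate spuriously...
r_shared = pearson([a / z for a, z in zip(A, Z)],
                   [b / z for b, z in zip(B, Z)])
# ...while the raw, independent variables do not.
r_raw = pearson(A, B)
```

The spurious correlation induced purely by the shared denominator is substantial, which is why a conserved-element denominator (and the random-array populations described above) is needed before correlation tests on ratio diagrams can be trusted.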
On the information content of hydrological signatures and their relationship to catchment attributes
NASA Astrophysics Data System (ADS)
Addor, Nans; Clark, Martyn P.; Prieto, Cristina; Newman, Andrew J.; Mizukami, Naoki; Nearing, Grey; Le Vine, Nataliya
2017-04-01
Hydrological signatures, which are indices characterizing hydrologic behavior, are increasingly used for the evaluation, calibration and selection of hydrological models. Their key advantage is that they provide more direct insight into specific hydrological processes than aggregated metrics (e.g., the Nash-Sutcliffe efficiency). A plethora of signatures now exists; they enable the characterization of a variety of hydrograph features, but their number also makes the selection of signatures for new studies challenging. Here we propose that the selection of signatures should be based on their information content, which we estimated using several approaches, all leading to similar conclusions. To explore the relationship between hydrological signatures and the landscape, we extended a previously published data set of hydrometeorological time series for 671 catchments in the contiguous United States by characterizing the climatic conditions, topography, soil, vegetation and stream network of each catchment. This new catchment attributes data set will soon be in open access, and we look forward to introducing it to the community. We used this data set in a data-learning algorithm (random forests) to explore whether hydrological signatures could be inferred from catchment attributes alone. We find that some signatures can be predicted remarkably well by random forests and, interestingly, the same signatures are well captured when simulating discharge using a conceptual hydrological model. We discuss what this result reveals about our understanding of the hydrological processes shaping hydrological signatures. We also identify which catchment attributes exert the strongest control on catchment behavior, in particular during extreme hydrological events. Overall, climatic attributes have the most significant influence, and strongly condition how well hydrological signatures can be predicted by random forests and simulated by the hydrological model.
In contrast, soil characteristics at the catchment scale are not found to be significant predictors by random forests, which raises questions on how to best use soil data for hydrological modeling, for instance for parameter estimation. We finally demonstrate that signatures with high spatial variability are poorly captured by random forests and model simulations, which makes their regionalization delicate. We conclude with a ranking of signatures based on their information content, and propose that the signatures with high information content are best suited for model calibration, model selection and understanding hydrologic similarity.
Top-down knowledge modulates onset capture in a feedforward manner.
Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E
2017-04-01
How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual-stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post-stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual-stage account in two eye-tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150 ms latencies), without incurring any costs for the selection of target-matching distractors. These results unambiguously support a feedforward account of top-down modulation.
This report describes field work and data analysis results comparing a design based on systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...
Feature-selective attention in healthy old age: a selective decline in selective attention?
Quigley, Cliodhna; Müller, Matthias M
2014-02-12
Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.
Optimizing the availability of a buffered industrial process
Martz, Jr., Harry F.; Hamada, Michael S.; Koehler, Arthur J.; Berg, Eric C.
2004-08-24
A computer-implemented process determines optimum configuration parameters for a buffered industrial process. A population size is initialized by randomly selecting a first set of design and operation values associated with subsystems and buffers of the buffered industrial process to form a set of operating parameters for each member of the population. An availability discrete event simulation (ADES) is performed on each member of the population to determine the product-based availability of each member. A new population is formed having members with a second set of design and operation values related to the first set of design and operation values through a genetic algorithm and the product-based availability determined by the ADES. Subsequent population members are then determined by iterating the genetic algorithm with product-based availability determined by ADES to form improved design and operation values from which the configuration parameters are selected for the buffered industrial process.
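The optimization loop described in this abstract (random initial population → evaluate availability → genetic recombination → iterate) can be sketched in miniature. Here the availability function is a toy quadratic stand-in for the ADES simulation, and the design encoding (a buffer size and a service rate, with a hypothetical optimum at 5 and 0.8) is invented for illustration:

```python
import random

def availability(design):
    """Toy stand-in for the ADES simulation: highest availability at a
    buffer size of 5 and a service rate of 0.8 (hypothetical optimum)."""
    buf, rate = design
    return 1.0 - 0.01 * (buf - 5) ** 2 - 0.5 * (rate - 0.8) ** 2

def random_design():
    return (random.randint(0, 10), random.uniform(0.0, 1.0))

def crossover_mutate(p1, p2):
    """Recombine two parent designs and apply a small mutation."""
    buf = random.choice((p1[0], p2[0])) + random.choice((-1, 0, 1))
    rate = 0.5 * (p1[1] + p2[1]) + random.gauss(0, 0.05)
    return (max(0, min(10, buf)), max(0.0, min(1.0, rate)))

def optimize(pop_size=30, generations=40):
    random.seed(0)
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=availability, reverse=True)
        parents = scored[: pop_size // 2]                 # selection
        children = [crossover_mutate(random.choice(parents),
                                     random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                          # elitism + offspring
    return max(pop, key=availability)

best = optimize()
```

In the patented process the fitness evaluation is a full discrete event simulation rather than a closed-form function, but the selection/recombination skeleton is the same.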
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
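The two concepts the activity targets can be separated in a few lines of code: random selection determines which units enter the study at all, while random assignment determines which arm each selected unit receives. A minimal sketch with a hypothetical population of 100 labelled units:

```python
import random

random.seed(42)
population = list(range(1, 101))   # hypothetical: 100 units, labelled 1-100

# Random selection: draw which units enter the study.
sample = random.sample(population, 20)

# Random assignment: split the selected units into treatment arms.
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:10], shuffled[10:]
```

Selection supports generalization to the population; assignment supports causal comparison between arms, which is the distinction the activity is built around.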
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
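The Dirichlet process prior underlying the model above can be sampled by stick-breaking. A minimal, truncated sketch (the concentration parameter and the Gaussian base measure are arbitrary illustrative choices, not the paper's):

```python
import random

def stick_breaking(alpha, n_atoms, base_draw, seed=0):
    """Truncated Dirichlet process draw via stick-breaking:
    w_k = v_k * prod_{j<k} (1 - v_j), with v_k ~ Beta(1, alpha)."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)   # break off a piece of the stick
        remaining *= 1.0 - v            # what is left to break
    atoms = [base_draw(rng) for _ in range(n_atoms)]
    return atoms, weights

# Base measure: standard normal (an assumption for illustration).
atoms, weights = stick_breaking(alpha=2.0, n_atoms=200,
                                base_draw=lambda rng: rng.gauss(0.0, 1.0))
```

The resulting discrete random measure places weight `weights[k]` on `atoms[k]`; the paper's dependent-DP construction ties such measures together across groups via a design vector.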
Cohen-Khait, Ruth; Schreiber, Gideon
2018-04-27
Protein-protein interactions mediate the vast majority of cellular processes. Though protein interactions obey basic chemical principles also within the cell, the in vivo physiological environment may not allow for equilibrium to be reached. Thus, in vitro measured thermodynamic affinity may not provide a complete picture of protein interactions in the biological context. Binding kinetics composed of the association and dissociation rate constants are relevant and important in the cell. Therefore, changes in protein-protein interaction kinetics have a significant impact on the in vivo activity of the proteins. The common protocol for the selection of tighter binders from a mutant library selects for protein complexes with slower dissociation rate constants. Here we describe a method to specifically select for variants with faster association rate constants by using pre-equilibrium selection, starting from a large random library. Toward this end, we refine the selection conditions of a TEM1-β-lactamase library against its natural nanomolar affinity binder β-lactamase inhibitor protein (BLIP). The optimal selection conditions depend on the ligand concentration and on the incubation time. In addition, we show that a second sort of the library helps to separate signal from noise, resulting in a higher percent of faster binders in the selected library. Fast associating protein variants are of particular interest for drug development and other biotechnological applications.
Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa
2008-03-01
Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be most effective for recruiting schools into the study and assigning them to conditions. In 1 procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization, and in the other (match/randomize and recruitment), we would match schools and randomize them prior to recruitment. We considered how each procedure impacted the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, the equivalence of both treatment and control group schools and the participating and nonparticipating schools on school demographic variables was evaluated. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (eg, readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness to nonparticipating schools.
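The matching-then-randomizing step of an MP-RCT can be sketched: order schools on a matching covariate, pair adjacent schools, and flip a coin within each pair. The school names and the enrolment covariate below are hypothetical:

```python
import random

def matched_pair_randomize(schools, covariate, seed=7):
    """Sort schools on the matching covariate, pair adjacent schools,
    then randomly assign one school of each pair to treatment."""
    rng = random.Random(seed)
    ordered = sorted(schools, key=covariate)
    assignment = {}
    for i in range(0, len(ordered) - 1, 2):
        pair = [ordered[i], ordered[i + 1]]
        rng.shuffle(pair)                     # the coin flip within the pair
        assignment[pair[0]] = "treatment"
        assignment[pair[1]] = "control"
    return assignment

# Hypothetical schools with enrolment as the matching variable.
schools = {"A": 320, "B": 340, "C": 800, "D": 790, "E": 510, "F": 495}
arms = matched_pair_randomize(list(schools), schools.get)
```

Under the "recruit and match/randomize" procedure the authors chose, this step runs only after all schools have been recruited, so every school already knows it will accept its assigned condition.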
Application of random effects to the study of resource selection by animals
Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.
2006-01-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry
ERIC Educational Resources Information Center
Stier, Sam
2010-01-01
Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of that population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…
78 FR 20320 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
...: select from for a random sample, get the survey to the appropriate respondent, and increase response rates. The survey will not be added to this package; instead, it will be processed under a different... Medicaid Services is requesting clearance for two surveys to aid in understanding levels of awareness and...
Many States Include Evolution Questions on Assessments
ERIC Educational Resources Information Center
Cavanagh, Sean
2005-01-01
The theory of evolution, pioneered most famously by Charles Darwin, posits that humans and other living creatures have descended from common ancestors over time through a process of random mutation and natural selection. It is widely considered to be a pillar of modern biology. Over the past year, however, public education has been roiled by…
National Board Certified Physical Educators: Perceived Changes Related to the Certification Process
ERIC Educational Resources Information Center
Woods, Amelia Mays; Rhoades, Jesse Lee
2012-01-01
In this study, we examined National Board certified physical education teachers' (NBCPETs) perceptions of change as a result of certification. Randomly selected NBCPETs (65; women = 53, men = 12) were interviewed. Analysis was done through the lens of Lawson's (1989) Model of the Interactive Factors Influencing Workplace Conditions for the…
Internet Usage Habits as a Part of Distance Higher Education
ERIC Educational Resources Information Center
Tufan, Firat
2016-01-01
Within the scope of this study, which deals with distance education method as a communication process, a focus group interview was conducted with voluntary students who were randomly selected from various areas/majors at the Department of Distance Education in Istanbul University in order to determine the relationship between their general…
Computers in the Schools: How Will Educators Cope with the Revolution?
ERIC Educational Resources Information Center
Gleason, Gerald T.; Reed, Timothy
A study was implemented to conduct a long-range observation and analysis of the process by which computers are channeled into educational practice. Data collection involved a structured interview with knowledgeable representatives of 35 school districts in Wisconsin. Participating schools were selected randomly and stratified by size. Questions in…
NASA Astrophysics Data System (ADS)
Tibell, Lena A. E.; Harms, Ute
2017-11-01
Modern evolutionary theory is both a central theory and an integrative framework of the life sciences. This is reflected in the common references to evolution in modern science education curricula and contexts. In fact, evolution is a core idea that is supposed to support biology learning by facilitating the organization of relevant knowledge. In addition, evolution can function as a pivotal link between concepts and highlight similarities in the complexity of biological concepts. However, empirical studies in many countries have for decades identified deficiencies in students' scientific understanding of evolution mainly focusing on natural selection. Clearly, there are major obstacles to learning natural selection, and we argue that to overcome them, it is essential to address explicitly the general abstract concepts that underlie the biological processes, e.g., randomness or probability. Hence, we propose a two-dimensional framework for analyzing and structuring teaching of natural selection. The first—purely biological—dimension embraces the three main principles variation, heredity, and selection structured in nine key concepts that form the core idea of natural selection. The second dimension encompasses four so-called thresholds, i.e., general abstract and/or non-perceptual concepts: randomness, probability, spatial scales, and temporal scales. We claim that both of these dimensions must be continuously considered, in tandem, when teaching evolution in order to allow development of a meaningful understanding of the process. Further, we suggest that making the thresholds tangible with the aid of appropriate kinds of visualizations will facilitate grasping of the threshold concepts, and thus, help learners to overcome the difficulties in understanding the central theory of life.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
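One simple optimal-subset strategy of the kind compared in this study is greedy maximin selection: repeatedly add the simulation farthest, in climate-variable space, from those already chosen, so the subset spreads over the ensemble. A sketch on a synthetic ensemble of (ΔT, ΔP) members (the two climate variables, Euclidean distance, and ensemble size are illustrative assumptions, not the study's setup):

```python
import math
import random

def maximin_subset(points, k):
    """Greedily pick k points that spread over the ensemble: seed with
    the two most distant points, then repeatedly add the point whose
    nearest already-selected neighbour is farthest away."""
    best_pair = max(((i, j) for i in range(len(points))
                     for j in range(i + 1, len(points))),
                    key=lambda ij: math.dist(points[ij[0]], points[ij[1]]))
    chosen = set(best_pair)
    while len(chosen) < k:
        nxt = max((i for i in range(len(points)) if i not in chosen),
                  key=lambda i: min(math.dist(points[i], points[j])
                                    for j in chosen))
        chosen.add(nxt)
    return sorted(chosen)

random.seed(3)
# 50 synthetic simulations described by (delta-T in K, delta-P in %) changes.
ensemble = [(random.gauss(2.5, 1.0), random.gauss(0.0, 10.0)) for _ in range(50)]
subset = maximin_subset(ensemble, 8)
```

The study's finding is precisely that covering the spread in climate-variable space this way does not guarantee covering the spread of the downstream hydrological impacts.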
Joint Waveform Optimization and Adaptive Processing for Random-Phase Radar Signals
2014-01-01
extended targets," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 1, pp. 42-55, June 2007. [2] S. Sen and A. Nehorai, "OFDM MIMO ... radar compared to traditional waveforms. I. INTRODUCTION There has been much recent interest in waveform design for multiple-input, multiple-output (MIMO) ... amplitude. When the resolution capability of the MIMO radar system is of interest, the transmit waveform can be designed to sharpen the radar ambiguity
Borak, T B
1986-04-01
Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground U miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criteria that the annual exposure be determined in working level months to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
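The randomized scheme can be sketched: draw the roughly 50 measurement occasions uniformly at random over the working year and average the observed concentrations. The concentration series below is synthetic (a seasonal cycle plus noise in arbitrary working-level units); it is only meant to show that a 50-sample random design easily meets the ±50% criterion stated above:

```python
import math
import random
import statistics

random.seed(11)
# Synthetic daily concentrations over a 250-day working year:
# seasonal variation plus noise (arbitrary working-level units).
days = 250
true_conc = [1.0 + 0.4 * math.sin(2 * math.pi * d / days) + random.gauss(0, 0.2)
             for d in range(days)]

# Randomized scheme: 50 grab samples on randomly chosen working days.
sample_days = random.sample(range(days), 50)
estimate = statistics.mean(true_conc[d] for d in sample_days)
true_mean = statistics.mean(true_conc)
```

Random selection of the occasions is what removes the systematic bias a fixed schedule could introduce (e.g. always sampling in the same season or shift).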
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling and the restricted set of input variables available for selection at each split. Finally, I introduce R functions that perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
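The recursive binary partitioning at the core of tree learners can be sketched without R. This toy regression tree splits on the threshold that most reduces the sum of squared errors, a deliberate simplification of the conditional inference tests the article actually uses:

```python
import statistics

def sse(ys):
    """Sum of squared errors around the mean."""
    m = statistics.mean(ys)
    return sum((y - m) ** 2 for y in ys)

def grow(xs, ys, min_leaf=3):
    """Recursively split on the threshold that most reduces SSE;
    stop when no split improves or leaves would get too small."""
    if len(ys) < 2 * min_leaf:
        return statistics.mean(ys)            # leaf: predict the mean
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            best = (score, t, left, right)
    if best is None or best[0] >= sse(ys):
        return statistics.mean(ys)
    _, t, left, right = best
    lx = [x for x in xs if x < t]
    rx = [x for x in xs if x >= t]
    return (t, grow(lx, left, min_leaf), grow(rx, right, min_leaf))

def predict(tree, x):
    while isinstance(tree, tuple):
        t, l, r = tree
        tree = l if x < t else r
    return tree

# A step function the tree should recover: y = 0 below x = 5, y = 10 above.
xs = list(range(10))
ys = [0.0] * 5 + [10.0] * 5
tree = grow(xs, ys)
```

A random forest then repeats this construction on bootstrap samples with a random subset of candidate variables at each split, and averages the resulting trees.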
Effectivity of artrihpi irrigation for diabetic ulcer healing: A randomized controlled trial
NASA Astrophysics Data System (ADS)
Gayatri, Dewi; Asmorohadi, Aries; Dahlia, Debie
2018-02-01
The healing process of diabetic ulcers is often impeded by inflammation, infection, and a weakened immune state. High-pressure irrigation (10-15 psi) may be used to control the level of infection. This research was designed to identify the effectiveness of the artrihpi irrigation device for diabetic ulcers in public hospitals in Central Java. The study was a randomized controlled trial with a cross-over design. Sixty-four subjects were selected using a block randomization technique and divided into control and intervention groups. The intervention was given over 6 days, with wound healing evaluated every 3 days. The results demonstrated a significant decrease in healing scores after treatment, although the difference in healing scores between the two groups was not statistically significant. Nevertheless, wound healing in the artrihpi group was better than in the syringe (spuit) group. These results suggest that the artrihpi device may be a practical way to apply high-pressure irrigation to support the healing of diabetic ulcers.
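The block randomization used to allocate the sixty-four subjects can be sketched as permuted blocks, which keep the two arms balanced throughout enrolment (the block size of 4 is an assumption for illustration; the abstract does not state one):

```python
import random

def block_randomize(n_subjects, block_size=4, seed=9):
    """Permuted-block randomization: within each block, half the slots
    are intervention and half control, in random order, so the arms
    stay balanced as subjects enrol."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_subjects:
        block = (["intervention"] * (block_size // 2) +
                 ["control"] * (block_size // 2))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

arms = block_randomize(64)
```

Compared with simple coin-flip randomization, permuted blocks guarantee exactly equal group sizes at the end of every completed block.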
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, we derive the large-N limiting coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
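The sampling procedure these coalescents are built on, N i.i.d. Pareto(α) variables normalized by their sum, can be sketched directly via inverse-transform sampling (the value of α below is an arbitrary choice in the heavy-tailed regime):

```python
import random

def normalized_pareto_frequencies(n, alpha, seed=0):
    """Draw n i.i.d. Pareto(alpha) variables via inverse transform,
    X = U**(-1/alpha) with U uniform on (0, 1], then normalize by
    their sum to obtain a random discrete frequency vector."""
    rng = random.Random(seed)
    xs = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
    total = sum(xs)
    return [x / total for x in xs]

# alpha in [1, 2): the regime the paper links to Beta-coalescents.
freqs = normalized_pareto_frequencies(n=1000, alpha=1.5)
```

For small α the normalized vector is dominated by a few large entries (heavy tails), which is what produces the multiple-merger coalescent behaviour described above; for α ≥ 2 the tails are light enough that the Kingman limit is recovered.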
NASA Astrophysics Data System (ADS)
Wang, Lianfeng; Yan, Biao; Guo, Lijie; Gu, Dongdong
2018-04-01
A new transient mesoscopic model with a randomly packed powder bed has been proposed to investigate heat and mass transfer and laser processing quality between neighboring tracks during selective laser melting (SLM) of AlSi12 alloy, using the finite volume method (FVM) and considering the solid/liquid phase transition, temperature-dependent material properties, and interfacial forces. The results revealed that both the operating temperature and the resultant cooling rate were elevated by increasing the laser power. Accordingly, the viscosity of the liquid was significantly reduced at high laser power and the melt was characterized by a high velocity, which tended to produce more intensive convection within the pool. In this case, sufficient heat and mass transfer occurred at the interface between the previously fabricated tracks and the track currently being built, producing strong spreading between neighboring tracks and a resultant high-quality surface without obvious porosity. By contrast, the surface quality of SLM-processed components at relatively low laser power was notably degraded due to the limited and insufficient heat and mass transfer at the interface between neighboring tracks. Furthermore, the experimentally observed morphologies of the top surface were in full accordance with the results calculated via simulation.
Point process statistics in atom probe tomography.
Philippe, T; Duguay, S; Grancher, G; Blavette, D
2013-09-01
We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to the nearest neighbour is an attractive statistic for exhibiting a non-random atomic distribution. A χ² test based on the distribution of distances to the nearest neighbour has been developed to detect deviations from randomness. Best-fit methods based on the first-nearest-neighbour distance (1NN method) and the pair correlation function are presented and compared as means of assessing the chemical composition of tiny clusters. Delaunay tessellation for cluster selection has also been illustrated. These statistical tools have been applied to APT experiments on microelectronics materials. Copyright © 2012 Elsevier B.V. All rights reserved.
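The mean-distance-to-nearest-neighbour idea mentioned above is closely related to the classical Clark-Evans statistic; the sketch below illustrates that simpler statistic (not the authors' χ² test), with hypothetical function names:

```python
import math

def mean_nn_distance(points):
    """Mean distance from each point to its nearest neighbour (2-D)."""
    dists = []
    for i, (xi, yi) in enumerate(points):
        nn = min(math.hypot(xi - xj, yi - yj)
                 for j, (xj, yj) in enumerate(points) if j != i)
        dists.append(nn)
    return sum(dists) / len(dists)

def clark_evans_ratio(points, area):
    """Observed / expected mean NN distance.  Under complete spatial
    randomness the expected mean NN distance is 1 / (2 * sqrt(density));
    ratios < 1 suggest clustering, > 1 suggest ordering."""
    density = len(points) / area
    return mean_nn_distance(points) / (1.0 / (2.0 * math.sqrt(density)))

# A perfectly regular unit grid is "more ordered" than random: ratio > 1.
grid = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
ratio = clark_evans_ratio(grid, area=4.0)
```

A clustered atomic distribution (e.g. solute atoms gathered into tiny precipitates) would instead push the ratio below 1.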
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure, P_f, for a highly redundant structure with random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program that selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The relative importance of the various uncertainties involved in the computational process to the resulting bounds of P_f (the sensitivity) is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
NASA Astrophysics Data System (ADS)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministically chaotic? This issue is still controversial. The application of several independent methods, techniques, and tools to daily river flow data gives a consistent, reliable, and clear-cut answer to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge records from two selected gauging stations on a mountain river in southern Poland, the Raba River.
Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George
2017-09-01
Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time from site selection to authorization to randomize, the time from authorization to randomize to the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, the specialty of the principal investigator, and the site type. For 147 sites, the median time from site selection to authorization to randomize was 9.9 months (interquartile range, 7.7, 12.4), and factors associated with early site activation were not identified. The median time from authorization to randomize to a first randomization was 4.6 months (interquartile range, 2.6, 10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and the other factors examined were not significantly associated with time to randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance of >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected with the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217.
© 2017 American Heart Association, Inc.
Lonely Days and Lonely Nights: Completing the Doctoral Dissertation.
ERIC Educational Resources Information Center
Germeroth, Darla
A study examined areas of the doctoral dissertation process that are often problematic for the Ph.D./Ed.D. candidate in the field of communication. Subjects, 250 randomly selected Speech Communication Association members holding a Ph.D. or an Ed.D., were surveyed. Of the 250 surveys mailed, 137 were returned, representing a 54.8% return rate.…
Effects of Verbal Behavior within Curriculum Development Committees on the Curriculum Product.
ERIC Educational Resources Information Center
Talmage, Harriet
An attempt was made to ascertain what type of verbal interaction behavior manifested by a group given a problem in curriculum development affects the quality of the product. Thirty ad hoc groups, selected randomly, were given curriculum development tasks to solve. Curriculum Guide Form (CGF) and Bales' Interaction Process Analysis (IPA) were used…
Nursing Home Quality, Cost, Staffing, and Staff Mix
ERIC Educational Resources Information Center
Rantz, Marilyn J.; Hicks, Lanis; Grando, Victoria; Petroski, Gregory F.; Madsen, Richard W.; Mehr, David R.; Conn, Vicki; Zwygart-Staffacher, Mary; Scott, Jill; Flesner, Marcia; Bostick, Jane; Porter, Rose; Maas, Meridean
2004-01-01
Purpose: The purpose of this study was to describe the processes of care, organizational attributes, cost of care, staffing level, and staff mix in a sample of Missouri homes with good, average, and poor resident outcomes. Design and Methods: A three-group exploratory study design was used, with 92 nursing homes randomly selected from all nursing…
ERIC Educational Resources Information Center
Smith, Jane Ellen; Gianini, Loren M.; Garner, Bryan R.; Malek, Karen L.; Godley, Susan H.
2014-01-01
This study evaluated a process for training raters to reliably rate clinicians delivering the Adolescent Community Reinforcement Approach (A-CRA) in a national dissemination project. The unique A-CRA coding system uses specific behavioral anchors throughout its 73 procedure components. Five randomly selected raters each rated "passing"…
ERIC Educational Resources Information Center
Glik, Deborah C.; Eisenman, David P.; Zhou, Qiong; Tseng, Chi-Hong; Asch, Steven M.
2014-01-01
Only 40-50% of households in the United States are currently disaster prepared. In this intervention study, respondent-driven sampling was used to select a sample (n = 187) of low income, Latino residents of Los Angeles County, randomly assigned into two treatment conditions: (i) household preparedness education received through…
The Influence of Age, Sex, and School Size Upon the Development of Formal Operational Thought.
ERIC Educational Resources Information Center
Lewis, William Roedolph
School size, age and sex of students as related to scores on the six Piagetian Developmental Thought Processes Tasks were investigated. Five hundred seventy-four students from seventh through twelfth grades were randomly selected from 25 different schools classified as small, medium, or large. Data were treated through factorial analysis of…
ERIC Educational Resources Information Center
Freeland, Peter
2013-01-01
Charles Darwin supposed that evolution involved a process of gradual change, generated randomly, with the selection and retention over many generations of survival-promoting features. Some theists have never accepted this idea. "Intelligent design" is a relatively recent theory, supposedly based on scientific evidence, which attempts to…
ERIC Educational Resources Information Center
Scott, Joseph J.; Hansen, Vibeke; Morgan, Philip J.; Plotnikoff, Ronald C.; Lubans, David R.
2018-01-01
Objective: To explore young people's perceptions of pedometers and investigate behaviours exhibited while being monitored. Design: Qualitative design using six focus groups with participants (mean age 14.7 years). Setting: Study participants (n = 24) were randomly selected from a previous study of 123 young people aged 14-15 years from three…
De novo selection of oncogenes.
Chacón, Kelly M; Petti, Lisa M; Scheideman, Elizabeth H; Pirazzoli, Valentina; Politi, Katerina; DiMaio, Daniel
2014-01-07
All cellular proteins are derived from preexisting ones by natural selection. Because of the random nature of this process, many potentially useful protein structures never arose or were discarded during evolution. Here, we used a single round of genetic selection in mouse cells to isolate chemically simple, biologically active transmembrane proteins that do not contain any amino acid sequences from preexisting proteins. We screened a retroviral library expressing hundreds of thousands of proteins consisting of hydrophobic amino acids in random order to isolate four 29-aa proteins that induced focus formation in mouse and human fibroblasts and tumors in mice. These proteins share no amino acid sequences with known cellular or viral proteins, and the simplest of them contains only seven different amino acids. They transformed cells by forming a stable complex with the platelet-derived growth factor β receptor transmembrane domain and causing ligand-independent receptor activation. We term this approach de novo selection and suggest that it can be used to generate structures and activities not observed in nature, create prototypes for novel research reagents and therapeutics, and provide insight into cell biology, transmembrane protein-protein interactions, and possibly virus evolution and the origin of life.
An evolutionary approach to financial history.
Ferguson, N
2009-01-01
Financial history is not conventionally thought of in evolutionary terms, but it should be. Traditional ways of thinking about finance, dating back to Hilferding, emphasize the importance of concentration and economies of scale. But these approaches overlook the rich "biodiversity" that characterizes the financial world. They also overlook the role of natural selection. To be sure, natural selection in the financial world is not exactly analogous to the processes first described by Darwin and elaborated on by modern biologists. There is conscious adaptation as well as random mutation. Moreover, there is something resembling "intelligent design" in finance, whereby regulators and legislators act in a quasidivine capacity, putting dinosaurs on life support. The danger is that such interventions in the natural processes of the market may ultimately distort the evolutionary process, by getting in the way of Schumpeter's "creative destruction."
Spanier, Matthew J
2010-12-01
Leatherback sea turtles (Dermochelys coriacea) nest on dynamic, erosion-prone beaches. Erosive processes and resulting nest loss have long been presumed to be a hindrance to clutch survival. In order to better understand how leatherbacks cope with unstable nesting beaches, I investigated the role of beach erosion in leatherback nest site selection at Playa Gandoca, Costa Rica. I also examined the potential effect of nest relocation, a conservation strategy in place at Playa Gandoca to prevent nest loss to erosion, on the temperature of incubating clutches. I monitored changes in beach structure as a result of erosion at natural nest sites during the time the nest was laid, as well as in subsequent weeks. To investigate slope as a cue for nest site selection, I measured the slope of the beach where turtles ascended from the sea to nest, as well as the slopes at other random locations on the beach for comparison. I examined temperature differences between natural and relocated nest sites with thermocouples placed in the sand at depths typical of leatherback nests. Nests were distributed non-randomly in a clumped distribution along the length of the beach and laid at locations that were not undergoing erosion. The slope at nest sites was significantly different than at randomly chosen locations on the beach. The sand temperature at nest depths was significantly warmer at natural nest sites than at locations of relocated nests. The findings of this study suggest leatherbacks actively select nest sites that are not undergoing erosive processes, with slope potentially being used as a cue for site selection. The relocation of nests appears to be inadvertently cooling the nest environment. Due to the fact that leatherback clutches undergo temperature-dependent sex determination, the relocation of nests may be producing an unnatural male biasing of hatchlings. 
The results of this study suggest that the necessity of relocation practices, largely in place to protect nests from erosion, should be reevaluated to ensure the proper conservation of this critically endangered species.
Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zeng, Xin-An
2015-01-01
There is increased interest in the application of hyperspectral imaging (HSI) for assessing food quality, safety, and authenticity. HSI provides an abundance of spatial and spectral information from foods by combining spectroscopy and imaging, yielding hundreds of contiguous wavebands for each spatial position of a food sample; this high dimensionality is often called the curse of dimensionality. It is therefore desirable to employ feature selection algorithms to decrease the computational burden and increase predictive accuracy, which is especially relevant to the development of online applications. Recently, a variety of feature selection algorithms have been proposed; these can be categorized into three groups by search strategy, namely complete search, heuristic search, and random search. This review introduces the fundamentals of each algorithm, illustrates its applications to hyperspectral data analysis in the food field, and discusses the advantages and disadvantages of these algorithms. It is hoped that this review will provide a guideline for feature selection and data processing in the future development of hyperspectral imaging techniques for foods.
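The random-search strategy named above can be sketched as follows; this is a toy illustration in which a hypothetical scoring function stands in for what would normally be a cross-validated model accuracy:

```python
import random

def random_search_selection(n_bands, score_fn, subset_size, n_iter=2000, seed=1):
    """Random-search feature selection: repeatedly draw random waveband
    subsets and keep the best-scoring one.  All names and parameters
    here are illustrative, not from the review."""
    rng = random.Random(seed)
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iter):
        subset = sorted(rng.sample(range(n_bands), subset_size))
        s = score_fn(subset)
        if s > best_score:
            best_subset, best_score = subset, s
    return best_subset, best_score

# Toy scorer: pretend three wavebands out of 200 are the informative ones.
informative = {10, 42, 87}
best, score = random_search_selection(
    n_bands=200, score_fn=lambda sub: len(informative & set(sub)), subset_size=5)
```

Unlike complete search, this approach has no optimality guarantee, but its cost is controlled by `n_iter` rather than by the exponential number of possible subsets.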
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2014-01-01
Network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium in 2 × 2 prisoner's dilemma games. Previous studies have shown that cooperation can be enhanced by using a skewed, rather than a random, selection of partners for either strategy adaptation or the gaming process. Here we show that combining the two processes for selecting a gaming partner and an adaptation partner further enhances cooperation, provided that an appropriate selection rule and parameters are adopted. We also show that this combined model significantly enhances cooperation by reducing the degree of activity in the underlying network; we measure the degree of activity with a quantity called the effective degree. More precisely, during the initial evolutionary stage, in which the global cooperation fraction declines as initially allocated cooperators become defectors, the model shows that weak cooperative clusters perish and only a few strong cooperative clusters survive. This finding is the most important key to attaining significant network reciprocity.
Randomized evaluation of a web based interview process for urology resident selection.
Shah, Satyan K; Arora, Sanjeev; Skipper, Betty; Kalishman, Summers; Timm, T Craig; Smith, Anthony Y
2012-04-01
We determined whether a web based interview process for resident selection could effectively replace the traditional on-site interview. For the 2010 to 2011 match cycle, applicants to the University of New Mexico urology residency program were randomized to participate in a web based interview process via Skype or a traditional on-site interview process. Both methods included interviews with the faculty, a tour of facilities and the opportunity to ask current residents any questions. To maintain fairness the applicants were then reinterviewed via the opposite process several weeks later. We assessed comparative effectiveness, cost, convenience and satisfaction using anonymous surveys largely scored on a 5-point Likert scale. Of 39 total participants (33 applicants and 6 faculty) 95% completed the surveys. The web based interview was less costly to applicants (mean $171 vs $364, p=0.05) and required less time away from school (10% missing 1 or more days vs 30%, p=0.04) compared to traditional on-site interview. However, applicants perceived the web based interview process as less effective than traditional on-site interview, with a mean 6-item summative effectiveness score of 21.3 vs 25.6 (p=0.003). Applicants and faculty favored continuing the web based interview process in the future as an adjunct to on-site interviews. Residency interviews can be successfully conducted via the Internet. The web based interview process reduced costs and improved convenience. The findings of this study support the use of videoconferencing as an adjunct to traditional interview methods rather than as a replacement. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
2014-02-01
moisture level of 14% dry soil mass was maintained for the duration of the study by weekly additions of ASTM Type I water. Soil samples were collected...maintain the initial soil moisture level. One cluster of Orchard grass straw was harvested from a set of randomly selected replicate containers...decomposition is among the most integrating processes within the soil ecosystem because it involves complex interactions of soil microbial, plant, and
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
Coupling GIS and multivariate approaches to reference site selection for wadeable stream monitoring.
Collier, Kevin J; Haigh, Andy; Kelly, Johlene
2007-04-01
A Geographic Information System (GIS) was used to identify potential reference sites for wadeable stream monitoring, and multivariate analyses were applied to test whether invertebrate communities reflected a priori spatial and stream type classifications. We identified potential reference sites in segments with unmodified vegetation cover adjacent to the stream and in >85% of the upstream catchment. We then used various landcover, amenity and environmental impact databases to eliminate sites that had potential anthropogenic influences upstream and that fell into a range of access classes. Each site identified by this process was coded by four dominant stream classes and seven zones, and 119 candidate sites were randomly selected for follow-up assessment. This process yielded 16 sites conforming to reference site criteria using a conditional-probabilistic design, and these were augmented by an additional 14 existing or special interest reference sites. Non-metric multidimensional scaling (NMS) analysis of percent abundance invertebrate data indicated significant differences in community composition among some of the zones and stream classes identified a priori, providing qualified support for this framework in reference site selection. NMS analysis of range-standardised condition and diversity metrics derived from the invertebrate data indicated a core set of 26 closely related sites, and four outliers that were considered atypical of reference site conditions and were subsequently dropped from the network. Use of GIS linked to stream typology, available spatial databases and aerial photography greatly enhanced the objectivity and efficiency of reference site selection. The multi-metric ordination approach reduced variability among stream types and the bias associated with non-random site selection, and provided an effective way to identify representative reference sites.
Integrated Structural Analysis and Test Program
NASA Technical Reports Server (NTRS)
Kaufman, Daniel
2005-01-01
An integrated structural-analysis and structure-testing computer program is being developed in order to: automate repetitive processes in testing and analysis; accelerate pre-test analysis; accelerate reporting of tests; facilitate planning of tests; improve execution of tests; create a vibration, acoustics, and shock test database; and integrate analysis and test data. The software package includes modules pertaining to sinusoidal and random vibration, shock and time replication, acoustics, base-driven modal survey, and mass properties and static/dynamic balance. The program is commanded by use of ActiveX controls, so there is minimal need to generate command lines. Analysis or test files are selected by opening a Windows Explorer display. After the desired input file is selected, the program proceeds to a so-called analysis data process or test data process, depending on the type of input data. The status of the process is given by a Windows status bar, and when processing is complete, the data are reported in graphical, tabular, and matrix form.
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
NASA Astrophysics Data System (ADS)
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and for understanding groundwater and surface-water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the location and delineation of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of the depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of sinkhole prediction. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, yielded moderate success, with an average accuracy of 73.96%. This study suggests that an automatic sinkhole-extraction procedure such as the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
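The weighting idea behind a weighted random forest on imbalanced data can be illustrated with the common "balanced" class-weight heuristic; a minimal sketch (the study's exact weighting scheme is not specified in the abstract, so this rule is an assumption):

```python
from collections import Counter

def balanced_class_weights(labels):
    """'Balanced' class weights, w_c = n_samples / (n_classes * n_c):
    the minority class is up-weighted in proportion to its rarity, so
    misclassifying a rare sinkhole costs more than misclassifying a
    common non-sinkhole depression."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

# e.g. 900 non-sinkhole depressions vs. 100 true sinkholes
labels = ["non-sinkhole"] * 900 + ["sinkhole"] * 100
weights = balanced_class_weights(labels)
```

These per-class weights would then scale each training sample's contribution when growing the forest's trees, countering the classifier's tendency to favor the majority class.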
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
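Recommendations (1) and (5) above, reviewing the schedule before the trial and keeping the allocation reproducible and verifiable, can be supported by a seeded schedule generator; a minimal permuted-block sketch (arm labels, block size, and seed are illustrative):

```python
import random

def permuted_block_schedule(n_blocks, block=("A", "A", "B", "B"), seed=2024):
    """Permuted-block randomization: each fixed block is shuffled so the
    treatment arms stay balanced throughout enrolment.  A fixed seed
    makes the schedule reproducible, so it can be reviewed before the
    trial starts and verified afterwards."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        b = list(block)
        rng.shuffle(b)
        schedule.extend(b)
    return schedule

schedule = permuted_block_schedule(25)  # 100 assignments, 50 per arm
```

Generating the whole schedule up front, rather than calling an on-demand random number generator at each enrolment, also addresses recommendation (2): the complete allocation sequence exists as a dataset that can be archived and re-derived from the seed.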
NASA Technical Reports Server (NTRS)
McGreevy, Michael W.; Statler, Irving C.
1998-01-01
An exploratory study was conducted to identify commercial aviation incidents relevant to a "controlled flight into terrain" (CFIT) accident using a NASA-developed text processing method. The QUORUM method was used to rate 67,820 incident narratives, virtually all of the narratives in the Aviation Safety Reporting System (ASRS) database, according to their relevance to two official reports on the crash of American Airlines Flight 965 near Cali, Colombia, in December 1995. For comparison with QUORUM's ratings, three experienced ASRS analysts read the reports of the crash and independently rated the relevance of the 100 narratives that were most highly rated by QUORUM, as well as 100 narratives randomly selected from the database. Eighty-four of the 100 QUORUM-selected narratives were rated as relevant to the Cali accident by one or more of the analysts. The relevant incidents involved a variety of factors, including over-reliance on automation, confusion and changes during descent/approach, terrain avoidance, and operations in foreign airspace. In addition, the QUORUM collection of incidents was found to be significantly more relevant than the random collection.
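Rating narratives by relevance to a reference report is, in its simplest form, a text-similarity problem; the sketch below uses bag-of-words cosine similarity as a generic stand-in (the actual QUORUM method uses its own, more sophisticated contextual model, and all texts here are invented examples):

```python
import math
from collections import Counter

def cosine_relevance(reference, narrative):
    """Bag-of-words cosine similarity between a reference text and an
    incident narrative.  Generic illustration only, not QUORUM itself."""
    q = Counter(reference.lower().split())
    d = Counter(narrative.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

# Invented example texts: a CFIT-themed reference and two narratives.
report = "controlled flight into terrain during descent near high terrain"
near_miss = "crew received terrain warning during descent and initiated escape"
unrelated = "ground crew mislabeled cargo paperwork at the gate"
relevant_score = cosine_relevance(report, near_miss)
irrelevant_score = cosine_relevance(report, unrelated)
```

Ranking all narratives by such a score and taking the top 100 mirrors the study's design of comparing the highest-rated narratives against a random sample.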
Holographic memories with encryption-selectable function
NASA Astrophysics Data System (ADS)
Su, Wei-Chia; Lee, Xuan-Hao
2006-03-01
Volume holographic storage has received increasing attention owing to its potentially high storage capacity and access rate. Meanwhile, encrypted holographic memory using the random-phase-encoding technique is attractive to the optical community due to the growing demand for protection of information. In this paper, encryption-selectable holographic storage algorithms in LiNbO3 using angular multiplexing are proposed and demonstrated. Encryption-selectable holographic memory is an advanced concept of secure storage for content protection. It offers the flexibility of optionally encrypting the data, or not, during the recording process. In our system design, the choice between encrypted and non-encrypted storage is switched between a random phase pattern and a uniform phase pattern. Based on a 90-degree geometry, the input patterns, for both encrypted and non-encrypted storage, are stored via angular multiplexing with reference plane waves at different incident angles. An image is optionally encrypted by sliding the ground glass into one of the recording waves, or removing it, at each exposure. The ground glass is the key for encryption. It is also an important key available to the authorized user for decrypting the encrypted information.
Frequency of RNA–RNA interaction in a model of the RNA World
STRIGGLES, JOHN C.; MARTIN, MATTHEW B.; SCHMIDT, FRANCIS J.
2006-01-01
The RNA World model for prebiotic evolution posits the selection of catalytic/template RNAs from random populations. The mechanisms by which these random populations could be generated de novo are unclear. Non-enzymatic and RNA-catalyzed nucleic acid polymerizations are poorly processive, which means that the resulting short-chain RNA population could contain only limited diversity. Nonreciprocal recombination of smaller RNAs provides an alternative mechanism for the assembly of larger species with concomitantly greater structural diversity; however, the frequency of any specific recombination event in a random RNA population is limited by the low probability of an encounter between any two given molecules. This low probability could be overcome if the molecules capable of productive recombination were redundant, with many nonhomologous but functionally equivalent RNAs being present in a random population. Here we report fluctuation experiments to estimate the redundancy of the set of RNAs in a population of random sequences that are capable of non-Watson-Crick interaction with another RNA. Parallel SELEX experiments showed that at least one in 10^6 random 20-mers binds to the P5.1 stem–loop of Bacillus subtilis RNase P RNA with affinities equal to that of its naturally occurring partner. This high frequency predicts that a single RNA in an RNA World would encounter multiple interacting RNAs within its lifetime, supporting recombination as a plausible mechanism for prebiotic RNA evolution. The large number of equivalent species implies that the selection of any single interacting species in the RNA World would be a contingent event, i.e., one resulting from historical accident. PMID:16495233
Grady, S.J.; Casey, G.D.
2001-01-01
Data on volatile organic compounds (VOCs) in drinking water supplied by 2,110 randomly selected community water systems (CWSs) in 12 Northeast and Mid-Atlantic States indicate 64 VOC analytes were detected at least once during 1993-98. Selection of the 2,110 CWSs inventoried for this study targeted 20 percent of the 10,479 active CWSs in the region and represented a random subset of the total distribution by State, source of water, and size of system. The data include 21,635 analyses of drinking water collected for compliance monitoring under the Safe Drinking Water Act; the data mostly represent finished drinking water collected at the point of entry to, or at more distal locations within, each CWS's distribution system following any water-treatment processes. VOC detections were more common in drinking water supplied by large systems (serving more than 3,300 people) that tap surface-water sources or both surface- and ground-water sources than in small systems supplied exclusively by ground-water sources. Trihalomethane (THM) compounds, which are potentially formed during the process of disinfecting drinking water with chlorine, were detected in 45 percent of the randomly selected CWSs. Chloroform was the most frequently detected THM, reported in 39 percent of the CWSs. The gasoline additive methyl tert-butyl ether (MTBE) was the most frequently detected VOC in drinking water after the THMs. MTBE was detected in 8.9 percent of the 1,194 randomly selected CWSs that analyzed samples for MTBE at any reporting level, and it was detected in 7.8 percent of the 1,074 CWSs that provided MTBE data at the 1.0-µg/L (microgram per liter) reporting level. As with other VOCs reported in drinking water, most MTBE concentrations were less than 5.0 µg/L, and less than 1 percent of CWSs reported MTBE concentrations at or above the 20.0-µg/L lower limit recommended by the U.S. Environmental Protection Agency's Drinking-Water Advisory.
The frequency of MTBE detections in drinking water is significantly related to high-MTBE-use patterns. Detections are five times more likely in areas where MTBE is or has been used in gasoline at greater than 5 percent by volume as part of the oxygenated or reformulated (OXY/RFG) fuels program. Detection frequencies of the individual gasoline compounds (benzene, toluene, ethylbenzene, and xylenes (BTEX)) were mostly less than 3 percent of the randomly selected CWSs, but collectively, BTEX compounds were detected in 8.4 percent of CWSs. BTEX concentrations also were low, and just three drinking-water samples contained BTEX at concentrations exceeding 20 µg/L. Co-occurrence of MTBE and BTEX was rare, and only 0.8 percent of CWSs reported simultaneous detections of MTBE and BTEX compounds. Low concentrations and co-occurrence of MTBE and BTEX indicate most gasoline contaminants in drinking water probably represent nonpoint sources. Solvents were frequently detected in drinking water in the 12-State area. One or more of 27 individual solvent VOCs were detected at any reporting level in 3,080 drinking-water samples from 304 randomly selected CWSs (14 percent) and in 206 CWSs (9.8 percent) at concentrations at or above 1.0 µg/L. High co-occurrence among solvents probably reflects common sources and the presence of transformation by-products. Other VOCs were detected relatively rarely in drinking water in the 12-State area. Six percent (127) of the 2,110 randomly selected CWSs reported concentrations of 16 VOCs at or above drinking-water criteria. The 127 CWSs collectively serve 2.6 million people. The occurrence of VOCs in drinking water was significantly associated (p<0.0001) with high-population-density urban areas. New Jersey, Massachusetts, and Rhode Island, States with substantial urbanization and high population density, had the highest frequency of VOC detections among the 12 States.
More than two-thirds of the randomly selected CWSs in New Jersey reported detecting VOC concentrations in drinking water at or above 1.0 µg/L.
ERIC Educational Resources Information Center
Nutting, Paul A.; And Others
Six Indian Health Service (IHS) units, chosen in a non-random manner, were evaluated via a quality assessment methodology currently under development by the IHS Office of Research and Development. A set of seven health problems (tracers) was selected to represent major health problems, and clinical algorithms (process maps) were constructed for…
ERIC Educational Resources Information Center
Islam, Md. Aminul; Rahim, Noor Asliza Abdul; Liang, Tan Chee; Momtaz, Hasina
2011-01-01
This research attempted to find out the effect of demographic factors on the effectiveness of the e-learning system at a higher learning institution. Students from this institution were randomly selected in order to evaluate the effectiveness of the learning system in the students' learning process. The primary data source is the questionnaires that…
ERIC Educational Resources Information Center
Ferguson, M. A.; Valenti, JoAnn Myer
Using radon (a naturally-occurring radioactive gas linked to lung cancer) as the health risk factor, a study examined which risk-taking tendencies interact with different health-risk message strategies. A phone survey pretested 837 randomly selected homeowners from three Florida counties with the highest levels of radon in the state (706 agreed to…
ERIC Educational Resources Information Center
Tough, Suzanne; Rikhy, Shivani; Benzies, Karen; Vekved, Monica; Kehler, Heather; Johnston, David W.
2013-01-01
Research Findings: This study assessed public perceptions of child care and its providers in a Canadian province where government funding for child care includes subsidies and a voluntary accreditation process. In 2007-2008, 1,443 randomly selected adults in Alberta, Canada, completed a telephone survey. Individuals were eligible to participate if…
USDA-ARS?s Scientific Manuscript database
The National Beef Quality Audit – 2011 (NBQA-2011) assessed the current status of quality and consistency of fed steers and heifers. Beef carcasses (n = 9,802), representing approximately 10 percent of each production lot in 28 beef processing facilities, were selected randomly for the survey. Car...
Emergent Complexity in Conway's Game of Life
NASA Astrophysics Data System (ADS)
Gotts, Nick
It is shown that both small, finite patterns and random infinite very low density ("sparse") arrays of the Game of Life can produce emergent structures and processes of great complexity, through ramifying feedback networks and cross-scale interactions. The implications are discussed: it is proposed that analogous networks and interactions may have been precursors to natural selection in the real world.
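The Game of Life dynamics behind these results follow the standard Conway rules: a live cell survives with two or three live neighbours, and a dead cell becomes live with exactly three. A minimal sketch using a sparse-set representation, which suits the very-low-density arrays discussed above:

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2: three cells in a row flip
# between horizontal and vertical orientation.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert life_step(life_step(blinker)) == blinker
```

Because only live cells and their neighbours are visited, this representation scales with the number of live cells rather than the (possibly infinite) board size.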
Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent workflow of data analysis more lucid. However, the normalization process still perplexes researchers, and its effects are often ignored. In order to reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were used to evaluate the unwanted variations and the efficiency of the normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta were compared across the differently normalized data sets. The results indicated that choosing a normalization method is difficult: the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and the number of potential biomarkers. Lastly, an integrated strategy for normalization method selection is recommended.
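Of the five methods compared, probabilistic quotient normalization (PQN) is the easiest to sketch. The following is a generic PQN implementation; using the median spectrum across samples as the reference is a common convention, not necessarily the paper's exact pipeline:

```python
import numpy as np

def pqn(X):
    """Probabilistic quotient normalization.
    X: (samples x features) intensity matrix. Each sample is divided by
    the median ratio of its features to a reference spectrum (here, the
    feature-wise median across samples), a robust per-sample dilution
    estimate."""
    reference = np.median(X, axis=0)
    quotients = X / reference               # per-feature dilution estimates
    factors = np.median(quotients, axis=1)  # one robust factor per sample
    return X / factors[:, None]

# A sample "diluted" by a factor of 2 is rescaled back onto the others.
X = np.array([[1.0, 2.0, 4.0],
              [2.0, 4.0, 8.0],   # same profile, 2x overall intensity
              [1.0, 2.0, 4.0]])
Xn = pqn(X)
assert np.allclose(Xn[1], Xn[0])
```

The point of the median-of-quotients step is robustness: a handful of genuinely changing features (candidate biomarkers) barely moves the per-sample factor, whereas a total-signal normalization would be dragged by them.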
A case study of evolutionary computation of biochemical adaptation
NASA Astrophysics Data System (ADS)
François, Paul; Siggia, Eric D.
2008-06-01
Simulations of evolution have a long history, but their relation to biology is questioned because of the perceived contingency of evolution. Here we provide an example of a biological process, adaptation, where simulations are argued to approach closer to biology. Adaptation is a common feature of sensory systems, and a plausible component of other biochemical networks because it rescales upstream signals to facilitate downstream processing. We create random gene networks numerically, by linking genes with interactions that model transcription, phosphorylation and protein-protein association. We define a fitness function for adaptation in terms of two functional metrics, and show that any reasonable combination of them will yield the same adaptive networks after repeated rounds of mutation and selection. Convergence to these networks is driven by positive selection and thus fast. There is always a path in parameter space of continuously improving fitness that leads to perfect adaptation, implying that the actual mutation rates we use in the simulation do not bias the results. Our results imply a kinetic view of evolution, i.e., it favors gene networks that can be learned quickly from the random examples supplied by mutation. This formulation allows for deductive predictions of the networks realized in nature.
Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei
2015-01-01
Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium point (i.e., the phenotype of an attractor) shifting process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton-Jacobi inequality-constrained optimization problem.
The simulation revealed that the phenotypic shift of the lung cancer-associated cell network takes 54.5 years from a normal state to stage I cancer, 1.5 years from stage I to stage II cancer, and 2.5 years from stage II to stage III cancer, in reasonable agreement with statistics on the average age of lung cancer onset. These results suggest that a robust negative feedback scheme, based on a stochastic evolutionary game strategy, plays a critical role in an evolutionary biological network of carcinogenesis under a natural selection scheme. PMID:26244004
Statistical mechanics of scale-free gene expression networks
NASA Astrophysics Data System (ADS)
Gross, Eitan
2012-12-01
The gene co-expression networks of many organisms including bacteria, mice and man exhibit scale-free distribution. This heterogeneous distribution of connections decreases the vulnerability of the network to random attacks and thus may confer on the genetic replication machinery an intrinsic resilience to such attacks, triggered by changing environmental conditions that the organism may be subject to during evolution. This resilience to random attacks comes at an energetic cost, however, reflected by the lower entropy of the scale-free distribution compared to the more homogeneous, random network. In this study we found that the cell cycle-regulated gene expression pattern of the yeast Saccharomyces cerevisiae obeys a power-law distribution with an exponent α = 2.1 and an entropy of 1.58. The latter is very close to the maximal value of 1.65 obtained from linear optimization of the entropy function under the constraint of a constant cost function, determined by the average degree connectivity.
Effects of feature-selective and spatial attention at different stages of visual processing.
Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M
2011-01-01
We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.
Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N
2016-11-01
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Randomdec Signatures
NASA Technical Reports Server (NTRS)
Chang, C. S.
1975-01-01
The feasibility of utilizing the random decrement method in conjunction with a signature analysis procedure to determine the dynamic characteristics of an aeroelastic system, for the purpose of on-line prediction of the potential onset of flutter, was examined. Digital computer programs were developed to simulate sampled response signals of a two-mode aeroelastic system. Simulated response data were used to test the random decrement method. A special curve-fit approach was developed for analyzing the resulting signatures. A number of numerical 'experiments' were conducted on the combined processes. The method is capable of determining frequency and damping values accurately from randomdec signatures of carefully selected lengths.
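The random decrement idea itself is compact: segments of the response are extracted whenever a trigger condition is met, and their average (the randomdec signature) suppresses the random excitation while preserving the free-decay character of the structure. A minimal sketch with a level up-crossing trigger, one common choice; the report's exact trigger and the curve-fit step are not reproduced here, and the test signal is synthetic:

```python
import numpy as np

def randomdec(signal, threshold, length):
    """Random decrement signature: average the `length`-sample segments
    of `signal` that begin wherever it up-crosses `threshold`. Zero-mean
    random content averages out across segments."""
    starts = [i for i in range(1, len(signal) - length)
              if signal[i - 1] < threshold <= signal[i]]
    segments = np.array([signal[i:i + length] for i in starts])
    return segments.mean(axis=0)

# Synthetic stand-in for a sampled aeroelastic response:
# a lightly damped 5 Hz mode plus measurement noise, at 100 Hz sampling.
rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.01)
x = np.exp(-0.05 * t) * np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)

sig = randomdec(x, threshold=0.5, length=200)  # 2-second signature
```

Frequency and damping can then be fitted to `sig` as if it were a free-decay record, which is the "signature analysis" step the abstract refers to.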
Wilson, Jack H; Criss, Amy H; Spangler, Sean A; Walukevich, Katherine; Hewett, Sandra
2017-10-01
Nonsteroidal anti-inflammatory drugs work by non-selectively inhibiting cyclooxygenase enzymes. Evidence indicates that metabolites of the cyclooxygenase pathway play a critical role in the process of learning and memory. We evaluated whether acute naproxen treatment impairs short-term working memory, episodic memory, or semantic memory in a young, healthy adult population. Participants received a single dose of placebo or naproxen (750 mg) in random order separated by 7-10 days. Two hours following administration, participants completed five memory tasks. The administration of acute high-dose naproxen had no effect on memory in healthy young adults.
Optical memory development. Volume 2: Gain-assisted holographic storage media
NASA Technical Reports Server (NTRS)
Gange, R. A.; Mezrich, R. S.
1972-01-01
Thin deformable films were investigated for use as the storage medium in a holographic optical memory. The research was directed toward solving the problems of material fatigue, selective heat addressing, electrical charging of the film surface and charge patterning by light. A number of solutions to these problems were found but the main conclusion to be drawn from the work is that deformable media which employ heat in the recording process are not satisfactory for use in a high-speed random-access read/write holographic memory. They are, however, a viable approach in applications where either high speed or random-access is not required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Naim, Eli; Krapivsky, Paul
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected, and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
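The merge rule described above is easy to simulate directly. A minimal sketch of the main variant; the monomer initial condition and step count are illustrative choices, not taken from the paper:

```python
import random

def aggregate_with_choice(initial_sizes, steps, seed=0):
    """Aggregation with choice: at each step, pick one target cluster and
    two candidate clusters at random; the target merges with the LARGER
    of the two candidates."""
    rng = random.Random(seed)
    sizes = list(initial_sizes)
    for _ in range(steps):
        if len(sizes) < 3:
            break
        target, c1, c2 = rng.sample(range(len(sizes)), 3)  # 3 distinct clusters
        winner = c1 if sizes[c1] >= sizes[c2] else c2
        sizes[target] += sizes[winner]                     # merge
        del sizes[winner]                                  # winner is absorbed
    return sizes

clusters = aggregate_with_choice([1] * 1000, steps=500)
assert sum(clusters) == 1000   # total mass is conserved
assert len(clusters) == 500    # each merge removes exactly one cluster
```

Swapping the comparison (merging with the smaller candidate) gives the complementary case the abstract mentions, with its abundance of moderate-size clusters.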
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from the cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
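Three of the probability designs named above can be illustrated in a few lines. This sketch uses a hypothetical 100-element population of record IDs; the stratum names are invented for illustration:

```python
import random

rng = random.Random(42)
population = list(range(1, 101))   # hypothetical patient record IDs

# Simple random sampling: every element equally likely, no replacement.
simple = rng.sample(population, 10)

# Systematic sampling: random start, then every k-th element.
k = len(population) // 10          # sampling interval for n = 10
start = rng.randrange(k)
systematic = population[start::k]

# Stratified sampling: simple random samples within predefined strata.
strata = {"ward_a": population[:40], "ward_b": population[40:]}
stratified = [x for group in strata.values() for x in rng.sample(group, 5)]

assert len(simple) == len(systematic) == len(stratified) == 10
```

Cluster and multi-stage sampling compose these pieces (randomly select clusters, then optionally sample within them), and the non-probability methods by definition have no random step to sketch.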
Magis, David
2014-11-01
In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
The sources of adaptive variation
Charlesworth, Deborah; Barton, Nicholas H; Charlesworth, Brian
2017-05-31
The role of natural selection in the evolution of adaptive phenotypes has undergone constant probing by evolutionary biologists, employing both theoretical and empirical approaches. As Darwin noted, natural selection can act together with other processes, including random changes in the frequencies of phenotypic differences that are not under strong selection, and changes in the environment, which may reflect evolutionary changes in the organisms themselves. As understanding of genetics developed after 1900, the new genetic discoveries were incorporated into evolutionary biology. The resulting general principles were summarized by Julian Huxley in his 1942 book Evolution: the modern synthesis. Here, we examine how recent advances in genetics, developmental biology and molecular biology, including epigenetics, relate to today's understanding of the evolution of adaptations. We illustrate how careful genetic studies have repeatedly shown that apparently puzzling results in a wide diversity of organisms involve processes that are consistent with neo-Darwinism. They do not support important roles in adaptation for processes such as directed mutation or the inheritance of acquired characters, and therefore no radical revision of our understanding of the mechanism of adaptive evolution is needed. PMID:28566483
Design Of Computer Based Test Using The Unified Modeling Language
NASA Astrophysics Data System (ADS)
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN), and the independent route (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has some weaknesses: it wastes too much paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams, and sequence diagrams. During the design of the application, it is important to protect the test questions with a password before they are shown, through encryption and decryption processes; the RSA cryptography algorithm was used for this. The questions drawn from the question banks were then randomized using the Fisher-Yates shuffle method. The network architecture used in the computer-based test application was a client-server model over a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
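The Fisher-Yates shuffle used here for randomizing the question bank works by swapping each position with a uniformly chosen position at or before it, so every permutation of the bank is equally likely. A minimal sketch; the `questions` list is illustrative:

```python
import random

def fisher_yates(items, rng=None):
    """Fisher-Yates shuffle: returns a copy of `items` in which each of
    the n! orderings is equally likely."""
    rng = rng or random.Random()
    shuffled = list(items)
    for i in range(len(shuffled) - 1, 0, -1):
        j = rng.randrange(i + 1)                        # 0 <= j <= i
        shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
    return shuffled

questions = [f"Q{n}" for n in range(1, 11)]             # illustrative bank
exam = fisher_yates(questions, random.Random(7))
assert sorted(exam) == sorted(questions)                # a permutation: nothing lost
```

Seeding per candidate (or per session) gives each test-taker an independent question order, which is the anti-leak property the design is after.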
Modulation of human extrastriate visual processing by selective attention to colours and words.
Nobre, A C; Allison, T; McCarthy, G
1998-07-01
The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.
Extrapolating Weak Selection in Evolutionary Games
Wu, Bin; García, Julián; Hauert, Christoph; Traulsen, Arne
2013-01-01
In evolutionary games, reproductive success is determined by payoffs. Weak selection means that even large differences in game outcomes translate into small fitness differences. Many results have been derived using weak selection approximations, in which perturbation analysis facilitates the derivation of analytical results. Here, we ask whether results derived under weak selection are also qualitatively valid for intermediate and strong selection. By "qualitatively valid" we mean that the ranking of strategies induced by an evolutionary process does not change when the intensity of selection increases. For two-strategy games, we show that the ranking obtained under weak selection cannot be carried over to higher selection intensity if the number of players exceeds two. For games with three (or more) strategies, previous examples for multiplayer games have shown that the ranking of strategies can change with the intensity of selection. In particular, rank changes imply that the most abundant strategy at one intensity of selection can become the least abundant for another. We show that this applies already to pairwise interactions for a broad class of evolutionary processes. Even when both weak and strong selection limits lead to consistent predictions, rank changes can occur for intermediate intensities of selection. To analyze how common such games are, we show numerically that for randomly drawn two-player games with three or more strategies, rank changes frequently occur and their likelihood increases rapidly with the number of strategies. In particular, rank changes are almost certain for games with many strategies, which jeopardizes the predictive power of results derived for weak selection. PMID:24339769
Bashapoor, Sajjad; Hosseini-Kiasari, Seyyedeh Tayebeh; Daneshvar, Somayeh; Kazemi-Taskooh, Zeinab
2015-01-01
Background: Sensory information processing and alexithymia are two important factors in determining behavioral reactions. Some studies suggest that sensory-processing sensitivity and alexithymia affect the tendency toward substance abuse. Given that, the aim of the current study was to compare the styles of sensory information processing and alexithymia between substance-dependent people and normal ones. Methods: The research method was cross-sectional, and the statistical population comprised all substance-dependent men present in substance-quitting camps of Masal, Iran, in October 2013 (n = 78). Thirty-six persons were selected from this population by simple random sampling as the study group, and 36 persons were likewise selected from the normal population as the comparison group. Both groups were evaluated using the Toronto alexithymia scale (TAS) and the adult sensory profile, and the multivariate analysis of variance (MANOVA) test was applied to analyze the data. Findings: The results showed significant differences between the two groups in low registration (P < 0.020, F = 5.66), sensation seeking (P < 0.050, F = 1.92), and sensory avoidance (P < 0.008, F = 7.52) as components of sensory processing, and in difficulty describing emotions (P < 0.001, F = 15.01) and difficulty identifying emotions (P < 0.002, F = 10.54) as components of alexithymia. However, no significant differences were found between the two groups in the components of sensory sensitivity (P < 0.170, F = 1.92) and externally oriented thinking style (P < 0.060, F = 3.60). Conclusion: These results showed that substance-dependent people process sensory information differently from normal people and show more alexithymia features. PMID:26885354
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying a filtering method to the conditional correlation matrix (such as Random Matrix Theory based filtering). As empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
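The minimum variance optimization evaluated above has a simple closed form: for an estimated covariance matrix C, the fully invested optimal weights are w = C⁻¹1 / (1ᵀC⁻¹1). A minimal sketch with a hypothetical 3-asset covariance matrix (not a GARCH estimate from the paper):

```python
def min_variance_weights(cov):
    """Minimum variance weights w = C^-1 1 / (1^T C^-1 1) for covariance cov."""
    n = len(cov)
    # Solve C x = 1 by Gauss-Jordan elimination (no external dependencies).
    aug = [row[:] + [1.0] for row in cov]
    for i in range(n):
        p = aug[i][i]
        aug[i] = [v / p for v in aug[i]]
        for j in range(n):
            if j != i:
                f = aug[j][i]
                aug[j] = [a - f * b for a, b in zip(aug[j], aug[i])]
    x = [row[-1] for row in aug]          # x = C^-1 1
    s = sum(x)
    return [v / s for v in x]             # normalize so weights sum to 1

# Synthetic covariance: asset 0 has the lowest variance.
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.09, 0.02],
       [0.00, 0.02, 0.16]]
w = min_variance_weights(cov)
```

Whether C here is the conditional or the unconditional covariance is exactly the choice the paper's simulations compare.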
DNA capture elements for rapid detection and identification of biological agents
NASA Astrophysics Data System (ADS)
Kiel, Johnathan L.; Parker, Jill E.; Holwitt, Eric A.; Vivekananda, Jeeva
2004-08-01
DNA capture elements (DCEs; aptamers) are artificial DNA sequences, selected from a random pool of sequences for their specific binding to potential biological warfare agents. These sequences were selected by an affinity method, using filters to which the target agent was attached, with the bound DNA isolated and amplified by polymerase chain reaction (PCR) in an iterative, increasingly stringent process. Reporter molecules were attached to the finished sequences. To date, we have made DCEs to Bacillus anthracis spores, Shiga toxin, Venezuelan Equine Encephalitis (VEE) virus, and Francisella tularensis. These DCEs have demonstrated specificity and sensitivity equal to or better than antibodies.
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
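The least squares solution (LSS) step for deriving endmember signatures can be illustrated as follows: if A holds the known endmember abundances of the sample pixels and R their observed spectra, the endmember signatures E minimize ||A·E − R||. The two endmember classes, band count, and noise level below are hypothetical stand-ins, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" endmember spectra (2 endmembers x 3 bands),
# e.g. an impervious-surface-like and a vegetation-like signature.
E_true = np.array([[0.8, 0.6, 0.4],
                   [0.2, 0.5, 0.9]])

# Known abundances for 50 sample pixels (rows sum to 1), as in the
# paper's training samples, plus small observation noise.
A = rng.dirichlet([1, 1], size=50)            # (50, 2) abundances
R = A @ E_true + rng.normal(0, 0.01, (50, 3))  # (50, 3) observed spectra

# LSS: recover endmember signatures from known abundances.
E_hat, *_ = np.linalg.lstsq(A, R, rcond=None)
```

The recovered E_hat can then be fed into constrained or unconstrained SMA to unmix the full image.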
ERIC Educational Resources Information Center
Ray, Amber B.; Graham, Steve; Houston, Julia D.; Harris, Karen R.
2016-01-01
A random sample of middle school teachers (grades 6-9) from across the United States was surveyed about their use of writing to support students' learning. The selection process was stratified so there were an equal number of English language arts, social studies, and science teachers. More than one-half of the teachers reported applying 15 or…
ERIC Educational Resources Information Center
Young, I. Phillip
2005-01-01
This study addresses the screening decisions for a national random sample of high school principals as viewed from the attraction-similarity theory of interpersonal perceptions. Independent variables are the sex of principals, sex of applicants, and the type of focal positions sought by hypothetical job applicants (teacher or counselor). Dependent…
ERIC Educational Resources Information Center
Abrahamson, Dor
2009-01-01
This article reports on a case study from a design-based research project that investigated how students make sense of the disciplinary tools they are taught to use, and specifically, what personal, interpersonal, and material resources support this process. The probability topic of binomial distribution was selected due to robust documentation of…
Semiconductor technology program. Progress briefs
NASA Technical Reports Server (NTRS)
Bullis, W. M. (Editor)
1979-01-01
The current status of NBS work on measurement technology for semiconductor materials, process control, and devices is reported. Results of both in-house and contract research are covered. Highlighted activities include modeling of diffusion processes, analysis of model spreading resistance data, and studies of resonance ionization spectroscopy, resistivity-dopant density relationships in p-type silicon, deep level measurements, photoresist sensitometry, random fault measurements, power MOSFET thermal characteristics, power transistor switching characteristics, and gross leak testing. New and selected on-going projects are described. Compilations of recent publications and publications in press are included.
BESIII Physics Data Storing and Processing on HBase and MapReduce
NASA Astrophysics Data System (ADS)
LEI, Xiaofeng; Li, Qiang; Kan, Bowen; Sun, Gongxing; Sun, Zhenyu
2015-12-01
In past years, we have successfully applied Hadoop to high-energy physics analysis. Although it has improved the efficiency of data analysis and reduced the cost of cluster building, there is still room for optimization: inflexible pre-selection, inefficient random data reading, and the I/O bottleneck caused by Fuse, which is used to access HDFS. In order to change this situation, this paper presents a new analysis platform for high-energy physics data storing and analysing. The data structure is changed from DST tree-like files to HBase according to the features of the data itself and the analysis processes, since HBase is more suitable for random data reading than DST files and enables HDFS to be accessed directly. A few optimization measures are taken to obtain good performance. A customized protocol is defined for data serializing and deserializing to decrease the storage space used in HBase. To make full use of the locality of data storage in HBase, a new MapReduce model and a new split policy for HBase regions are proposed. In addition, a dynamic, pluggable, easy-to-use TAG (event metadata) based pre-selection subsystem is established. It can help physicists filter out 99.9% of uninteresting data if the conditions are set properly. This means that a lot of I/O resources can be saved, CPU usage can be improved and the time consumed by data analysis can be reduced. Finally, several use cases are designed; the test results show that the new platform performs excellently, running 3.4 times faster with pre-selection and 20% faster without pre-selection, and that the new platform is stable and scalable as well.
The variability of software scoring of the CDMAM phantom associated with a limited number of images
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying J.; Van Metter, Richard
2007-03-01
Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
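The random selection procedure described above amounts to repeated subsampling: draw 8 of the 36 acquired images, compute that trial's score, and repeat 3000 times to estimate the measurement variability. A sketch with synthetic per-image scores standing in for CDCOM-derived threshold thickness values:

```python
import random
import statistics

rng = random.Random(1)

# Synthetic stand-ins for a per-image score (e.g. a TVT-like value in mm)
# from 36 images acquired under identical conditions.
image_scores = [rng.gauss(0.10, 0.02) for _ in range(36)]

trial_means = []
for _ in range(3000):
    subset = rng.sample(image_scores, 8)       # one 8-image measurement trial
    trial_means.append(statistics.mean(subset))

# Spread across trials estimates the variability of the 8-image protocol.
spread = statistics.stdev(trial_means)
```

The spread across resampled trials is the quantity of interest: it shows how much an 8-image quality control measurement can vary even when the underlying system is unchanged.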
Improving the performance of minimizers and winnowing schemes
Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl
2017-01-01
Abstract Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worse behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
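For readers unfamiliar with the scheme being analyzed: a (w, k)-minimizers scheme selects, from every window of w consecutive k-mers, the smallest k-mer under some ordering. A minimal sketch, defaulting to the lexicographic ordering the paper argues against; passing e.g. order=hash gives a randomized ordering instead:

```python
def minimizers(seq, k, w, order=None):
    """Return the set of (position, k-mer) minimizers of seq.

    order maps a k-mer to a sortable key; None means lexicographic.
    """
    order = order or (lambda kmer: kmer)
    kmers = [(i, seq[i:i + k]) for i in range(len(seq) - k + 1)]
    selected = set()
    # Slide a window of w consecutive k-mers; keep each window's minimum.
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        selected.add(min(window, key=lambda t: order(t[1])))
    return selected

mins = minimizers("ACGTACGTGACG", k=3, w=4)
```

The number of selected k-mers relative to the sequence length is the "density" the paper analyzes; orderings based on universal hitting sets reduce it below what the lexicographic default achieves.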
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most micromputers or mainframe computers.
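A linear congruential generator of the kind the RANDOM program tests computes x_{n+1} = (a·x_n + c) mod m. The sketch below uses the well-known Park-Miller "minimal standard" parameters purely for illustration; the parameters actually selected with RANCYCLE and ARITH may differ, and the original programs are FORTRAN, not Python:

```python
def make_lcg(seed, a=16807, c=0, m=2**31 - 1):
    """Return a step function for the LCG x_{n+1} = (a*x_n + c) mod m."""
    state = seed
    def step():
        nonlocal state
        state = (a * state + c) % m
        return state
    return step

rng = make_lcg(seed=1)
# Normalizing by m yields approximately uniform(0, 1) samples.
u = [rng() / (2**31 - 1) for _ in range(5)]
```

Parameter selection matters because poor choices of a, c, and m produce short cycles and correlated outputs, which is exactly what the testing programs described above are designed to detect.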
Physically Unclonable Cryptographic Primitives by Chemical Vapor Deposition of Layered MoS2.
Alharbi, Abdullah; Armstrong, Darren; Alharbi, Somayah; Shahrjerdi, Davood
2017-12-26
Physically unclonable cryptographic primitives are promising for securing the rapidly growing number of electronic devices. Here, we introduce physically unclonable primitives from layered molybdenum disulfide (MoS2) by leveraging the natural randomness of their island growth during chemical vapor deposition (CVD). We synthesize a MoS2 monolayer film covered with speckles of multilayer islands, where the growth process is engineered for an optimal speckle density. Using the Clark-Evans test, we confirm that the distribution of islands on the film exhibits complete spatial randomness, hence indicating the growth of multilayer speckles is a spatial Poisson process. Such a property is highly desirable for constructing unpredictable cryptographic primitives. The security primitive is an array of 2048 pixels fabricated from this film. The complex structure of the pixels makes the physical duplication of the array impossible (i.e., physically unclonable). A unique optical response is generated by applying an optical stimulus to the structure. The basis for this unique response is the dependence of the photoemission on the number of MoS2 layers, which by design is random throughout the film. Using a threshold value for the photoemission, we convert the optical response into binary cryptographic keys. We show that the proper selection of this threshold is crucial for maximizing combination randomness and that the optimal value of the threshold is linked directly to the growth process. This study reveals an opportunity for generating robust and versatile security primitives from layered transition metal dichalcogenides.
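The thresholding step described above can be sketched as follows. The readings are synthetic stand-ins for the 2048 pixel photoemission values, and the median is used here as one plausible balance-maximizing threshold; the paper itself links the optimal threshold to the growth process:

```python
import random

def to_binary_key(readings, threshold):
    """Convert analog per-pixel readings into a binary key by thresholding."""
    return [1 if r > threshold else 0 for r in readings]

rng = random.Random(7)
# Synthetic stand-in for the photoemission of a 2048-pixel array.
readings = [rng.random() for _ in range(2048)]

# Median threshold balances ones and zeros, which maximizes the
# entropy of the resulting key (one plausible choice, not the paper's).
threshold = sorted(readings)[len(readings) // 2]
key = to_binary_key(readings, threshold)
```

A badly placed threshold would make the key mostly zeros or mostly ones, collapsing the number of likely keys; a balanced split keeps the keyspace large.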
Hansen, Adam G.; Beauchamp, David A.
2014-01-01
Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounting for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar-sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.
Group Counseling With Emotionally Disturbed School Children in Taiwan.
ERIC Educational Resources Information Center
Chiu, Peter
The application of group counseling to emotionally disturbed school children in Chinese culture was examined. Two junior high schools located in Tao-Yuan Province were randomly selected with two eighth-grade classes randomly selected from each school. Ten emotionally disturbed students were chosen from each class and randomly assigned to two…
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial
ERIC Educational Resources Information Center
Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean
2017-01-01
Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…
Dynamics of Tree Species Diversity in Unlogged and Selectively Logged Malaysian Forests.
Shima, Ken; Yamada, Toshihiro; Okuda, Toshinori; Fletcher, Christine; Kassim, Abdul Rahman
2018-01-18
Selective logging, commonly conducted in tropical forests, may change tree species diversity. In rarely disturbed tropical forests, locally rare species exhibit higher survival rates. If this non-random process occurs in a logged forest, the forest will rapidly recover its tree species diversity. Here we determined whether a forest in the Pasoh Forest Reserve, Malaysia, which was selectively logged 40 years ago, has recovered its original species diversity (species richness and composition). To explore this, we compared the dynamics of species diversity between an unlogged forest plot (18.6 ha) and a logged forest plot (5.4 ha). We found that 40 years are not sufficient to recover species diversity after logging. Unlike in unlogged forests, tree deaths and recruitments did not contribute to increased diversity in the selectively logged forest. Our results predict that selectively logged forests require longer than our 40-year observation period to regain their diversity.
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
NASA Astrophysics Data System (ADS)
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
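As a concrete instance of one of the processes listed above, a simple symmetric random walk can be simulated and checked against its textbook variance growth, Var[X_n] = n for unit ±1 steps:

```python
import random

def random_walk(n_steps, rng):
    """Simulate a simple symmetric random walk of n_steps unit +/-1 steps."""
    x, path = 0, [0]
    for _ in range(n_steps):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

rng = random.Random(42)
# Endpoints of 2000 independent 100-step walks.
walks = [random_walk(100, rng)[-1] for _ in range(2000)]

# E[X_n] = 0 and Var[X_n] = n, so the mean squared endpoint should be near 100.
mean_sq = sum(v * v for v in walks) / len(walks)
```

The same simulate-and-check pattern extends directly to the other processes mentioned (Poisson point process, random telegraph signal, birth-death chains), each with its own spectral or moment signature.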
Extraction of benzene and cyclohexane using [BMIM][N(CN)2] and their equilibrium modeling
NASA Astrophysics Data System (ADS)
Ismail, Marhaina; Bustam, M. Azmi; Man, Zakaria
2017-12-01
The separation of aromatic compounds from aliphatic mixtures is an essential industrial process for economical, green processing. In order to determine the separation efficiency of an ionic liquid (IL) as a solvent, the ternary liquid-liquid extraction (LLE) diagram of 1-butyl-3-methylimidazolium dicyanamide [BMIM][N(CN)2] with benzene and cyclohexane was studied at T = 298.15 K and atmospheric pressure. The solute distribution coefficient and solvent selectivity derived from the equilibrium data were used to evaluate whether the selected ionic liquid can be considered a potential solvent for the separation of benzene from cyclohexane. The experimental tie-line data were correlated using the non-random two-liquid (NRTL) model and the Margules model. It was found that the solute distribution coefficient is in the range 0.4430-0.0776 and the selectivity of [BMIM][N(CN)2] for benzene is in the range 53.6-13.9. The ternary diagram showed that the selected IL can perform the separation of benzene and cyclohexane, as it has extractive capacity and selectivity. Therefore, [BMIM][N(CN)2] can be considered a potential extracting solvent for the LLE of benzene and cyclohexane.
Meirelles, S L C; Mokry, F B; Espasandín, A C; Dias, M A D; Baena, M M; de A Regitano, L C
2016-06-10
Genetic parameters and correlations for backfat thickness (BFT), rib eye area (REA), and body weight (BW) were estimated for Canchim beef cattle raised on natural pastures in Brazil. Data from 1648 animals were analyzed using multi-trait (BFT, REA, and BW) animal models under the Bayesian approach. The model included contemporary group, age, and individual heterozygosity as covariates, as well as direct additive genetic and random residual effects. Heritabilities estimated for BFT (0.16), REA (0.50), and BW (0.44) indicated potential for genetic improvement and response to selection. Furthermore, genetic correlations between BW and the remaining traits were high (P > 0.50), suggesting that selection for BW could improve REA and BFT. On the other hand, the genetic correlation between BFT and REA was low (P = 0.39 ± 0.17) and included considerable variation, suggesting that these traits can be jointly included as selection criteria without influencing each other. We found that REA and BFT, as measured by ultrasound, responded to selection. Therefore, selection for yearling weight results in changes in REA and BFT.
Gill, C O; Moza, L F; Badoni, M; Barbut, S
2006-07-15
The log mean numbers of aerobes, coliforms, Escherichia coli and presumptive staphylococci plus listerias on chicken carcasses and carcass portions at various stages of processing at a poultry packing plant were estimated from the numbers of those bacteria recovered from groups of 25 randomly selected product units. The fractions of listerias in the presumptive staphylococci plus listerias groups of organisms were also estimated. Samples were obtained from carcasses by excising a strip of skin measuring approximately 5 x 2 cm(2) from a randomly selected site on each selected carcass, or by rinsing each selected carcass portion. The log mean numbers of aerobes, coliforms, E. coli and presumptive staphylococci plus listerias on carcasses after scalding at 58 degrees C and plucking were about 4.4, 2.5, 2.2 and 1.4 log cfu/cm(2), respectively. The numbers of bacteria on eviscerated carcasses were similar. After the series of operations for removing the crop, lungs, kidneys and neck, the numbers of aerobes were about 1 log unit less than on eviscerated carcasses, but the numbers of the other bacteria were not substantially reduced. After cooling in water, the numbers of coliforms and E. coli were about 1 log unit less and the numbers of presumptive staphylococci plus listerias were about 0.5 log unit less than the numbers on dressed carcasses, but the numbers of aerobes were not reduced. The numbers of aerobes were 1 log unit more on boneless breasts, and 0.5 log units more on skin-on thighs and breasts that had been tumbled with brine than on cooled carcasses; and presumptive staphylococci plus listerias were 0.5 log unit more on thighs than on cooled carcasses. Otherwise the numbers of bacteria on the product were not substantially affected by processing. 
Listerias were <20% of the presumptive staphylococci plus listerias group of organisms recovered from product at each point in the process except after breasts were tumbled with brine, when >40% of the organisms were listerias.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data on complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and exclude the vast number of non-informative SNPs. However, such a search is too time-consuming to be practical for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. The advantage of this stratified sampling procedure is that it ensures each subspace contains enough useful SNPs, avoids the very high computational cost of an exhaustive search for an optimal mtry, and maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate a better random forest with higher accuracy and lower error bound than Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
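A minimal sketch of the stratified subspace idea, with hypothetical informativeness scores (the paper's actual SNP scoring and tree construction are not reproduced here): bin features into equal-width groups by score, then draw the same number of features from each group for every tree's subspace.

```python
import random

def stratified_subspace(scores, n_groups, per_group, seed=0):
    """scores: {snp_id: informativeness}. Return one feature subspace.

    Equal-width binning by score, then an equal-size random draw per bin.
    """
    rng = random.Random(seed)
    lo, hi = min(scores.values()), max(scores.values())
    width = (hi - lo) / n_groups or 1.0
    groups = [[] for _ in range(n_groups)]
    for snp, s in scores.items():
        idx = min(int((s - lo) / width), n_groups - 1)  # equal-width bin
        groups[idx].append(snp)
    subspace = []
    for g in groups:
        subspace.extend(rng.sample(g, min(per_group, len(g))))
    return subspace

# Hypothetical informativeness scores for 100 SNPs.
scores = {f"snp{i}": i / 100 for i in range(100)}
sub = stratified_subspace(scores, n_groups=5, per_group=2)
```

Because every bin contributes equally, even a subspace much smaller than the full SNP set is guaranteed to contain some high-informativeness features, which is the property plain random subspace sampling lacks.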
NASA Astrophysics Data System (ADS)
Giona, Massimiliano; Brasiello, Antonio; Crescitelli, Silvestro
2016-04-01
We introduce a new class of stochastic processes in
Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing
NASA Astrophysics Data System (ADS)
Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel
Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show how, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
Instrument Selection for Randomized Controlled Trials Why This and Not That?
Records, Kathie; Keller, Colleen; Ainsworth, Barbara; Permana, Paska
2011-01-01
A fundamental linchpin for obtaining rigorous findings in quantitative research involves the selection of survey instruments. Psychometric recommendations are available for the processes for scale development and testing and guidance for selection of established scales. These processes are necessary to address the validity link between the phenomena under investigation, the empirical measures and, ultimately, the theoretical ties between these and the world views of the participants. Detailed information is most often provided about study design and protocols, but far less frequently is a detailed theoretical explanation provided for why specific instruments are chosen. Guidance to inform choices is often difficult to find when scales are needed for specific cultural, ethnic, or racial groups. This paper details the rationale underlying instrument selection for measurement of the major processes (intervention, mediator and moderator variables, outcome variables) in an ongoing study of postpartum Latinas, Madres para la Salud [Mothers for Health]. The rationale underpinning our choices includes a discussion of alternatives, when appropriate. These exemplars may provide direction for other intervention researchers who are working with specific cultural, racial, or ethnic groups or for other investigators who are seeking to select the ‘best’ instrument. Thoughtful consideration of measurement and articulation of the rationale underlying our choices facilitates the maintenance of rigor within the study design and improves our ability to assess study outcomes. PMID:21986392
Method and apparatus for signal processing in a sensor system for use in spectroscopy
O'Connor, Paul [Bellport, NY; DeGeronimo, Gianluigi [Nesconset, NY; Grosholz, Joseph [Natrona Heights, PA
2008-05-27
A method for processing pulses arriving randomly in time on at least one channel using multiple peak detectors includes asynchronously selecting a non-busy peak detector (PD) in response to a pulse-generated trigger signal, connecting the channel to the selected PD in response to the trigger signal, and detecting a pulse peak amplitude. Amplitude and time of arrival data are output in first-in first-out (FIFO) sequence. An apparatus includes trigger comparators to generate the trigger signal for the pulse-receiving channel, PDs, a switch for connecting the channel to the selected PD, and logic circuitry which maintains the write pointer. Also included are time-to-amplitude converters (TACs), which convert time of arrival to an analog voltage, and an analog multiplexer that provides FIFO output. A multi-element sensor system for spectroscopy includes detector elements, channels, trigger comparators, PDs, a switch, and a logic circuit with an asynchronous write pointer. The system includes TACs, a multiplexer and an analog-to-digital converter.
Reiner, Bruce I
2017-12-01
In conventional radiology peer review practice, a small number of exams (routinely 5% of the total volume) is randomly selected, which may significantly underestimate the true error rate within a given radiology practice. An alternative and preferable approach would be to create a data-driven model which mathematically quantifies a peer review risk score for each individual exam and uses this data to identify high risk exams and readers, and selectively target these exams for peer review. An analogous model can also be created to assist in the assignment of these peer review cases in keeping with specific priorities of the service provider. An additional option to enhance the peer review process would be to assign the peer review cases in a truly blinded fashion. In addition to eliminating traditional peer review bias, this approach has the potential to better define exam-specific standard of care, particularly when multiple readers participate in the peer review process.
Li, Jianghong; Valente, Thomas W; Shin, Hee-Sung; Weeks, Margaret; Zelenev, Alexei; Moothi, Gayatri; Mosher, Heather; Heimer, Robert; Robles, Eduardo; Palmer, Greg; Obidoa, Chinekwu
2017-06-28
Intensive sociometric network data were collected from a typical respondent driven sample (RDS) of 528 people who inject drugs residing in Hartford, Connecticut in 2012-2013. This rich dataset enabled us to analyze a large number of unobserved network nodes and ties for the purpose of assessing common assumptions underlying RDS estimators. Results show that several assumptions central to RDS estimators, such as random selection, enrollment probability proportional to degree, and recruitment occurring over recruiter's network ties, were violated. These problems stem from an overly simplistic conceptualization of peer recruitment processes and dynamics. We found nearly half of participants were recruited via coupon redistribution on the street. Non-uniform patterns occurred in multiple recruitment stages related to both recruiter behavior (choosing and reaching alters, passing coupons, etc.) and recruit behavior (accepting/rejecting coupons, failing to enter study, passing coupons to others). Some factors associated with these patterns were also associated with HIV risk.
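For context, the degree-proportional-enrollment assumption examined above is what estimators such as RDS-II rely on; a minimal sketch of that estimator, with hypothetical data, is:

```python
def rds_ii_estimate(sample):
    """RDS-II (Volz-Heckathorn) proportion estimator sketch: each
    respondent is an (has_trait, degree) pair, and observations are
    weighted by inverse reported network degree. The estimate is only
    unbiased under the recruitment assumptions this study found violated."""
    weights = [1.0 / degree for _, degree in sample]
    hits = sum(w for (trait, _), w in zip(sample, weights) if trait)
    return hits / sum(weights)

# hypothetical sample: (risk trait present?, reported network degree)
estimate = rds_ii_estimate([(True, 2), (False, 4), (True, 4), (False, 10)])
```

The inverse-degree weighting corrects for high-degree members being reached more often; when recruitment is not proportional to degree, as reported here, the correction itself becomes a source of bias.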
Tucker, Jalie A.; Reed, Geoffrey M.
2008-01-01
This paper examines the utility of evidentiary pluralism, a research strategy that selects methods in service of content questions, in the context of rehabilitation psychology. Hierarchical views that favor randomized controlled clinical trials (RCTs) over other evidence are discussed, and RCTs are considered as they intersect with issues in the field. RCTs are vital for establishing treatment efficacy, but whether they are uniformly the best evidence to inform practice is critically evaluated. We argue that because treatment is only one of several variables that influence functioning, disability, and participation over time, an expanded set of conceptual and data analytic approaches should be selected in an informed way to support an expanded research agenda that investigates therapeutic and extra-therapeutic influences on rehabilitation processes and outcomes. The benefits of evidentiary pluralism are considered, including helping close the gap between the narrower clinical rehabilitation model and a public health disability model. KEY WORDS: evidence-based practice, evidentiary pluralism, rehabilitation psychology, randomized controlled trials PMID:19649150
Xu, Xiaoyi; Li, Ao; Wang, Minghui
2015-08-01
Phosphorylation is a crucial post-translational modification, which regulates almost all cellular processes in life. It has long been recognised that protein phosphorylation is closely related to disease, and much research has therefore been undertaken to predict phosphorylation sites for disease treatment and drug design. However, despite the success achieved by these approaches, no method focuses on predicting disease-associated phosphorylation sites. Herein, for the first time, the authors propose a novel approach that is specially designed to identify associations between phosphorylation sites and human diseases. To take full advantage of local sequence information, a combined feature selection method-based support vector machine (CFS-SVM) is developed that incorporates a minimum-redundancy-maximum-relevance filtering process and a forward feature selection process. Performance evaluation shows that CFS-SVM is significantly better than widely used classifiers, including Bayesian decision theory, k-nearest-neighbour and random forest classifiers. With an extremely high specificity of 99%, CFS-SVM still achieves high sensitivity. Besides, tests on extra data confirm the effectiveness and general applicability of the CFS-SVM approach to a variety of diseases. Finally, the analysis of selected features and corresponding kinases also helps the understanding of the potential mechanisms of disease-phosphorylation relationships and guides further experimental validation.
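The forward-selection half of such a pipeline can be sketched generically; the toy scoring function below merely stands in for cross-validated SVM accuracy, and all names are illustrative:

```python
def forward_select(features, score, k):
    """Greedy forward feature selection sketch: repeatedly add the
    feature that most improves the score of the current subset
    (here `score` stands in for cross-validated classifier accuracy)."""
    chosen = []
    for _ in range(k):
        best = max((f for f in features if f not in chosen),
                   key=lambda f: score(chosen + [f]))
        chosen.append(best)
    return chosen

# toy score: reward overlap with two truly informative features,
# with a small penalty per feature to discourage bloat
informative = {"f2", "f5"}
score = lambda subset: len(set(subset) & informative) - 0.01 * len(subset)
picked = forward_select(["f%d" % i for i in range(8)], score, 2)
```

In the paper's design an mRMR filter first shrinks the candidate pool, so this greedy loop only runs over features that are already individually relevant and mutually non-redundant.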
Gill, C O; McGinnis, J C; Bryant, J
1998-07-21
The microbiological effects on the product of the series of operations for skinning the hindquarters of beef carcasses at three packing plants were assessed. Samples were obtained at each plant from randomly selected carcasses, by swabbing specified sites related to opening cuts, rump skinning or flank skinning operations, randomly selected sites along the lines of the opening cuts, or randomly selected sites on the skinned hindquarters of carcasses. A set of 25 samples of each type was collected at each plant, with the collection of a single sample from each selected carcass. Aerobic counts, coliforms and Escherichia coli were enumerated in each sample, and a log mean value was estimated for each set of 25 counts on the assumption of a log normal distribution of the counts. The data indicated that the hindquarters skinning operations at plant A were hygienically inferior to those at the other two plants, with mean numbers of coliforms and E. coli being about two orders of magnitude greater, and aerobic counts being an order of magnitude greater, on the skinned hindquarters of carcasses from plant A than on those from plants B or C. The data further indicated that the operation for cutting open the skin at plant C was hygienically superior to the equivalent operation at plant B, but that the operations for skinning the rump and flank at plant B were hygienically superior to the equivalent operations at plant C. The findings suggest that objective assessment of the microbiological effects of beef carcass dressing processes will be required to ensure that Hazard Analysis Critical Control Point (HACCP) and Quality Management Systems are operated to control the microbiological condition of carcasses.
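The "log mean value" under the log-normal assumption can be computed as below; whether this paper used exactly this correction term is an assumption, but it is the standard estimator for the log of the arithmetic mean of log-normally distributed counts:

```python
import math
import statistics

def log_mean_count(counts):
    """Log (base 10) of the arithmetic mean of log-normally distributed
    counts: m + (ln 10 / 2) * s^2, where m and s^2 are the mean and
    variance of the log10-transformed counts."""
    logs = [math.log10(c) for c in counts]
    return statistics.mean(logs) + (math.log(10) / 2) * statistics.pvariance(logs)

lm = log_mean_count([100] * 25)   # identical counts: no variance term, so 2.0
```

The variance term matters in practice: averaging raw log counts alone systematically underestimates the arithmetic-mean contamination level when counts vary over orders of magnitude.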
Takahashi, Mayumi; Wu, Xiwei; Ho, Michelle; Chomchan, Pritsana; Rossi, John J; Burnett, John C; Zhou, Jiehua
2016-09-22
The systemic evolution of ligands by exponential enrichment (SELEX) technique is a powerful and effective aptamer-selection procedure. However, modifications to the process can dramatically improve selection efficiency and aptamer performance. For example, droplet digital PCR (ddPCR) has been recently incorporated into SELEX selection protocols to putatively reduce the propagation of byproducts and avoid selection bias that result from differences in PCR efficiency of sequences within the random library. However, a detailed, parallel comparison of the efficacy of conventional solution PCR versus the ddPCR modification in the RNA aptamer-selection process is needed to understand effects on overall SELEX performance. In the present study, we took advantage of powerful high throughput sequencing technology and bioinformatics analysis coupled with SELEX (HT-SELEX) to thoroughly investigate the effects of initial library and PCR methods in the RNA aptamer identification. Our analysis revealed that distinct "biased sequences" and nucleotide composition existed in the initial, unselected libraries purchased from two different manufacturers and that the fate of the "biased sequences" was target-dependent during selection. Our comparison of solution PCR- and ddPCR-driven HT-SELEX demonstrated that PCR method affected not only the nucleotide composition of the enriched sequences, but also the overall SELEX efficiency and aptamer efficacy.
Mixing rates and limit theorems for random intermittent maps
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Bose, Christopher
2016-04-01
We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps {T_α} using the full parameter range 0 < α < ∞, in general. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.
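A random composition of such maps can be sketched as follows, using the standard Liverani-Saussol-Vaienti form of the Pomeau-Manneville map with its neutral fixed point at 0 (the paper's exact map family and randomizing process may differ):

```python
import random

def pm_map(x, alpha):
    """Pomeau-Manneville-type intermittent map on [0, 1]
    (Liverani-Saussol-Vaienti form): a neutral fixed point at 0
    slows mixing more and more as alpha grows."""
    if x < 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)
    return 2.0 * x - 1.0

def random_orbit(x0, alphas, n, seed=3):
    """Quenched orbit: at each step one map T_alpha is drawn at random
    and applied, as in a random (i.i.d.) composition of the family."""
    rng = random.Random(seed)
    orbit = [x0]
    for _ in range(n):
        orbit.append(pm_map(orbit[-1], rng.choice(alphas)))
    return orbit
```

Mixing α values below and above 1 in `alphas` gives a toy version of the regime the paper studies, where the fastest-mixing (smallest-α) constituent map governs the asymptotics.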
2018-01-12
sequential representations, a method is required for determining which to use for the application at hand and, once a representation is selected, for... DISTRIBUTION UNLIMITED. Methods, Assumptions, and Procedures. 3.1 Background. 3.1.1 CRMs and truncation. Consider a Poisson point process on R+ := [0... the heart of the study of truncated CRMs. They provide an iterative method that can be terminated at any point to yield a finite approximation to the
Robert H. McAlister; Alexander Clark; Joseph R. Saucier
1997-01-01
The effect of rotation age on strength and stiffness of lumber produced from unthinned loblolly pine stands in the Coastal Plain of Georgia was examined. Six stands representing 22-, 28-, and 40-year-old rotations were sampled. A stratified random sample of trees 8 to 16 inches in diameter at breast height was selected from each stand and processed into lumber....
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters of a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
1986 Proteus Survey: Technical Manual and Codebook
1992-06-01
Officer Candidate School and Direct Commission) and by gender. Female officers were oversampled (30% in the sample versus approximately 16% in the... analyze the effects of this change in policy both on the individual cadets and on the Academy and to study the process of coeducation over four years... Candidate School (OCS), and Direct Commissioning (DC). Approximately 1,000 officers were randomly selected from each commissioning year group 1980-1984 from
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating the hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they must not be periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and that therefore the time-invariant assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariant assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA performs adequately and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; both mechanisms should therefore be considered when time-invariance is assumed. The significance of these findings is discussed. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of five steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically only a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated approach over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC generally selected the best model. We believe that the approach we propose may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
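The AIC-based screening step can be sketched as follows; the two dissolution models, the grid search and the synthetic data are illustrative choices, not the paper's program:

```python
import math

def aic(rss, n, k):
    """AIC for a least-squares fit with constants dropped:
    n * ln(RSS/n) + 2k, where k counts fitted parameters."""
    return n * math.log(rss / n) + 2 * k

t = [1, 2, 4, 8]                                   # sampling times (h)
frac = [1 - math.exp(-0.305 * ti) for ti in t]     # synthetic dissolution data

grid = [i / 100 for i in range(1, 101)]            # candidate rate constants

# first-order release F(t) = 1 - exp(-k t) vs zero-order F(t) = k t
rss_first = min(sum((f - (1 - math.exp(-k * ti))) ** 2
                    for ti, f in zip(t, frac)) for k in grid)
rss_zero = min(sum((f - k * ti) ** 2 for ti, f in zip(t, frac)) for k in grid)

better = ("first-order" if aic(rss_first, len(t), 1) < aic(rss_zero, len(t), 1)
          else "zero-order")
```

Wrapping this comparison in a Monte Carlo loop that perturbs `frac` would mimic the paper's screening of models against random variation from the original formulation.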
Seeing People, Seeing Things: Individual Differences in Selective Attention.
McIntyre, Miranda M; Graziano, William G
2016-09-01
Individuals differ in how they deploy attention to their physical and social environments. These differences have been recognized in various forms as orientations, interests, and preferences, but empirical work examining these differences at a cognitive level is scarce. To address this gap, we conducted two studies to explore the links among attentional processes and interests in people and things. The first study measured selective visual attention toward person- and thing-related image content. In the second study, participants were randomly assigned to describe visually presented scenes using either an observational or narrative story format. Linguistic analyses were conducted to assess attentional bias toward interest-congruent content. Outcomes from both studies suggest that attention and motivational processes are linked to differential interests in physical and social environments. © 2016 by the Society for Personality and Social Psychology, Inc.
The quality of care in occupational therapy: an assessment of selected Michigan hospitals.
Kirchman, M M
1979-07-01
In this study, a methodology was developed and tested for assessing the quality of care in occupational therapy between educational and noneducational clinical settings, as measured by process and outcome. An instrument was constructed for an external audit of the hospital record. Standards drafted by the investigator were established as normative by a panel of experts for use in judging the programs. Hospital records of 84 patients with residual hemiparesis or hemiplegia in three noneducational settings and of 100 patients with similar diagnoses in two educational clinical settings from selected Michigan facilities were chosen by proportionate stratified random sampling. The process study showed that occupational therapy was of significantly higher quality in the educational settings. The outcome study did not show significant differences between types of settings. Implications for education and practice are discussed.
Effects of Selected Meditative Asanas on Kinaesthetic Perception and Speed of Movement
ERIC Educational Resources Information Center
Singh, Kanwaljeet; Bal, Baljinder S.; Deol, Nishan S.
2009-01-01
Study aim: To assess the effects of selected meditative "asanas" on kinesthetic perception and movement speed. Material and methods: Thirty randomly selected male students aged 18-24 years volunteered to participate in the study. They were randomly assigned to two groups: A (meditative) and B (control). The Nelson's movement speed and…
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Kronberg, J.W.
1993-04-20
An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
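In software terms, the patent's scheme amounts to stopping a fast, free-running modulo-N counter after an environmentally jittered interval; a hedged sketch, where a pseudorandom stop time stands in for the analog timing variation:

```python
import random

def select_at_random(n_items, rng):
    """Software analogue of the counter apparatus: a counter cycles
    0..N-1 very quickly, and an item is selected iff the counter
    happens to stop at zero, giving a 1/N selection chance per trial."""
    stop_count = rng.randrange(1_000_000)  # counter value when deactivated
    return stop_count % n_items == 0

rng = random.Random(7)
trials = [select_at_random(5, rng) for _ in range(5000)]
rate = sum(trials) / len(trials)   # close to 1/5
```

The hardware version gets its entropy from temperature-sensitive component timing and from the variable interval during which the beam or pushbutton holds the counter active, rather than from a pseudorandom generator.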
Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?
Adkison, Milo D.
1995-01-01
Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.
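The genetic-drift alternative can be illustrated with a minimal neutral Wright-Fisher simulation; the population size and run length below are arbitrary illustrative choices:

```python
import random

def wright_fisher(p0, pop_size, generations, rng):
    """Neutral Wright-Fisher drift sketch: each generation the allele
    frequency is resampled binomially from the previous one (no
    selection, no migration), so isolated demes differentiate at random."""
    p = p0
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
    return p

rng = random.Random(42)
# replicate demes starting at the same frequency drift apart over time
finals = [wright_fisher(0.5, 50, 200, rng) for _ in range(10)]
```

Adding a per-generation migration term to this loop is the usual way to explore the paper's point that drift can maintain random differentiation even with many strays per generation when alleles are neutral.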
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of the scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weight of all single-edges within it and the strength of a vertex as the sum of the weights of the multiple-edges attached to it. The network evolves according to a vertex-strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constant. We show analytically and numerically that a network forms a steady scale-free distribution under our model. The results show that a weighted non-growth random network can evolve into a scale-free state. Interestingly, the network also acquires an exponential edge-weight distribution, so that a scale-free distribution and an exponential distribution coexist.
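A hedged sketch of the conserved, strength-preferential dynamics follows, simplified to track vertex strengths rather than individual multiple-edges:

```python
import random

def evolve_strengths(strengths, steps, rng):
    """Non-growth weighted evolution sketch: each step moves one unit of
    edge weight from a uniformly chosen non-empty vertex to a vertex
    chosen with probability proportional to its strength, so the totals
    of vertices and of weight units stay constant."""
    for _ in range(steps):
        donor = rng.choice([i for i, s in enumerate(strengths) if s > 0])
        # strength-preferential (roulette-wheel) choice of the receiver
        r = rng.uniform(0, sum(strengths))
        acc = 0.0
        for receiver, s in enumerate(strengths):
            acc += s
            if acc >= r:
                break
        strengths[donor] -= 1
        strengths[receiver] += 1
    return strengths

rng = random.Random(5)
final = evolve_strengths([5] * 20, 500, rng)   # total strength stays 100
```

The preferential receiver choice is the rich-get-richer ingredient; conservation of both totals is what distinguishes this from growth-based preferential attachment.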
Pre-Enrollment Reimbursement Patterns of Medicare Beneficiaries Enrolled in “At-Risk” HMOs
Eggers, Paul W.; Prihoda, Ronald
1982-01-01
The Health Care Financing Administration (HCFA) has initiated several demonstration projects to encourage HMOs to participate in the Medicare program under a risk mechanism. These demonstrations are designed to test innovative marketing techniques, benefit packages, and reimbursement levels. HCFA's current method for prospective payments to HMOs is based on the Adjusted Average Per Capita Cost (AAPCC). An important issue in prospective reimbursement is the extent to which the AAPCC adequately reflects the risk factors which arise out of the selection process of Medicare beneficiaries into HMOs. This study examines the pre-enrollment reimbursement experience of Medicare beneficiaries who enrolled in the demonstration HMOs to determine whether or not a non-random selection process took place. The three demonstration HMOs included in the study are the Fallon Community Health Plan, the Greater Marshfield Community Health Plan, and the Kaiser-Permanente medical program of Portland, Oregon. The study includes 18,085 aged Medicare beneficiaries who had enrolled in the three plans as of April 1981. We included comparison groups consisting of a 5 percent random sample of aged Medicare beneficiaries (N = 11,240) living in the same geographic areas as the enrollees. The study compares the groups by total Medicare reimbursements for the years 1976 through 1979. Adjustments were made for AAPCC factor differences in the groups (age, sex, institutional status, and welfare status). In two of the HMO areas there was evidence of a selection process among the HMOs' enrollees. Enrollees in the Fallon and Kaiser health plans were found to have had 20 percent lower Medicare reimbursements than their respective comparison groups in the four years prior to enrollment. This effect was strongest for inpatient services, but a significant difference also existed for use of physician and outpatient services.
In the Marshfield HMO there was no statistically significant difference in pre-enrollment Medicare total reimbursements between the enrollee and comparison groups. However, outpatient and physician reimbursements were significantly higher (22 percent) among the enrollee group. The results of this study suggest that the AAPCC may not be an adequate mechanism for setting prospective reimbursement rates. The Marshfield results further suggest that the type of HMO may have an influence on the selection process among Medicare beneficiaries. If Medicare beneficiaries do not have to change providers to join an HMO, as in an IPA model or a staff model which includes most of the providers in an area, the selection process may be more likely to result in an unbiased risk group. PMID:10309720
NASA Astrophysics Data System (ADS)
Mailfert, Julien; Van de Kerkhove, Jeroen; De Bisschop, Peter; De Meyer, Kristin
2014-03-01
A Metal1-layer (M1) patterning study is conducted on 20nm node (N20) for random-logic applications. We quantified the printability performance on our test vehicle for N20, corresponding to Poly/M1 pitches of 90/64nm, and with a selected minimum M1 gap size of 70nm. The Metal1 layer is patterned with 193nm immersion lithography (193i) using Negative Tone Developer (NTD) resist, and a double-patterning Litho-Etch-Litho-Etch (LELE) process. Our study is based on Logic test blocks that we OPCed with a combination of calibrated models for litho and for etch. We report the Overlapping Process Window (OPW), based on a selection of test structures measured after-etch. We find that most of the OPW limiting structures are EOL (End-of-Line) configurations. Further analysis of these individual OPW limiters will reveal that they belong to different types, such as Resist 3D (R3D) and Mask 3D (M3D) sensitive structures, limiters related to OPC (Optical Proximity Corrections) options such as assist placement, or the choice of CD metrics and tolerances for calculation of the process windows itself. To guide this investigation, we will consider a 'reference OPC' case to be compared with other solutions. In addition, rigorous simulations and OPC verifications will complete the after-etch measurements to help us to validate our experimental findings.
Zeinab, Jalambadani; Gholamreza, Garmaroudi; Mehdi, Yaseri; Mahmood, Tavousi; Korush, Jafarian
2017-09-21
The Trans-Theoretical Model (TTM) and Theory of Planned Behaviour (TPB) may be promising models for understanding and predicting reduction in the consumption of fast food. The aim of this study was to examine the applicability of the TTM, and the additional predictive role of subjective norms and perceived behavioural control, in predicting reduced consumption of fast food among obese Iranian adolescent girls. A cross-sectional study was conducted in twelve randomly selected schools in Sabzevar, Iran from 2015 to 2017. Four hundred eighty-five randomly selected students consented to participate in the study. Hierarchical regression models, fitted using SPSS version 22, were used to assess the variables that can influence the reduction in the consumption of fast food among students. Perceived behavioural control (r=0.58, P<0.001), subjective norms (r=0.51, P<0.001), self-efficacy (r=0.49, P<0.001), decisional balance (pros) (r=0.29, P<0.001), decisional balance (cons) (r=0.25, P<0.001), and stage of change (r=0.38, P<0.001) were significantly and positively correlated with the outcome, while experiential processes of change (r=0.08, P=0.135) and behavioural processes of change (r=0.09, P=0.145) were not. The study demonstrated that the TTM (except the experiential and behavioural processes of change), together with perceived behavioural control and subjective norms, provides a useful model for reduction in the consumption of fast food.
True randomness from an incoherent source
NASA Astrophysics Data System (ADS)
Qi, Bing
2017-11-01
Quantum random number generators (QRNGs) harness the intrinsic randomness in measurement processes: the measurement outputs are truly random, given the input state is a superposition of the eigenstates of the measurement operators. In the case of trusted devices, true randomness could be generated from a mixed state ρ so long as the system entangled with ρ is well protected. We propose a random number generation scheme based on measuring the quadrature fluctuations of a single mode thermal state using an optical homodyne detector. By mixing the output of a broadband amplified spontaneous emission (ASE) source with a single mode local oscillator (LO) at a beam splitter and performing differential photo-detection, we can selectively detect the quadrature fluctuation of a single mode output of the ASE source, thanks to the filtering function of the LO. Experimentally, a quadrature variance about three orders of magnitude larger than the vacuum noise has been observed, suggesting this scheme can tolerate much higher detector noise in comparison with QRNGs based on measuring the vacuum noise. The high quality of this entropy source is evidenced by the small correlation coefficients of the acquired data. A Toeplitz-hashing extractor is applied to generate unbiased random bits from the Gaussian distributed raw data, achieving an efficiency of 5.12 bits per sample. The output of the Toeplitz extractor successfully passes all the NIST statistical tests for random numbers.
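The final post-processing step described above, Toeplitz hashing, can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the matrix dimensions, the seed handling, and the prior digitization of the Gaussian quadrature samples into raw bits are all assumptions made for the sketch.

```python
import numpy as np

def toeplitz_extract(raw_bits, out_len, seed=0):
    """Extract out_len near-uniform bits from raw_bits (a 0/1 array)
    by multiplying with a random binary Toeplitz matrix over GF(2)."""
    rng = np.random.default_rng(seed)
    n = len(raw_bits)
    # A Toeplitz matrix is fully determined by its first row and first
    # column: n + out_len - 1 random bits in total.
    diag = rng.integers(0, 2, n + out_len - 1)
    T = np.empty((out_len, n), dtype=np.int64)
    for i in range(out_len):
        T[i] = diag[i : i + n][::-1]  # entries constant along diagonals
    return T @ np.asarray(raw_bits) % 2
```

The ratio out_len / (bits per raw sample) sets the extraction efficiency; a figure such as 5.12 bits per sample corresponds to choosing out_len below the estimated min-entropy of the digitized Gaussian data. Because the map is linear over GF(2), it belongs to a 2-universal hash family, which is what justifies its use as a randomness extractor.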
DeVoe, Jennifer E; Marino, Miguel; Angier, Heather; O'Malley, Jean P; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J; Bailey, Steffani R; Gallia, Charles; Gold, Rachel
2015-01-01
In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon's randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. Our objective was to estimate the effect on a child's health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. The Oregon Experiment was a randomized natural experiment arising from Oregon's 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent being randomly selected to apply for Medicaid on their child's Medicaid or Children's Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children's coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14,409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. Children's Medicaid or CHIP coverage was assessed monthly and in 6-month intervals relative to their parent's selection date. In the immediate period after selection, coverage among children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) among children whose parents were not selected to apply.
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent's selection compared with children whose parents were not selected (adjusted odds ratio [AOR]=1.18; 95% CI, 1.10-1.27). The effect remained significant during months 7 to 12 (AOR=1.11; 95% CI, 1.03-1.19); months 13 to 18 showed a positive but not significant effect (AOR=1.07; 95% CI, 0.99-1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR=2.37; 95% CI, 2.14-2.64). Children's odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents' access to Medicaid coverage and their children's coverage.
NASA Astrophysics Data System (ADS)
Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia
2014-05-01
One of the achievements of recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals, now part of the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets extracted from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1D, 2D and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated from randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual durations of 3D diffusion (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration depends on a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on time-scales retrieved from such models may thus be significant.
Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during the course of data collection (i.e. selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.
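The finite-difference machinery behind such diffusion chronometry can be sketched in one dimension. The sketch below is a simplification of what the study describes: it assumes a constant (concentration-independent) diffusion coefficient and no-flux boundaries, whereas the authors use a concentration-dependent 3D solver with anisotropic coefficients; the function and parameter names are illustrative.

```python
import numpy as np

def diffuse_1d(c, D, dx, dt, steps):
    """Explicit finite-difference (FTCS) solution of dc/dt = D d2c/dx2
    along a 1D traverse, with reflecting (no-flux) boundaries."""
    c = np.asarray(c, dtype=float).copy()
    r = D * dt / dx**2
    assert r <= 0.5, "FTCS stability requires D*dt/dx^2 <= 1/2"
    for _ in range(steps):
        lap = np.empty_like(c)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]  # discrete Laplacian
        lap[0] = c[1] - c[0]      # no-flux left boundary
        lap[-1] = c[-2] - c[-1]   # no-flux right boundary
        c += r * lap
    return c
```

In a real olivine application, D would depend on composition and on orientation relative to the crystallographic axes, which is exactly why randomly oriented 1D cuts through a 3D crystal can over- or underestimate the diffusion time.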
Böcker, K B E; Gerritsen, J; Hunault, C C; Kruidenier, M; Mensinga, Tj T; Kenemans, J L
2010-07-01
Cannabis intake has been reported to affect cognitive functions such as selective attention. This study addressed the effects of exposure to cannabis with up to 69.4 mg Delta(9)-tetrahydrocannabinol (THC) on event-related potentials (ERPs) recorded during a visual selective attention task. Twenty-four participants smoked cannabis cigarettes with four doses of THC on four test days in a randomized, double-blind, placebo-controlled, crossover study. Two hours after THC exposure the participants performed a visual selective attention task while concomitant ERPs were recorded. Accuracy decreased linearly and reaction times increased linearly with THC dose. However, performance measures and most of the ERP components related specifically to selective attention did not show significant dose effects; only in relatively light cannabis users did the Occipital Selection Negativity decrease linearly with dose. Furthermore, ERP components reflecting perceptual processing, as well as the P300 component, decreased in amplitude after THC exposure, though only the former effect showed a linear dose-response relation. The decrements in performance and ERP amplitudes induced by exposure to cannabis with high THC content resulted from a non-selective decrease in attentional or processing resources. Performance requiring attentional resources, such as vehicle control, may be compromised several hours after smoking cannabis cigarettes containing high doses of THC, as presently available in Europe and North America. Copyright 2010 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hikmah, N.; Yamtinah, S.; Ashadi; Indriyanti, N. Y.
2018-05-01
Science process skills (SPS) are fundamental to the scientific method and to acquiring sound knowledge. SPS can be categorized into two levels: basic and integrated. Learning SPS helps children grow into individuals who can access knowledge and know how to acquire it. The primary outcomes of the scientific process in learning are the application of scientific processes, scientific reasoning, accurate knowledge, problem-solving, and understanding of the relationships between science, technology, society, and everyday life. Teachers' understanding of SPS is central to its application in the learning process. Accordingly, this study aims to investigate high school chemistry teachers' understanding of SPS as it pertains to their assessment of SPS in chemistry learning. Understanding of SPS is measured on both conceptual and operational aspects. This research uses a qualitative analysis method, and the sample consists of eight chemistry teachers selected by random sampling. A semi-structured interview procedure was used to collect the data. The analysis shows that the teachers' conceptual and operational understanding of SPS is weak, which affects the accuracy and appropriateness of their selection of SPS assessments in chemistry learning.
Effects of dopaminergic modulation on electrophysiological brain response to affective stimuli.
Franken, Ingmar H A; Nijs, Ilse; Pepplinkhuizen, Lolke
2008-01-01
Several theoretical accounts of the role of dopamine suggest that dopamine influences the processing of affective stimuli. There is some indirect evidence for this from studies showing an association between treatment with dopaminergic agents and self-reported affect. We addressed this issue directly by examining the electrophysiological correlates of affective picture processing during a single-dose treatment with a dopamine D2 agonist (bromocriptine), a dopamine D2 antagonist (haloperidol), and a placebo. We compared early and late event-related brain potentials (ERPs) that have been associated with affective processing in the three medication conditions in a randomized double-blind crossover design among healthy males. In each treatment condition, subjects attentively watched neutral, pleasant, and unpleasant pictures while ERPs were recorded. Results indicate that neither bromocriptine nor haloperidol has a selective effect on electrophysiological indices of affective processing. Consistent with this, no effects of dopaminergic modulation on self-reported positive or negative affect were observed. In contrast, bromocriptine decreased overall processing of all stimulus categories regardless of their affective content. The results indicate that dopaminergic D2 receptors do not seem to play a crucial role in the selective processing of affective visual stimuli.
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
The scarcity heuristic impacts reward processing within the medial-frontal cortex.
Williams, Chad C; Saffer, Boaz Y; McCulloch, Robert B; Krigolson, Olave E
2016-05-04
Objects that are rare are often perceived to be inherently more valuable than objects that are abundant - a bias brought about in part by the scarcity heuristic. In the present study, we sought to test whether the perception of rarity impacted reward evaluation within the human medial-frontal cortex. Here, participants played a gambling game in which they flipped rare and abundant 'cards' on a computer screen to win financial rewards while electroencephalographic data were recorded. Unbeknownst to participants, reward outcome and frequency were random and equivalent for both rare and abundant cards; thus, scarcity existed only as a perception. Analysis of the electroencephalographic data indicated that the P300 component of the event-related brain potential differed in amplitude for wins and losses following the selection of rare cards, but not following the selection of abundant cards. Importantly, then, we found that the perception of card rarity impacted reward processing even though reward feedback was independent of and subsequent to card selection. Our data indicate a top-down influence of the scarcity heuristic on reward evaluation, and specifically on the processing of reward magnitude, within the human medial-frontal cortex.
Skinner, Michael K
2015-04-26
Environment has a critical role in the natural selection process of Darwinian evolution. The primary molecular component currently considered in neo-Darwinian evolution involves genetic alterations and random mutations that generate the phenotypic variation required for natural selection to act. The vast majority of environmental factors cannot directly alter DNA sequence. Epigenetic mechanisms, in contrast, directly regulate genetic processes and can be dramatically altered by environmental factors. Therefore, environmental epigenetics provides a molecular mechanism for directly altering phenotypic variation across generations. Lamarck proposed in 1802 the concept that the environment can directly alter phenotype in a heritable manner. Environmental epigenetics and epigenetic transgenerational inheritance provide molecular mechanisms for this process, so the environment can influence phenotypic variation directly at the molecular level. The ability of environmental epigenetics to alter phenotypic and genotypic variation directly can significantly impact natural selection; in this way, the neo-Lamarckian concept can facilitate neo-Darwinian evolution. A unified theory of evolution is presented to describe the integration of environmental epigenetic and genetic aspects of evolution. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Teaching undergraduates the process of peer review: learning by doing.
Rangachari, P K
2010-09-01
An active approach allowed undergraduates in Health Sciences to learn the dynamics of peer review at first hand. A four-stage process was used. In stage 1, students formed self-selected groups to explore specific issues. In stage 2, each group posted their interim reports online on a specific date. Each student read all the other reports and prepared detailed critiques. In stage 3, each report was discussed at sessions where the lead discussant was selected at random. All students participated in the peer review process. The written critiques were collated and returned to each group, who were asked to resubmit their revised reports within 2 wk. In stage 4, final submissions accompanied by rebuttals were graded. Student responses to a questionnaire were highly positive. They recognized the individual steps in the standard peer review, appreciated the complexities involved, and got a first-hand experience of some of the inherent variabilities involved. The absence of formal presentations and the opportunity to read each other's reports permitted them to study issues in greater depth.
Multicomponent Supramolecular Systems: Self-Organization in Coordination-Driven Self-Assembly
Zheng, Yao-Rong; Yang, Hai-Bo; Ghosh, Koushik; Zhao, Liang; Stang, Peter J.
2009-01-01
The self-organization of multicomponent supramolecular systems involving a variety of two-dimensional (2-D) polygons and three-dimensional (3-D) cages is presented. Nine self-organizing systems, SS1–SS9, have been studied, each involving the simultaneous mixing of organoplatinum acceptors and pyridyl donors of varying geometry and their selective self-assembly into three to four specific 2-D (rectangular, triangular, and rhomboid) and/or 3-D (triangular prism and distorted and nondistorted trigonal bipyramidal) supramolecules. The formation of these discrete structures is characterized using NMR spectroscopy and electrospray ionization mass spectrometry (ESI-MS). In all cases, the self-organization process is directed by (1) the geometric information encoded within the molecular subunits and (2) a thermodynamically driven dynamic self-correction process. The result is the selective self-assembly of multiple discrete products from a randomly formed complex mixture. The influence of key experimental variables – temperature and solvent – on the self-correction process and the fidelity of the resulting self-organization systems is also described. PMID:19544512
NASA Astrophysics Data System (ADS)
Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.
2018-04-01
We consider the practical realization of a new optical probe method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. A water solution of Rhodamine 6G was applied as the doping fluorescent agent for ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for a random medium was reconstructed.
Chemical Evolution and the Evolutionary Definition of Life.
Higgs, Paul G
2017-06-01
Darwinian evolution requires a mechanism for generation of diversity in a population, and selective differences between individuals that influence reproduction. In biology, diversity is generated by mutations and selective differences arise because of the encoded functions of the sequences (e.g., ribozymes or proteins). Here, I draw attention to a process that I will call chemical evolution, in which the diversity is generated by random chemical synthesis instead of (or in addition to) mutation, and selection acts on physicochemical properties, such as hydrolysis, photolysis, solubility, or surface binding. Chemical evolution applies to short oligonucleotides that can be generated by random polymerization, as well as by template-directed replication, and which may be too short to encode a specific function. Chemical evolution is an important stage on the pathway to life, between the stage of "just chemistry" and the stage of full biological evolution. A mathematical model is presented here that illustrates the differences between these three stages. Chemical evolution leads to much larger differences in molecular concentrations than can be achieved by selection without replication. However, chemical evolution is not open-ended, unlike biological evolution. The ability to undergo Darwinian evolution is often considered to be a defining feature of life. Here, I argue that chemical evolution, although Darwinian, does not quite constitute life, and that a good place to put the conceptual boundary between non-life and life is between chemical and biological evolution.
Kinetics of Aggregation with Choice
Ben-Naim, Eli; Krapivsky, Paul
2016-12-01
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected, and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
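The merging rule described above is straightforward to simulate. The sketch below is a hypothetical Monte Carlo implementation for illustration; the authors' results come from analysis of the kinetic rate equations, not from this particular code.

```python
import random

def aggregate_with_choice(n, steps, seed=1):
    """Aggregation with choice: at each event, pick a random target
    cluster and two random candidates, then merge the target with the
    LARGER of the two candidates. Returns the final cluster sizes."""
    rng = random.Random(seed)
    clusters = [1] * n  # start from n monomers
    for _ in range(steps):
        if len(clusters) < 3:
            break
        # three distinct clusters: one target (i), two candidates (j, k)
        i, j, k = rng.sample(range(len(clusters)), 3)
        cand = j if clusters[j] >= clusters[k] else k
        clusters[i] += clusters[cand]
        clusters.pop(cand)
    return clusters
```

Swapping the comparison to pick the smaller candidate gives the complementary process mentioned in the abstract, which favors moderate-size clusters instead.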
Fine-scale spatial genetic dynamics over the life cycle of the tropical tree Prunus africana.
Berens, D G; Braun, C; González-Martínez, S C; Griebeler, E M; Nathan, R; Böhning-Gaese, K
2014-11-01
Studying fine-scale spatial genetic patterns across life stages is a powerful approach to identify ecological processes acting within tree populations. We investigated spatial genetic dynamics across five life stages in the insect-pollinated and vertebrate-dispersed tropical tree Prunus africana in Kakamega Forest, Kenya. Using six highly polymorphic microsatellite loci, we assessed genetic diversity and spatial genetic structure (SGS) from seed rain and seedlings, and different sapling stages to adult trees. We found significant SGS in all stages, potentially caused by limited seed dispersal and high recruitment rates in areas with high light availability. SGS decreased from seed and early seedling stages to older juvenile stages. Interestingly, SGS was stronger in adults than in late juveniles. The initial decrease in SGS was probably driven by both random and non-random thinning of offspring clusters during recruitment. Intergenerational variation in SGS could have been driven by variation in gene flow processes, overlapping generations in the adult stage or local selection. Our study shows that complex sequential processes during recruitment contribute to SGS of tree populations.
Polly, P David
2015-05-01
Our understanding of the evolution of the dentition has been transformed by advances in the developmental biology, genetics, and functional morphology of teeth, as well as the methods available for studying tooth form and function. The hierarchical complexity of dental developmental genetics combined with dynamic effects of cells and tissues during development allow for substantial, rapid, and potentially non-linear evolutionary changes. Studies of selection on tooth function in the wild and evolutionary functional comparisons both suggest that tooth function and adaptation to diets are the most important factors guiding the evolution of teeth, yet selection against random changes that produce malocclusions (selectional drift) may be an equally important factor in groups with tribosphenic dentitions. These advances are critically reviewed here.
Danker, Timm; Braun, Franziska; Silbernagl, Nikole; Guenther, Elke
2016-03-01
Manual patch clamp, the gold standard of electrophysiology, represents a powerful and versatile toolbox to stimulate, modulate, and record ion channel activity from membrane fragments and whole cells. The electrophysiological readout can be combined with fluorescent or optogenetic methods and allows for ultrafast solution exchanges using specialized microfluidic tools. A hallmark of manual patch clamp is the intentional selection of individual cells for recording, often an essential prerequisite to generate meaningful data. So far, available automation solutions rely on random cell usage in the closed environment of a chip and thus sacrifice much of this versatility by design. To parallelize and automate the traditional patch clamp technique while perpetuating the full versatility of the method, we developed an approach to automation, which is based on active cell handling and targeted electrode placement rather than on random processes. This is achieved through an automated pipette positioning system, which guides the tips of recording pipettes with micrometer precision to a microfluidic cell handling device. Using a patch pipette array mounted on a conventional micromanipulator, our automated patch clamp process mimics the original manual patch clamp as closely as possible, yet achieving a configuration where recordings are obtained from many patch electrodes in parallel. In addition, our implementation is extensible by design to allow the easy integration of specialized equipment such as ultrafast compound application tools. The resulting system offers fully automated patch clamp on purposely selected cells and combines high-quality gigaseal recordings with solution switching in the millisecond timescale.
Improving the performance of minimizers and winnowing schemes.
Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl
2017-07-15
The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if that is not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
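The selection procedure, and the ordering swap the authors recommend, can be sketched as follows. This is an illustrative implementation only: the MD5-based `hash_order` stands in for any randomized ordering and is not the paper's universal-hitting-set construction.

```python
import hashlib

def minimizers(seq, k, w, order=None):
    """Return the sorted positions of window minimizers: in every window
    of w consecutive k-mers, keep the position of the smallest k-mer
    under `order` (lexicographic by default)."""
    if order is None:
        order = lambda kmer: kmer  # natural lexicographic ordering
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    selected = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        selected.add(min(window, key=lambda i: order(kmers[i])))
    return sorted(selected)

def hash_order(kmer):
    """A randomized ordering via hashing, an alternative to the
    lexicographic order that tends to lower the selected density."""
    return hashlib.md5(kmer.encode()).hexdigest()
```

Either ordering preserves the scheme's key guarantee (at least one selected k-mer in every window, so two sequences sharing a long substring share a selected k-mer); what changes is how many positions are selected on average, i.e. the density the paper analyzes.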
Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies
Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne
2014-01-01
Background: In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings: We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions: These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations.
This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
Feature-selective attention enhances color signals in early visual areas of the human brain.
Müller, M M; Andersen, S; Trujillo, N J; Valdés-Sosa, P; Malinowski, P; Hillyard, S A
2006-09-19
We used an electrophysiological measure of selective stimulus processing (the steady-state visual evoked potential, SSVEP) to investigate feature-specific attention to color cues. Subjects viewed a display consisting of spatially intermingled red and blue dots that continually shifted their positions at random. The red and blue dots flickered at different frequencies and thereby elicited distinguishable SSVEP signals in the visual cortex. Paying attention selectively to either the red or blue dot population produced an enhanced amplitude of its frequency-tagged SSVEP, which was localized by source modeling to early levels of the visual cortex. A control experiment showed that this selection was based on color rather than flicker frequency cues. This signal amplification of attended color items provides an empirical basis for the rapid identification of feature conjunctions during visual search, as proposed by "guided search" models.
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavaré (1996), computational estimates of expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
Evolution of resource cycling in ecosystems and individuals.
Crombach, Anton; Hogeweg, Paulien
2009-06-01
Resource cycling is a defining process in the maintenance of the biosphere. Microbial communities, ranging from simple to highly diverse, play a crucial role in this process. Yet the evolutionary adaptation and speciation of micro-organisms have rarely been studied in the context of resource cycling. In this study, our basic questions are how a community evolves its resource usage and how resource cycles are partitioned. We design a computational model in which a population of individuals evolves to take up nutrients and excrete waste. The waste of one individual is another's resource. Given a fixed amount of resources, this leads to resource cycles. We find that the shortest cycle dominates the ecological dynamics, and over evolutionary time its length is minimized. Initially a single lineage processes a long cycle of resources; later, crossfeeding lineages arise. The evolutionary dynamics that follow are determined by the strength of indirect selection for resource cycling. We study indirect selection by changing the spatial setting and the strength of direct selection. If individuals are fixed at lattice sites or direct selection is low, indirect selection results in lineages that structure their local environment, leading to 'smart' individuals and stable patterns of resource dynamics. The individuals are good at cycling resources themselves and do this with a short cycle. On the other hand, if individuals randomly change position each time step, or direct selection is high, individuals are more prone to crossfeeding: an ecosystem-based solution with turbulent resource dynamics, and individuals that are less capable of cycling resources themselves. In a baseline model of ecosystem evolution we demonstrate different eco-evolutionary trajectories of resource cycling. By varying the strength of indirect selection through the spatial setting and direct selection, the integration of information by the evolutionary process leads to qualitatively different results, from individual smartness to cooperative community structures.
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Based on the literature, feature extraction for LID is a mature process: standard features have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and, most recently, the i-vector based framework. However, the learning process based on the extracted features remains to be improved (i.e. optimised) to capture all of the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis, and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). Results are generated for LID with datasets created from eight different languages. They show the superior performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to 95.00% for SA-ELM LID. PMID:29672546
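The core ELM training step the abstract alludes to, random, untrained hidden-layer weights plus a closed-form least-squares solve for the output weights, can be sketched in a few lines. This is a generic ELM illustration with made-up names, not the SA-ELM/ESA-ELM optimisation studied in the paper:

```python
import numpy as np

def elm_train(X, Y, n_hidden=20, seed=0):
    """Train a basic ELM: hidden weights are random and never updated;
    only the output weights are fitted, by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy check: fit the XOR mapping, which a linear model cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta)
```

Because the four hidden-activation rows are almost surely linearly independent, the least-squares fit reproduces the four training targets exactly; the random hidden layer does all the nonlinear lifting.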
Experimental rugged fitness landscape in protein sequence space.
Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya
2006-12-20
The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12–130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7×10⁴-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18–24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region. PMID:17183728
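Kauffman's n-k landscape, in which each of n sites contributes a random fitness component that depends on its own state and the states of k other sites, can be sketched as follows. This is a generic toy model with illustrative names, not the authors' fitted landscape; contributions are drawn uniformly via a deterministic hash so the landscape is reproducible:

```python
import numpy as np

def nk_fitness(genome, k, seed=0):
    """Mean of n per-site contributions; each depends on the site's own
    allele and the alleles of k randomly chosen epistatic partners."""
    n = len(genome)
    rng = np.random.default_rng(seed)  # fixes the epistatic wiring
    partners = [rng.choice([j for j in range(n) if j != i], size=k, replace=False)
                for i in range(n)]
    total = 0.0
    for i in range(n):
        state = (int(genome[i]),) + tuple(int(genome[j]) for j in partners[i])
        # deterministic uniform contribution for this local configuration
        total += np.random.default_rng(hash((seed, i, state)) % 2**32).uniform()
    return total / n

rng = np.random.default_rng(1)
g = rng.integers(0, 2, size=12)       # a random binary genome
f0 = nk_fitness(g, k=3)
g2 = g.copy()
g2[0] ^= 1                            # a single-site mutation
f1 = nk_fitness(g2, k=3)              # shifts the contributions of site 0 and its dependents
```

As k grows, one mutation perturbs more per-site contributions, which is what makes the landscape increasingly rugged.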
Sellors, John; Kaczorowski, Janusz; Sellors, Connie; Dolovich, Lisa; Woodward, Christel; Willan, Andrew; Goeree, Ron; Cosby, Roxanne; Trim, Kristina; Sebaldt, Rolf; Howard, Michelle; Hardcastle, Linda; Poston, Jeff
2003-01-01
Background: Pharmacists can improve patient outcomes in institutional and pharmacy settings, but little is known about their effectiveness as consultants to primary care physicians. We examined whether an intervention by a specially trained pharmacist could reduce the number of daily medication units taken by elderly patients, as well as costs and health care use. Methods: We conducted a randomized controlled trial in family practices in 24 sites in Ontario. We randomly allocated 48 randomly selected family physicians (69.6% participation rate) to the intervention or the control arm, along with 889 (69.5% participation rate) of their randomly selected community-dwelling, elderly patients who were taking 5 or more medications daily. In the intervention group, pharmacists conducted face-to-face medication reviews with the patients and then gave written recommendations to the physicians to resolve any drug-related problems. Process outcomes included the number of drug-related problems identified among the senior citizens in the intervention arm and the proportion of recommendations implemented by the physicians. Results: After 5 months, seniors in the intervention and control groups were taking a mean of 12.4 and 12.2 medication units per day respectively (p = 0.50). There were no statistically significant differences in health care use or costs between groups. A mean of 2.5 drug-related problems per senior was identified in the intervention arm. Physicians implemented or attempted to implement 72.3% (790/1093) of the recommendations. Interpretation: The intervention did not have a significant effect on patient outcomes. However, physicians were receptive to the recommendations to resolve drug-related problems, suggesting that collaboration between physicians and pharmacists is feasible. PMID:12847034
Modeling Dynamic Food Choice Processes to Understand Dietary Intervention Effects.
Marcum, Christopher Steven; Goldring, Megan R; McBride, Colleen M; Persky, Susan
2018-02-17
Meal construction is largely governed by nonconscious and habit-based processes that can be represented as a collection of individual, micro-level food choices that eventually give rise to a final plate. Despite this, dietary behavior intervention research rarely captures these micro-level food choice processes, instead measuring outcomes at aggregated levels. This is due in part to a dearth of analytic techniques to model these dynamic time-series events. The current article addresses this limitation by applying a generalization of the relational event framework to model micro-level food choice behavior following an educational intervention. Relational event modeling was used to model the food choices that 221 mothers made for their child following receipt of an information-based intervention. Participants were randomized to receive either (a) control information; (b) childhood obesity risk information; (c) childhood obesity risk information plus a personalized family history-based risk estimate for their child. Participants then made food choices for their child in a virtual reality-based food buffet simulation. Micro-level aspects of the built environment, such as the ordering of each food in the buffet, were influential. Other dynamic processes such as choice inertia also influenced food selection. Among participants receiving the strongest intervention condition, choice inertia decreased and the overall rate of food selection increased. Modeling food selection processes can elucidate the points at which interventions exert their influence. Researchers can leverage these findings to gain insight into nonconscious and uncontrollable aspects of food selection that influence dietary outcomes, which can ultimately improve the design of dietary interventions.
THE SELECTION OF A NATIONAL RANDOM SAMPLE OF TEACHERS FOR EXPERIMENTAL CURRICULUM EVALUATION.
ERIC Educational Resources Information Center
WELCH, WAYNE W.; AND OTHERS
MEMBERS OF THE EVALUATION SECTION OF HARVARD PROJECT PHYSICS, DESCRIBING WHAT IS SAID TO BE THE FIRST ATTEMPT TO SELECT A NATIONAL RANDOM SAMPLE OF (HIGH SCHOOL PHYSICS) TEACHERS, LIST THE STEPS AS (1) PURCHASE OF A LIST OF PHYSICS TEACHERS FROM THE NATIONAL SCIENCE TEACHERS ASSOCIATION (MOST COMPLETE AVAILABLE), (2) SELECTION OF 136 NAMES BY A…
Mobile access to virtual randomization for investigator-initiated trials.
Deserno, Thomas M; Keszei, András P
2017-08-01
Background/aims: Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods: The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via representational state transfer web services. Furthermore, a simple web-based setup allows non-statisticians to configure the appropriate statistics. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results: Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion: Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees strictly sequential processing in all trial sites. Covering 88% of all randomization models used in recent trials, virtual randomization becomes available for investigator-initiated trials and potentially for large multi-center trials.
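Of the schemes listed (simple, random allocation rule, block, stratified), permuted-block randomization is the easiest to sketch. The following is a hedged Python illustration with invented names, not the authors' R implementation:

```python
import random

def permuted_blocks(n_subjects, arms=("A", "B"), block_size=4, seed=42):
    """Permuted-block randomization: each block contains every arm equally
    often, so group sizes never drift apart by more than one block."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)            # fixed seed -> reproducible allocations
    allocations = []
    while len(allocations) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)               # shuffle within the block only
        allocations.extend(block)
    return allocations[:n_subjects]

allocs = permuted_blocks(10)
```

Because the list is a deterministic function of the seed and the subject index, an allocation can be recomputed on demand, which is the essence of a "virtual" (non-stored) randomization list.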
Graphene based widely-tunable and singly-polarized pulse generation with random fiber lasers
Yao, B. C.; Rao, Y. J.; Wang, Z. N.; Wu, Y.; Zhou, J. H.; Wu, H.; Fan, M. Q.; Cao, X. L.; Zhang, W. L.; Chen, Y. F.; Li, Y. R.; Churkin, D.; Turitsyn, S.; Wong, C. W.
2015-01-01
Pulse generation often requires a stabilized cavity and its corresponding mode structure for initial phase-locking. Contrastingly, modeless cavity-free random lasers provide new possibilities for high quantum efficiency lasing that could potentially be widely tunable spectrally and temporally. Pulse generation in random lasers, however, has remained elusive since the discovery of modeless gain lasing. Here we report coherent pulse generation with modeless random lasers based on the unique polarization selectivity and broadband saturable absorption of monolayer graphene. Simultaneous temporal compression of cavity-free pulses is observed with such a polarization modulation, along with a broadly-tunable pulsewidth across two orders of magnitude down to 900 ps, a broadly-tunable repetition rate across three orders of magnitude up to 3 MHz, and a singly-polarized pulse train at 41 dB extinction ratio, about an order of magnitude larger than conventional pulsed fiber lasers. Moreover, our graphene-based pulse formation also demonstrates robust pulse-to-pulse stability and wide-wavelength operation due to the cavity-less feature. Such a graphene-based architecture not only provides a tunable pulsed random laser for fiber-optic sensing, speckle-free imaging, and laser-material processing, but also a new way for the non-random CW fiber lasers to generate widely tunable and singly-polarized pulses. PMID:26687730
Tilt aftereffect following adaptation to translational Glass patterns
Pavan, Andrea; Hocketstaller, Johanna; Contillo, Adriano; Greenlee, Mark W.
2016-01-01
Glass patterns (GPs) consist of randomly distributed dot pairs (dipoles) whose orientations are determined by specific geometric transforms. We assessed whether adaptation to stationary oriented translational GPs suppresses the activity of orientation selective detectors producing a tilt aftereffect (TAE). The results showed that adaptation to GPs produces a TAE similar to that reported in previous studies, though reduced in amplitude. This suggests the involvement of orientation selective mechanisms. We also measured the interocular transfer (IOT) of the GP-induced TAE and found an almost complete IOT, indicating the involvement of orientation selective and binocularly driven units. In additional experiments, we assessed the role of attention in TAE from GPs. The results showed that distraction during adaptation similarly modulates the TAE after adapting to both GPs and gratings. Moreover, in the case of GPs, distraction is likely to interfere with the adaptation process rather than with the spatial summation of local dipoles. We conclude that TAE from GPs possibly relies on visual processing levels in which the global orientation of GPs has been encoded by neurons that are mostly binocularly driven, orientation selective and whose adaptation-related neural activity is strongly modulated by attention. PMID:27005949
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
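The ranking step, training a random forest on labelled genotypes and keeping the top-ranked SNPs as a candidate panel, can be sketched with scikit-learn on a synthetic stand-in for the salmon data. All data, sizes, and variable names here are invented for illustration, and scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical SNP genotype matrix: rows = individuals, columns = SNPs
# coded 0/1/2; two populations that differ only at the first 5 SNPs.
rng = np.random.default_rng(0)
n, n_snps = 200, 100
pop = rng.integers(0, 2, size=n)                      # population label
X = rng.integers(0, 3, size=(n, n_snps)).astype(float)
X[:, :5] += 3 * pop[:, None]                          # make 5 SNPs informative

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, pop)
ranked = np.argsort(rf.feature_importances_)[::-1]    # most informative first
panel = ranked[:10]                                   # candidate marker panel
```

In practice panel quality would be judged by self-assignment accuracy on held-out individuals, which is how the study compares random forest panels against FST-ranked ones.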
High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes
NASA Astrophysics Data System (ADS)
Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew
Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry has precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes investigated for the isolation of n-butanol from dilute aqueous solution (model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.
Bocedi, Greta; Reid, Jane M
2015-01-01
Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405
Caries status in 16 year-olds with varying exposure to water fluoridation in Ireland.
Mullen, J; McGaffin, J; Farvardin, N; Brightman, S; Haire, C; Freeman, R
2012-12-01
Most of the Republic of Ireland's public water supplies have been fluoridated since the mid-1960s, while Northern Ireland has never been fluoridated, apart from some small short-lived schemes in east Ulster. This study examines dental caries status in 16-year-olds in a part of Ireland straddling fluoridated and non-fluoridated water supply areas and compares two methods of assessing the effectiveness of water fluoridation. The cross-sectional survey tested differences in caries status by two methods: (1) Estimated Fluoridation Status, as used previously in national and regional studies in the Republic and in the All-Island study of 2002; (2) Percentage Lifetime Exposure, a modification of a system described by Slade in 1995 and used in Australian caries research. Adolescents were selected for the study by a two-part random sampling process: firstly, schools in each area were stratified into three tiers based on school size and selected randomly from each tier; secondly, 16-year-olds were randomly sampled from these schools based on a pre-set sampling fraction for each tier of schools. With both systems of measurement, significantly lower caries levels were found in those children with the greatest exposure to fluoridated water when compared to those with the least exposure. The survey provides further evidence of the effectiveness of water fluoridation in reducing dental caries experience up to 16 years of age. The extra intricacies involved in using the Percentage Lifetime Exposure method did not provide much more information when compared to the simpler Estimated Fluoridation Status method.
Lancarotte, Inês; Nobre, Moacyr Roberto
2016-01-01
The aim of this study was to identify and reflect on the methods employed by studies focusing on intervention programs for the primordial and primary prevention of cardiovascular diseases. The PubMed, EMBASE, SciVerse Hub-Scopus, and Cochrane Library electronic databases were searched using the terms ‘effectiveness AND primary prevention AND risk factors AND cardiovascular diseases’ for systematic reviews, meta-analyses, randomized clinical trials, and controlled clinical trials in the English language. A descriptive analysis of the employed strategies, theories, frameworks, applied activities, and measurement of the variables was conducted. Nineteen primary studies were analyzed. Heterogeneity was observed in the outcome evaluations, not only in the selected domains but also in the indicators used to measure the variables. There was also a predominance of repeated cross-sectional survey design, differences in community settings, and variability related to the randomization unit when randomization was implemented as part of the sample selection criteria; furthermore, particularities related to measures, limitations, and confounding factors were observed. The employed strategies, including their advantages and limitations, and the employed theories and frameworks are discussed, and risk communication, as the key element of the interventions, is emphasized. A methodological process of selecting and presenting the information to be communicated is recommended, and a systematic theoretical perspective to guide the communication of information is advised. The risk assessment concept, its essential elements, and the relevant role of risk perception are highlighted. It is fundamental for communication that statements targeting other people’s understanding be prepared using systematic data. PMID:27982169
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper introduces a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
Schroy, Paul C; Duhovic, Emir; Chen, Clara A; Heeren, Timothy C; Lopez, William; Apodaca, Danielle L; Wong, John B
2016-05-01
Eliciting patient preferences within the context of shared decision making has been advocated for colorectal cancer (CRC) screening, yet providers often fail to comply with patient preferences that differ from their own. To determine whether risk stratification for advanced colorectal neoplasia (ACN) influences provider willingness to comply with patient preferences when selecting a desired CRC screening option. Randomized controlled trial. Asymptomatic, average-risk patients due for CRC screening in an urban safety net health care setting. Patients were randomized 1:1 to a decision aid alone (n = 168) or decision aid plus risk assessment (n = 173) arm between September 2012 and September 2014. The primary outcome was concordance between patient preference and test ordered; secondary outcomes included patient satisfaction with the decision-making process, screening intentions, test completion rates, and provider satisfaction. Although providers perceived risk stratification to be useful in selecting an appropriate screening test for their average-risk patients, no significant differences in concordance were observed between the decision aid alone and decision aid plus risk assessment groups (88.1% v. 85.0%, P = 0.40) or high- and low-risk groups (84.5% v. 87.1%, P = 0.51). Concordance was highest for colonoscopy and relatively low for tests other than colonoscopy, regardless of study arm or risk group. Failure to comply with patient preferences was negatively associated with satisfaction with the decision-making process, screening intentions, and test completion rates. Limitations include the single-institution setting and the lack of provider education about incorporating risk stratification into decision making. Providers perceived risk stratification to be useful in their decision making but often failed to comply with patient preferences for tests other than colonoscopy, even among those deemed to be at low risk of ACN. © The Author(s) 2016.
Numerical simulation and parametric analysis of selective laser melting process of AlSi10Mg powder
NASA Astrophysics Data System (ADS)
Pei, Wei; Zhengying, Wei; Zhen, Chen; Junfeng, Li; Shuzhe, Zhang; Jun, Du
2017-08-01
A three-dimensional numerical model was developed to investigate the effects of laser scanning speed, laser power, and hatch spacing on the thermodynamic behavior of the molten pool during selective laser melting (SLM) of AlSi10Mg powder. A randomly distributed packed powder bed was generated using the discrete element method (DEM). The powder bed was treated in the simulation as a porous medium with interconnected voids. Good agreement between numerical and experimental results establishes the validity of the adopted method. The numerical results show that the Marangoni flow within the molten pool was significantly affected by the processing parameters. An intense Marangoni flow leads to a perturbation within the molten pool. In addition, a relatively high scanning speed tends to cause melt instability. The perturbation or instability within the molten pool results in the formation of pores during SLM, which has a direct influence on the densification level.
Evolutionary engineering for industrial microbiology.
Vanee, Niti; Fisher, Adam B; Fong, Stephen S
2012-01-01
Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche, using random processes such as genetic mutation. In engineering, by contrast, rational prospective design is used to address targeted problems. When the details of evolutionary and engineering processes are considered, however, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would be an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.
Zdrodowska, B; Liedtke, K; Radkowski, M
2014-01-01
Turkey carcasses were sampled at a selected point on the dressing line of a poultry slaughterhouse and analyzed for Salmonella. The slaughter turkeys came from the northeast part of Poland, and the examinations were carried out in each month of 2009. Three hundred turkeys were selected at random from a commercial slaughter line, immediately after completion of the cooling process. The percentage of these 300 turkeys from which Salmonella spp. were isolated was relatively high (8.3%; Salmonella-positive results were observed in 25 cases). The lowest Salmonella spp. rate (1.3%) for slaughter birds was found in the fourth quarter, and the highest contamination rate, 18.6%, in the third quarter. The serological types of Salmonella spp. isolated from the whole turkey carcasses were S. Saintpaul, S. Senftenberg, S. Anatum, S. Heidelberg, S. Hadar, S. Typhimurium and S. Infantis.
Mate choice theory and the mode of selection in sexual populations.
Carson, Hampton L
2003-05-27
Indirect new data imply that mate and/or gamete choice are major selective forces driving genetic change in sexual populations. The system dictates nonrandom mating, an evolutionary process requiring both revised genetic theory and new data on heritability of characters underlying Darwinian fitness. Successfully reproducing individuals represent rare selections from among vigorous, competing survivors of preadult natural selection. Nonrandom mating has correlated demographic effects: reduced effective population size, inbreeding, low gene flow, and emphasis on deme structure. Characters involved in choice behavior at reproduction appear based on quantitative trait loci. This variability serves selection for fitness within the population, having only an incidental relationship to the origin of genetically based reproductive isolation between populations. The claim that extensive hybridization experiments with Drosophila indicate that selection favors a gradual progression of "isolating mechanisms" is flawed, because intra-group random mating is assumed. Over deep time, local sexual populations are strong, independent genetic systems that use rich fields of variable polygenic components of fitness. The sexual reproduction system thus particularizes, in small subspecific populations, the genetic basis of the grand adaptive sweep of selective evolutionary change, much as Darwin proposed.
Capture-SELEX: Selection of DNA Aptamers for Aminoglycoside Antibiotics
2012-01-01
Small organic molecules are challenging targets for aptamer selection using the SELEX technology (SELEX—Systematic Evolution of Ligands by EXponential enrichment). Often they are not suitable for immobilization on solid surfaces, which is a common procedure in known aptamer selection methods. The Capture-SELEX procedure allows the selection of DNA aptamers for solute targets. A special SELEX library was constructed with the aim of immobilizing this library on magnetic beads or other surfaces. For this purpose a docking sequence was incorporated into the random region of the library, enabling hybridization to a complementary oligo fixed on magnetic beads. Oligonucleotides of the library which exhibit high affinity to the target, and a secondary structure fitting the target, are released from the beads for binding to the target during the aptamer selection process. The oligonucleotides of these binding complexes were amplified, purified, and immobilized via the docking sequence to the magnetic beads as the starting point of the following selection round. Based on this Capture-SELEX procedure, the successful DNA aptamer selection for the aminoglycoside antibiotic kanamycin A as a small-molecule target is described. PMID:23326761
Interacting particle systems on graphs
NASA Astrophysics Data System (ADS)
Sood, Vishal
In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems that are drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, emergence of fads, social cooperation, and invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate.
This selective advantage leads to a dynamical competition in which selection dominates for large populations, while for small populations the dynamics are similar to the neutral case. The likelihood for the fitter mutants to drive the resident genotype to extinction is calculated.
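The mean first-passage results from the third chapter can be explored numerically. Below is a minimal Monte Carlo sketch of a simple random walk on an Erdős–Rényi random graph; the graph size, link density, and trial count are arbitrary illustrative choices, not those used in the dissertation.

```python
import random

def er_graph(n, p, rng):
    """Erdos-Renyi random graph G(n, p) as an adjacency list."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def mean_first_passage(adj, source, target, trials, rng, cap=10_000):
    """Average number of steps for a simple random walk started at `source`
    to first reach `target`, estimated over repeated capped walks."""
    times = []
    for _ in range(trials):
        node, steps = source, 0
        while node != target and steps < cap:
            nbrs = adj[node]
            if not nbrs:            # isolated node: the walk cannot move
                break
            node = rng.choice(sorted(nbrs))
            steps += 1
        times.append(steps)
    return sum(times) / len(times)
```

Varying `p` in such a simulation is one way to probe the claimed insensitivity of the mean first-passage time to link density away from the percolation transition.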
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides, with a certain probability, which links to select among its neighbours according to the received data. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using the iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and in the meanwhile several criteria for consensus are obtained. A numerical example demonstrates the reliability of the proposed method.
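As a toy illustration of consensus over randomly selected links, the sketch below has each agent average its state with those of probabilistically chosen neighbours; this plain averaging stands in for the paper's protocol (which uses the range of the received data) and the graph is not scale-free.

```python
import random

def consensus_step(states, neighbours, p_select, rng):
    """One synchronous round: each agent averages its own state with those
    of the neighbours it randomly selects (each with probability p_select)."""
    new = {}
    for i, x in states.items():
        picked = [j for j in neighbours[i] if rng.random() < p_select]
        vals = [x] + [states[j] for j in picked]
        new[i] = sum(vals) / len(vals)
    return new

def run_consensus(states, neighbours, p_select=0.8, steps=200, seed=3):
    """Iterate the update; on a connected graph the spread of agent states
    shrinks toward a common value."""
    rng = random.Random(seed)
    for _ in range(steps):
        states = consensus_step(states, neighbours, p_select, rng)
    return states
```

On a complete graph of five agents starting at states 0 through 4, the spread of states collapses to near zero after a couple of hundred rounds.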
Aguiar, Elroy J; Morgan, Philip J; Collins, Clare E; Plotnikoff, Ronald C; Young, Myles D; Callister, Robin
2017-07-01
Men are underrepresented in weight loss and type 2 diabetes mellitus (T2DM) prevention studies. To determine the effectiveness of recruitment, and acceptability of the T2DM Prevention Using LifeStyle Education (PULSE) Program, a gender-targeted, self-administered intervention for men. Men (18-65 years, high risk for T2DM) were randomized to intervention (n = 53) or wait-list control groups (n = 48). The 6-month PULSE Program intervention focused on weight loss, diet, and exercise for T2DM prevention. A process evaluation questionnaire was administered at 6 months to examine recruitment and selection processes, and acceptability of the intervention's delivery and content. Associations between self-monitoring and selected outcomes were assessed using Spearman's rank correlation. A pragmatic recruitment and online screening process was effective in identifying men at high risk of T2DM (prediabetes prevalence 70%). Men reported the trial was appealing because it targeted weight loss, T2DM prevention, and getting fit, and because it was perceived as "doable" and tailored for men. The intervention was considered acceptable, with men reporting high overall satisfaction (83%) and engagement with the various components. Adherence to self-monitoring was poor, with only 13% meeting requisite criteria. However, significant associations were observed between weekly self-monitoring of weight and change in weight (r_s = -.47, p = .004) and waist circumference (r_s = -.38, p = .026). Men reported they would have preferred more intervention contact, for example, by phone or email. Gender-targeted, self-administered lifestyle interventions are feasible, appealing, and satisfying for men. Future studies should explore the effects of additional non-face-to-face contact on motivation, accountability, self-monitoring adherence, and program efficacy.
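Spearman's rank correlation, used above to relate weekly self-monitoring to outcomes, is simply the Pearson correlation computed on ranks; a self-contained sketch with average ranks for ties:

```python
def rankdata(xs):
    """Assign ranks starting at 1, using average ranks for ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Any monotone increasing relationship gives a coefficient of 1, and a monotone decreasing one gives -1, regardless of linearity.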
Pincham, H L; Bryce, D; Fonagy, P; Fearon, R M Pasco
2018-05-25
Decision making and feedback processing are two important cognitive processes that are impacted by social context, particularly during adolescence. The current study examined whether a psychosocial intervention could improve psychological wellbeing in at-risk adolescent boys, thereby improving their decision making and feedback processing skills. Two groups of at-risk adolescents were compared: those who were relatively new to a psychosocial intervention, and those who had engaged over a longer time period. Electroencephalography was recorded while the young people participated in a modified version of the Taylor Aggression Paradigm. The late positive potential (LPP) was measured during the decision phase of the task (where participants selected punishments for their opponents). The feedback-related negativity (FRN) and P3 components were measured during the task's outcome phase (where participants received 'win' or 'lose' feedback). Adolescents who were new to the intervention (the minimal-intervention group) were harsher in their punishment selections than those who had been engaged in the program for much longer. The minimal-intervention group also showed an enhanced LPP during the decision phase of the task, which may be indicative of immature decision making in that group. Analysis of the FRN and P3 amplitudes revealed that the minimal-intervention group was physiologically hypo-sensitive to feedback, compared with the extended-intervention group. Overall, these findings suggest that long-term community-based psychosocial intervention programs are beneficial for at-risk adolescents, and that event-related potentials can be employed as biomarkers of therapeutic change. However, because participants were not randomly allocated to treatment groups, alternative explanations cannot be excluded until further randomized controlled trials are undertaken.
Effects of topology on network evolution
NASA Astrophysics Data System (ADS)
Oikonomou, Panos; Cluzel, Philippe
2006-08-01
The ubiquity of scale-free topology in nature raises the question of whether this particular network design confers an evolutionary advantage. A series of studies has identified key principles controlling the growth and the dynamics of scale-free networks. Here, we use neuron-based networks of boolean components as a framework for modelling a large class of dynamical behaviours in both natural and artificial systems. Applying a training algorithm, we characterize how networks with distinct topologies evolve towards a pre-established target function through a process of random mutations and selection. We find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. Whereas homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously. Remarkably, this latter property is robust to variations of the degree exponent. In contrast, homogeneous random networks require a specific tuning of their connectivity to optimize their ability to evolve. These results highlight an organizing principle that governs the evolution of complex networks and that can improve the design of engineered systems.
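The mutate-and-select training described above can be caricatured by a simple hill climb over small Boolean networks. The network encoding, target function, and acceptance rule below are illustrative simplifications, not the authors' algorithm.

```python
import random

def random_rule(rng):
    """A random 2-input Boolean function, stored as a 4-entry truth table."""
    return tuple(rng.randrange(2) for _ in range(4))

def network_output(network, inputs):
    """Evaluate a toy single-layer network: each node applies its Boolean
    rule to two fixed input wires; the output is the parity of node outputs."""
    total = 0
    for rule, a, b in network:
        total ^= rule[2 * inputs[a] + inputs[b]]
    return total

def fitness(network, target, cases):
    """Fraction of test cases on which the network matches the target."""
    return sum(network_output(network, c) == target(c) for c in cases) / len(cases)

def evolve(target, n_nodes=6, n_inputs=4, generations=300, seed=7):
    """Hill climbing with random node mutations; neutral or better mutants
    replace the parent, mimicking mutation followed by selection."""
    rng = random.Random(seed)
    cases = [tuple(rng.randrange(2) for _ in range(n_inputs)) for _ in range(16)]
    net = [(random_rule(rng), rng.randrange(n_inputs), rng.randrange(n_inputs))
           for _ in range(n_nodes)]
    best = fitness(net, target, cases)
    for _ in range(generations):
        mutant = list(net)
        i = rng.randrange(n_nodes)
        mutant[i] = (random_rule(rng), rng.randrange(n_inputs), rng.randrange(n_inputs))
        f = fitness(mutant, target, cases)
        if f >= best:   # selection: keep neutral or beneficial mutations
            net, best = mutant, f
    return best
```

Comparing trajectories of `best` under different wiring topologies is the kind of experiment the study performs at much larger scale.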
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
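The idea behind maximally selected rank statistics is to standardize a two-sample rank statistic at every candidate cut point of a covariate and take the cut with the largest value. A toy sketch (a Wilcoxon-type rank-sum statistic stands in for the survival log-rank statistic, ties in the response are ignored, and no p-value approximation is included):

```python
import math

def maximally_selected_rank(x, y):
    """For each candidate cut point on covariate x, split the ranks of the
    response y into two groups and compute a standardized rank-sum statistic;
    return the cut with the largest statistic and that statistic."""
    n = len(x)
    pos = {i: r for r, i in enumerate(sorted(range(n), key=lambda i: y[i]))}
    r = [pos[i] + 1 for i in range(n)]            # ranks of y, 1..n
    best_stat, best_cut = 0.0, None
    for cut in sorted(set(x))[:-1]:               # all non-trivial cut points
        left = [r[i] for i in range(n) if x[i] <= cut]
        m = len(left)
        mean = m * (n + 1) / 2                    # null mean of the rank sum
        var = m * (n - m) * (n + 1) / 12          # null variance
        stat = abs(sum(left) - mean) / math.sqrt(var)
        if stat > best_stat:
            best_stat, best_cut = stat, cut
    return best_cut, best_stat
```

Because the same standardized scale is used at every cut, covariates with many split points are not automatically favored, which is the bias the paper addresses.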
Selection of antigenically advanced variants of seasonal influenza viruses.
Li, Chengjun; Hatta, Masato; Burke, David F; Ping, Jihui; Zhang, Ying; Ozawa, Makoto; Taft, Andrew S; Das, Subash C; Hanson, Anthony P; Song, Jiasheng; Imai, Masaki; Wilker, Peter R; Watanabe, Tokiko; Watanabe, Shinji; Ito, Mutsumi; Iwatsuki-Horimoto, Kiyoko; Russell, Colin A; James, Sarah L; Skepner, Eugene; Maher, Eileen A; Neumann, Gabriele; Klimov, Alexander I; Kelso, Anne; McCauley, John; Wang, Dayan; Shu, Yuelong; Odagiri, Takato; Tashiro, Masato; Xu, Xiyan; Wentworth, David E; Katz, Jacqueline M; Cox, Nancy J; Smith, Derek J; Kawaoka, Yoshihiro
2016-05-23
Influenza viruses mutate frequently, necessitating constant updates of vaccine viruses. To establish experimental approaches that may complement the current vaccine strain selection process, we selected antigenic variants from human H1N1 and H3N2 influenza virus libraries possessing random mutations in the globular head of the haemagglutinin protein (which includes the antigenic sites) by incubating them with human and/or ferret convalescent sera to human H1N1 and H3N2 viruses. We also selected antigenic escape variants from human viruses treated with convalescent sera and from mice that had been previously immunized against human influenza viruses. Our pilot studies with past influenza viruses identified escape mutants that were antigenically similar to variants that emerged in nature, establishing the feasibility of our approach. Our studies with contemporary human influenza viruses identified escape mutants before they caused an epidemic in 2014-2015. This approach may aid in the prediction of potential antigenic escape variants and the selection of future vaccine candidates before they become widespread in nature.
Effect of Expanding Medicaid for Parents on Children’s Health Insurance Coverage
DeVoe, Jennifer E.; Marino, Miguel; Angier, Heather; O’Malley, Jean P.; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J.; Bailey, Steffani R.; Gallia, Charles; Gold, Rachel
2016-01-01
IMPORTANCE In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon’s randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. OBJECTIVE To estimate the effect on a child’s health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. DESIGN, SETTING, AND PARTICIPANTS Oregon Experiment randomized natural experiment assessing the results of Oregon’s 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child’s Medicaid or Children’s Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children’s coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14,409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. EXPOSURES For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. MAIN OUTCOMES AND MEASURES Children’s Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent’s selection date. RESULTS In the immediate period after selection, coverage among children whose parents were selected to apply increased significantly, from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change, from 5049 (61.8%) to 5044 (61.7%), for children whose parents were not selected to apply.
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent’s selection compared with children whose parents were not selected (adjusted odds ratio [AOR] = 1.18; 95% CI, 1.10–1.27). The effect remained significant during months 7 to 12 (AOR = 1.11; 95% CI, 1.03–1.19); months 13 to 18 showed a positive but not significant effect (AOR = 1.07; 95% CI, 0.99–1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR = 2.37; 95% CI, 2.14–2.64). CONCLUSIONS AND RELEVANCE Children’s odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents’ access to Medicaid coverage and their children’s coverage. PMID:25561041
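Odds ratios such as those reported above come from 2x2 tables of coverage by exposure; a minimal sketch with a Woolf logit confidence interval, using made-up counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] (outcome yes/no by
    exposed/unexposed), with a Woolf logit 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

Note the study reports adjusted odds ratios from generalized estimating equation models, which additionally account for covariates and repeated measures; the crude calculation above only illustrates where the ratio and interval come from.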
Predicting rates of inbreeding in populations undergoing selection.
Woolliams, J A; Bijma, P
2000-01-01
Tractable forms of predicting rates of inbreeding (ΔF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. ΔF was shown to be approximately (1/4)(1 - ω) times the expected sum of squared lifetime contributions, where ω is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express ΔF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables, then the expected long-term contribution could be substituted for the observed, provided the factor 1/4 (since ω = 0) was increased to 1/2. Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting ΔF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074
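The central approximation, that ΔF is about (1/4)(1 - ω) times the expected sum of squared lifetime contributions, is straightforward to evaluate numerically; a toy sketch (the equal-contribution scenario below is an illustrative assumption, not from the paper):

```python
def delta_f(contributions, omega=0.0):
    """Rate of inbreeding from squared long-term contributions:
    DeltaF ~ (1/4) * (1 - omega) * sum(r_i^2), where omega is the
    deviation from Hardy-Weinberg proportions."""
    return 0.25 * (1 - omega) * sum(r * r for r in contributions)
```

With N = 50 parents each contributing 1/50, this gives ΔF = 1/(4N) = 0.005 at ω = 0; doubling the leading factor to 1/2, as the abstract describes for Poisson family sizes under random mating, recovers the classical ΔF = 1/(2N).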
Jayasena, Dinesh D; Jung, Samooel; Kim, Sun Hyo; Kim, Hyun Joo; Alahakoon, Amali U; Lee, Jun Heon; Jo, Cheorun
2015-03-15
In this study the effects of sex, meat cut and thermal processing on the carnosine, anserine, creatine, betaine and carnitine contents of Korean native chicken (KNC) meat were determined. Forty 1-day-old chicks (20 chicks of each sex) from a commercial KNC strain (Woorimatdag™) were reared under similar standard commercial conditions with similar diets, and ten birds of each sex were randomly selected and slaughtered at 14 weeks of age. Raw and cooked meat samples were prepared from both breast and leg meats and analyzed for the aforementioned functional compounds. Female KNCs had significantly higher betaine and creatine contents. The breast meat showed significantly higher carnosine and anserine contents, whereas the leg meat had a higher betaine and carnitine content. The content of all functional compounds was significantly depleted by thermal processing. This study confirms that KNC meat is a good source of the above-mentioned functional compounds, which can be considered attractive nutritional quality factors. However, their concentrations were significantly affected by thermal processing conditions, meat cut and sex. Further experiments are needed to select the best thermal processing method to preserve these functional compounds. © 2014 Society of Chemical Industry.
Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.
2014-01-01
Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
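With randomization as the instrument, the instrumental-variables effect estimate for a binary instrument reduces to the Wald ratio: the difference in mean outcome divided by the difference in mean treatment uptake. A minimal sketch on fabricated toy data (not the trial's data):

```python
def wald_iv_estimate(z, d, y):
    """Wald instrumental-variable estimate with a binary instrument z
    (e.g. randomization to an AA facilitation arm), binary treatment d
    (e.g. AA attendance), and outcome y (e.g. days abstinent)."""
    def mean(v):
        return sum(v) / len(v)
    y1 = mean([yi for yi, zi in zip(y, z) if zi == 1])
    y0 = mean([yi for yi, zi in zip(y, z) if zi == 0])
    d1 = mean([di for di, zi in zip(d, z) if zi == 1])
    d0 = mean([di for di, zi in zip(d, z) if zi == 0])
    return (y1 - y0) / (d1 - d0)
```

Because the instrument is randomized, the ratio isolates the effect of treatment uptake that is free of self-selection, which is exactly the property the study exploits.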
Application of stochastic processes in random growth and evolutionary dynamics
NASA Astrophysics Data System (ADS)
Oikonomou, Panagiotis
We study the effect of power-law distributed randomness on the dynamical behavior of processes such as stochastic growth patterns and evolution. First, we examine the geometrical properties of random shapes produced by a generalized stochastic Loewner Evolution driven by a superposition of a Brownian motion and a stable Levy process. The situation is defined by the usual stochastic Loewner Evolution parameter, kappa, as well as alpha, which defines the power-law tail of the stable Levy distribution. We show that the properties of these patterns change qualitatively and singularly at critical values of kappa and alpha. It is reasonable to call such changes "phase transitions". These transitions occur as kappa passes through four and as alpha passes through one. Numerical simulations are used to explore the global scaling behavior of these patterns in each "phase". We show both analytically and numerically that the growth continues indefinitely in the vertical direction for alpha greater than 1, grows logarithmically with time for alpha equal to 1, and saturates for alpha smaller than 1. The probability density has two different scales corresponding to directions along and perpendicular to the boundary. Scaling functions for the probability density are given for various limiting cases. Second, we study the effect of the architecture of biological networks on their evolutionary dynamics. In recent years, studies of the architecture of large networks have unveiled a common topology, called scale-free, in which a majority of the elements are poorly connected except for a small fraction of highly connected components. We ask how networks with distinct topologies can evolve towards a pre-established target phenotype through a process of random mutations and selection. We use networks of Boolean components as a framework to model a large class of phenotypes.
Within this approach, we find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. While homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously towards the target phenotype. Moreover, we show that scale-free networks always evolve faster than homogeneous random networks; remarkably, this property does not depend on the precise value of the topological parameter. By contrast, homogeneous random networks require a specific tuning of their topological parameter in order to optimize their fitness. This model suggests that the evolutionary paths of biological networks, punctuated or continuous, may solely be determined by the network topology.
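The driving process described above superposes Brownian and stable Levy noise. As a minimal sketch (not the author's code; all parameter values here are arbitrary), symmetric alpha-stable increments can be drawn with the Chambers-Mallows-Stuck transform and accumulated into a driving function:

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Sample symmetric alpha-stable variates (Chambers-Mallows-Stuck)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    if alpha == 1.0:
        return np.tan(u)                           # Cauchy special case
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
n, dt = 10_000, 1e-3
kappa, alpha = 2.0, 1.5
# Driving function: Brownian part scaled by sqrt(kappa*dt), plus a stable
# part whose increments scale as dt**(1/alpha).
brownian = np.sqrt(kappa * dt) * rng.standard_normal(n)
levy = dt ** (1.0 / alpha) * symmetric_stable(alpha, n, rng)
driving = np.cumsum(brownian + levy)
```

With alpha below 2 the stable part injects the power-law jumps whose interplay with the Brownian term drives the "phase transitions" described above.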
Dittmann, Clara; Müller-Engelmann, Meike; Resick, Patricia A; Gutermann, Jana; Stangier, Ulrich; Priebe, Kathlen; Fydrich, Thomas; Ludäscher, Petra; Herzog, Julia; Steil, Regina
2017-11-01
The assessment of therapeutic adherence is essential for accurately interpreting treatment outcomes in psychotherapy research. However, such assessments are often neglected. To fill this gap, we aimed to develop and test a scale that assessed therapeutic adherence to Cognitive Processing Therapy - Cognitive Only (CPT), which was adapted for a treatment study targeting patients with post-traumatic stress disorder and co-occurring borderline personality symptoms. Two independent, trained raters assessed 30 randomly selected treatment sessions involving seven therapists and eight patients who were treated in a multicentre randomized controlled trial. The inter-rater reliability for all items and the total score yielded good to excellent results (intraclass correlation coefficient [ICC] = 0.70 to 1.00). Cronbach's α was .56 for the adherence scale. Regarding content validity, three experts confirmed the relevance and appropriateness of each item. The adherence rating scale for the adapted version of CPT is a reliable instrument that can be helpful for interpreting treatment effects, analysing possible relationships between therapeutic adherence and treatment outcomes and teaching therapeutic skills.
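The scale's internal consistency is summarized by Cronbach's α, which compares the sum of item variances with the variance of the total score. A minimal sketch of that computation on simulated rating data (not the study's ratings):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(30, 1))                   # shared latent adherence
ratings = true_score + 0.5 * rng.normal(size=(30, 8))   # 8 noisy items
alpha = cronbach_alpha(ratings)
```

The more variance the items share, the closer α gets to 1; independent items push it toward 0.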
Selective attention to sound location or pitch studied with fMRI.
Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo
2006-03-10
We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.
Clustering of financial time series with application to index and enhanced index tracking portfolio
NASA Astrophysics Data System (ADS)
Dose, Christian; Cincotti, Silvano
2005-09-01
A stochastic-optimization technique based on time series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as the result of an optimization process (asset allocation). The present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
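The stock-selection step can be illustrated with a toy k-means clustering of return series, keeping one representative stock per cluster. This is a hedged sketch on simulated returns, not the authors' time-series clustering procedure:

```python
import numpy as np

def kmeans(data, k, iters, rng):
    """Tiny k-means on the rows of `data`; returns labels and centroids."""
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = data[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(2)
returns = rng.normal(0.0, 0.01, size=(50, 250))   # 50 stocks x 250 daily returns
labels, centroids = kmeans(returns, k=5, iters=20, rng=rng)

# Select one representative per cluster: the stock closest to its centroid.
selected = []
for j in range(5):
    d = np.linalg.norm(returns - centroids[j], axis=1)
    d[labels != j] = np.inf                # restrict to members of cluster j
    selected.append(int(d.argmin()))
```

The selected subset then feeds the second step, where portfolio weights are set by optimization.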
Application of Methods of Numerical Analysis to Physical and Engineering Data.
1980-10-15
directed algorithm would seem to be called for. However, I(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm...radiation energy on the detector. Active laser systems, on the other hand, have now created the possibility for extremely narrow band-pass systems...emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain
Extended observability of linear time-invariant systems under recurrent loss of output data
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok; Halevi, Yoram
1989-01-01
Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
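A rank condition of this flavor can be sketched by stacking C A^k only for the nonconsecutive sample times that survive data loss and testing whether the stacked matrix has full column rank. An illustrative numpy check (hypothetical system matrices, not the paper's exact conditions):

```python
import numpy as np

def observable_from_samples(A, C, times):
    """Rank test: stack C @ A^k for each retained output time k."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, k) for k in times]
    return np.linalg.matrix_rank(np.vstack(blocks)) == n

# Discretized harmonic oscillator with position-only output, and a
# nonconsecutive set of retained sample times {0, 2, 5}.
A = np.eye(2) + 0.1 * np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])
print(observable_from_samples(A, C, [0, 2, 5]))  # True: rank 2 reached
```

A single retained sample ([0] alone) fails the test, matching the intuition that enough distinct output instants are needed to reconstruct the state.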
2004-03-01
definition, efficiency is the amount of the time that the processing element is gainfully employed, which is calculated by using the ratio of the... employs an interesting form of tournament selection called Pareto domination tournaments. Two members of the population are chosen at random and they...it has a set of solutions and using a template for each solution is not feasible. So the MOMGA employs a different competitive template during the
Chen, Bor-Sen; Lin, Ying-Po
2011-01-01
In the evolutionary process, the random transmission and mutation of genes provide biological diversities for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to maintain network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network’s evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between the genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563
The Effects of Aerobic Exercise and Gaming on Cognitive Performance.
Douris, Peter C; Handrakis, John P; Apergis, Demitra; Mangus, Robert B; Patel, Rima; Limtao, Jessica; Platonova, Svetlana; Gregorio, Aladino; Luty, Elliot
2018-03-01
The purpose of our study was to investigate the effects of video gaming, aerobic exercise (biking), and the combination of these two activities on the domains of cognitive performance: selective attention, processing speed, and executive functioning. The study was a randomized clinical trial with 40 subjects (mean age 23.7 ± 1.8 years) randomized to one of four thirty-minute conditions: video gaming, biking, simultaneous gaming and biking, and a control condition. Cognitive performance was measured pre and post condition using the Stroop test and Trails B test. A mixed design was utilized. While the video gaming, biking, and simultaneous gaming-and-biking conditions improved selective attention and processing speed (p < 0.05), only the biking condition improved the highest order of cognitive performance, executive function (p < 0.01). There were no changes in cognitive performance for the control condition. Previous studies have shown that if tasks approach the limits of attentional capacity, there is an increase in the overall chance for errors, known as the dual-task deficit. Simultaneous biking and gaming may have surpassed attentional capacity limits, ultimately increasing errors during the executive function tests of our cognitive performance battery. The results suggest that the fatiguing effects of a combined physically and mentally challenging task, which extend after exercise cessation, may overcome the eventual beneficial cognitive effects derived from the physical exercise.
Evolving artificial metalloenzymes via random mutagenesis
NASA Astrophysics Data System (ADS)
Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.
2018-03-01
Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.
NASA Astrophysics Data System (ADS)
Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri
2017-06-01
Vehicle Routing Problem (VRP) often occurs when manufacturers need to distribute their product to some customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve the CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to a heuristic procedure for solving a soft drink distribution. Several findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) results in a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) could further improve the performance of the heuristic method. However, BRKGA-GS tends to yield worse results compared to those obtained from the standard BRKGA.
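The core of any BRKGA is decoding a vector of random keys into a feasible solution. One plausible CVRP decoder sorts customers by their key and splits the resulting sequence whenever vehicle capacity would be exceeded; the sketch below uses that scheme with simulated demands, and its details are assumptions rather than the authors' implementation:

```python
import numpy as np

def decode(keys, demand, capacity):
    """Decode one random-key chromosome into capacity-feasible routes."""
    order = np.argsort(keys)              # key rank gives the visiting sequence
    routes, route, load = [], [], 0.0
    for c in order:
        if load + demand[c] > capacity:   # start a new vehicle when full
            routes.append(route)
            route, load = [], 0.0
        route.append(int(c))
        load += demand[c]
    routes.append(route)
    return routes

rng = np.random.default_rng(3)
demand = rng.uniform(1, 10, size=12)      # 12 customers, simulated demands
keys = rng.random(12)                     # one random key per customer
routes = decode(keys, demand, capacity=25.0)
```

Because fitness is evaluated only through the decoder, the genetic operators (crossover, mutation, the gender or insertion modifications above) can manipulate plain real-valued vectors without ever producing an infeasible route.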
Weaver, Addie; Greeno, Catherine G; Goughler, Donald H; Yarzebinski, Kathleen; Zimmerman, Tina; Anderson, Carol
2013-07-01
This study examined the effect of using the Toyota Production System (TPS) to change intake procedures on treatment timeliness within a semi-rural community mental health clinic. One hundred randomly selected cases opened the year before the change and 100 randomly selected cases opened the year after the change were reviewed. An analysis of covariance demonstrated that changing intake procedures significantly decreased the number of days consumers waited for appointments (F(1,160) = 4.9; p = .03) from an average of 11 to 8 days. The pattern of difference on treatment timeliness was significantly different between adult and child programs (F(1,160) = 4.2; p = .04), with children waiting an average of 4 days longer than adults for appointments. Findings suggest that small system level changes may elicit important changes and that TPS offers a valuable model to improve processes within community mental health settings. Results also indicate that different factors drive adult and children's treatment timeliness.
Ion channel gene expression predicts survival in glioma patients
Wang, Rong; Gurguis, Christopher I.; Gu, Wanjun; Ko, Eun A; Lim, Inja; Bang, Hyoweon; Zhou, Tong; Ko, Jae-Hong
2015-01-01
Ion channels are important regulators in cell proliferation, migration, and apoptosis. The malfunction and/or aberrant expression of ion channels may disrupt these important biological processes and influence cancer progression. In this study, we investigate the expression pattern of ion channel genes in glioma. We designate 18 ion channel genes that are differentially expressed in high-grade glioma as a prognostic molecular signature. This ion channel gene expression-based signature predicts glioma outcome in three independent validation cohorts. Interestingly, 16 of these 18 genes were down-regulated in high-grade glioma. This signature is independent of traditional clinical, molecular, and histological factors. Resampling tests indicate that the prognostic power of the signature outperforms random gene sets selected from the human genome in all the validation cohorts. More importantly, this signature performs better than random gene signatures selected from glioma-associated genes in two out of three validation datasets. This study implicates ion channels in brain cancer, thus expanding on knowledge of their roles in other cancers. Individualized profiling of ion channel gene expression serves as a superior and independent prognostic tool for glioma patients. PMID:26235283
Reference Conditions for Streams in the Grand Prairie Natural Division of Illinois
NASA Astrophysics Data System (ADS)
Sangunett, B.; Dewalt, R.
2005-05-01
As part of the Critical Trends Assessment Program (CTAP) of the Illinois Department of Natural Resources (IDNR), 12 potential reference quality stream sites in the Grand Prairie Natural Division were evaluated in May 2004. This agriculturally dominated region, located in east central Illinois, is the most highly modified in the state. The quality of these sites was assessed using a modified Hilsenhoff Biotic Index, species richness of the Ephemeroptera, Plecoptera, and Trichoptera (EPT) insect orders, and a 12-parameter Habitat Quality Index (HQI). Illinois EPA high quality fish stations, Illinois Natural History Survey insect collection data, and best professional knowledge were used to choose which streams to evaluate. For analysis, reference quality streams were compared to 37 randomly selected meandering streams and 26 randomly selected channelized streams which were assessed by CTAP between 1997 and 2001. The results showed that the reference streams exceeded the randomly selected streams in the region in both taxa richness and habitat quality. Both the random meandering sites and the reference quality sites increased in taxa richness and HQI as stream width increased. Randomly selected channelized streams had about the same taxa richness and HQI regardless of width.
Key Aspects of Nucleic Acid Library Design for in Vitro Selection
Vorobyeva, Maria A.; Davydova, Anna S.; Vorobjev, Pavel E.; Pyshnyi, Dmitrii V.; Venyaminova, Alya G.
2018-01-01
Nucleic acid aptamers capable of selectively recognizing their target molecules have nowadays been established as powerful and tunable tools for biospecific applications, be it therapeutics, drug delivery systems or biosensors. It is now generally acknowledged that in vitro selection enables one to generate aptamers to almost any target of interest. However, the success of selection and the affinity of the resulting aptamers depend to a large extent on the nature and design of an initial random nucleic acid library. In this review, we summarize and discuss the most important features of the design of nucleic acid libraries for in vitro selection such as the nature of the library (DNA, RNA or modified nucleotides), the length of a randomized region and the presence of fixed sequences. We also compare and contrast different randomization strategies and consider computer methods of library design and some other aspects. PMID:29401748
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: complete, stratified, and dynamic randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization, and the realization of random sampling and randomized grouping with SAS software.
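Complete randomization and random sampling as defined above can be sketched in a few lines (illustrative only; the article itself works through SAS):

```python
import numpy as np

def complete_randomization(n_subjects, n_groups, rng):
    """Assign subjects to near-equal groups in a random order."""
    size = -(-n_subjects // n_groups)                  # ceiling division
    base = np.repeat(np.arange(n_groups), size)[:n_subjects]
    return rng.permutation(base)                       # shuffled group labels

rng = np.random.default_rng(4)
groups = complete_randomization(24, 3, rng)            # randomized grouping
sample = rng.choice(1000, size=50, replace=False)      # random sampling
```

`groups` realizes randomized grouping (every subject gets a group, group sizes balanced), while `sample` realizes random sampling without replacement from a population of 1000 units.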
Asis, Angelli Marie Jacynth M; Lacsamana, Joanne Krisha M; Santos, Mudjekeewis D
2016-01-01
Illegal trade has greatly affected marine fish stocks, decreasing fish populations worldwide. Despite having a number of aquatic species being regulated, illegal trade still persists through the transport of dried or processed products and juvenile species trafficking. In this regard, accurate species identification of illegally traded marine fish stocks by DNA barcoding is deemed to be a more efficient method in regulating and monitoring trade than by morphological means, which is very difficult due to the absence of key morphological characters in juveniles and processed products. Here, live juvenile eels (elvers) and dried products of sharks and rays confiscated for illegal trade were identified. Twenty out of 23 (87%) randomly selected "elvers" were identified as Anguilla bicolor pacifica and 3 (13%) samples as Anguilla marmorata. On the other hand, 4 out of 11 (36%) of the randomly selected dried samples of sharks and rays were Manta birostris. The rest of the samples were identified as Alopias pelagicus, Taeniura meyeni, Carcharhinus falciformis, Himantura fai and Mobula japonica. These results confirm that wild juvenile eels and species of manta rays are still being caught in the country regardless of their protected status under Philippine and international laws. It is evident that the illegal trade of protected aquatic species is happening in the guise of dried or processed products, hence the need to put emphasis on strengthening conservation measures. This study aims to underscore the importance of accurate species identification in such cases of illegal trade and the effectiveness of DNA barcoding as a tool to do this.
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI .72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
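Recall and precision as reported are proportions with confidence intervals. A minimal sketch using a normal-approximation interval (the counts below are hypothetical, not the study's data):

```python
import math

def proportion_ci(successes, trials, z=1.96):
    """Proportion with a normal-approximation 95% confidence interval."""
    p = successes / trials
    half = z * math.sqrt(p * (1.0 - p) / trials)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 310 of 400 reference terms recovered by the system.
recall, lo, hi = proportion_ci(310, 400)
```

For small samples or proportions near 0 or 1, a Wilson interval is the usual more robust alternative to this normal approximation.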
Curtis, Susan B; Hewitt, Jeff; Macgillivray, Ross T A; Dunbar, W Scott
2009-02-01
During mineral processing, concentrates of sulfide minerals of economic interest are formed by froth flotation of fine ore particles. The method works well, but recovery and selectivity can be poor for ores with complex mineralogy. There is considerable interest in methods that improve the selectivity of this process while avoiding the high costs of using flotation chemicals. Here we show the first application of phage biotechnology to the processing of economically important minerals in ore slurries. A random heptapeptide library was screened for peptide sequences that bind selectively to the minerals sphalerite (ZnS) and chalcopyrite (CuFeS2). After several rounds of enrichment, cloned phage containing the surface peptide loops KPLLMGS and QPKGPKQ bound specifically to sphalerite. Phage containing the peptide loop TPTTYKV bound to both sphalerite and chalcopyrite. By using an enzyme-linked immunosorbent assay (ELISA), the phages were characterized as strong binders compared to wild-type phage. Specificity of binding was confirmed by immunochemical visualization of phage bound to mineral particles but not to silica (a waste mineral) or pyrite. The current study focused primarily on the isolation of ZnS-specific phage that could be utilized in the separation of sphalerite from silica. At mining sites where sphalerite and chalcopyrite are not found together in natural ores, the separation of sphalerite from silica would be an appropriate enrichment step. At mining sites where sphalerite and chalcopyrite do occur together, more specific phage would be required. This bacteriophage has the potential to be used in a more selective method of mineral separation and to be the basis for advanced methods of mineral processing.
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biased - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory.
PMID:19816580
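For context, the deterministic Price equation that the stochastic result generalizes partitions the change in mean trait value into a selection term and a transmission term (the paper's full result adds migration and stochastic terms not shown here):

```latex
\Delta \bar{z}
  = \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}}
  + \underbrace{\frac{\operatorname{E}\!\left[ w_i \, \Delta z_i \right]}{\bar{w}}}_{\text{transmission}}
```

Here $w_i$ is the fitness of individual (or class) $i$, $z_i$ its trait value, and $\bar{w}$ the mean fitness; in an open population, immigration and emigration rates enter as additional random variables alongside $w_i$.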
Marques, Yuri Bento; de Paiva Oliveira, Alcione; Ribeiro Vasconcelos, Ana Tereza; Cerqueira, Fabio Ribeiro
2016-12-15
MicroRNAs (miRNAs) are key gene expression regulators in plants and animals. Therefore, miRNAs are involved in several biological processes, making the study of these molecules one of the most relevant topics of molecular biology nowadays. However, characterizing miRNAs in vivo is still a complex task. As a consequence, in silico methods have been developed to predict miRNA loci. A common ab initio strategy to find miRNAs in genomic data is to search for sequences that can fold into the typical hairpin structure of miRNA precursors (pre-miRNAs). The current ab initio approaches, however, have selectivity issues, i.e., a high number of false positives is reported, which can lead to laborious and costly attempts to provide biological validation. This study presents an extension of the ab initio method miRNAFold, with the aim of improving selectivity through machine learning techniques, namely, a random forest combined with the SMOTE procedure, which copes with imbalanced datasets. By comparing our method, termed Mirnacle, with other important approaches in the literature, we demonstrate that Mirnacle substantially improves selectivity without compromising sensitivity. For the three datasets used in our experiments, our method achieved at least 97% sensitivity and could deliver a two-fold, 20-fold, and 6-fold increase in selectivity, respectively, compared with the best results of current computational tools. The extension of miRNAFold by the introduction of machine learning techniques significantly increases selectivity in pre-miRNA ab initio prediction, which contributes to advanced studies on miRNAs, as the need for biological validation is diminished. Hopefully, new research, such as studies of severe diseases caused by miRNA malfunction, will benefit from the proposed computational tool.
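SMOTE creates synthetic minority-class examples by interpolating between a minority point and one of its nearest minority neighbours. A minimal numpy sketch of that resampling step on simulated feature vectors (the paper pairs SMOTE with a random forest classifier, omitted here; feature values are invented for illustration):

```python
import numpy as np

def smote(X_min, n_new, k, rng):
    """Generate n_new synthetic minority samples (minimal SMOTE)."""
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]       # k nearest, skipping itself
        j = rng.choice(neighbours)
        lam = rng.random()                        # interpolation weight
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(5)
X_majority = rng.normal(0, 1, size=(200, 4))      # e.g. non-pre-miRNA features
X_minority = rng.normal(3, 1, size=(20, 4))       # true pre-miRNA features
X_new = smote(X_minority, n_new=180, k=5, rng=rng)
X_balanced = np.vstack([X_minority, X_new])       # minority now matches 200
```

Training the classifier on the balanced set discourages it from simply predicting the majority class, which is exactly the selectivity problem described above.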
Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda
2012-01-01
Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775
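Bayesian adaptive randomization of the kind described can be illustrated with a simple Thompson-sampling allocation over Beta posteriors. This is a generic sketch with hypothetical response rates, not the trial's actual algorithm, utility index, or decision rules:

```python
import numpy as np

rng = np.random.default_rng(6)
true_response = [0.2, 0.35, 0.6]          # hypothetical per-arm response rates
successes = np.zeros(3)
failures = np.zeros(3)
counts = np.zeros(3, dtype=int)

for _ in range(600):
    # Thompson sampling: draw from each arm's Beta posterior, assign the
    # next patient to the arm with the highest sampled response probability.
    draws = rng.beta(successes + 1, failures + 1)
    arm = int(draws.argmax())
    outcome = rng.random() < true_response[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome
    counts[arm] += 1
```

As evidence accumulates, allocation concentrates on the best-performing arm, which is the qualitative behaviour an adaptive dose-finding stage exploits before dose selection.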
Last night I had the strangest dream: Varieties of rational thought processes in dream reports.
Wolman, Richard N; Kozmová, Miloslava
2007-12-01
From the neurophysiological perspective, thinking in dreaming and the quality of dream thought have been considered hallucinatory, bizarre, illogical, improbable, or even impossible. This empirical phenomenological research concentrates on testing whether dream thought can be defined as rational in the sense of an intervening mental process between sensory perception and the creation of meaning, leading to a conclusion or to taking action. From 10 individual dream journals of male participants aged 22-59 years and female participants aged 25-49 years, we delimited four dreams per journal and randomly selected five thought units from each dream for scoring. The units provided a base for testing a hypothesis that the thought processes of dream construction are rational. The results support the hypothesis and demonstrate that eight fundamental rational thought processes can be applied to the dreaming process.
Evidence for selective executive function deficits in ecstasy/polydrug users.
Fisk, J E; Montgomery, C
2009-01-01
Previous research has suggested that the separate aspects of executive functioning are differentially affected by ecstasy use. Although the inhibition process appears to be unaffected by ecstasy use, it is unclear whether this is true of heavy users under conditions of high demand. Tasks loading on the updating process have been shown to be adversely affected by ecstasy use. However, it remains unclear whether the deficits observed reflect the executive aspects of the tasks or whether they are domain general in nature affecting both verbal and visuo-spatial updating. Fourteen heavy ecstasy users (mean total lifetime use 1000 tablets), 39 light ecstasy users (mean total lifetime use 150 tablets) and 28 non-users were tested on tasks loading on the inhibition executive process (random letter generation) and the updating component process (letter updating, visuo-spatial updating and computation span). Heavy users were not impaired in random letter generation even under conditions designed to be more demanding. Ecstasy-related deficits were observed on all updating measures and were statistically significant for two of the three measures. Following controls for various aspects of cannabis use, statistically significant ecstasy-related deficits were obtained on all three updating measures. It was concluded that the inhibition process is unaffected by ecstasy use even among heavy users. By way of contrast, the updating process appears to be impaired in ecstasy users with the deficit apparently domain general in nature.
NASA Astrophysics Data System (ADS)
Tehrany, M. Sh.; Jones, S.
2017-10-01
This paper explores the influence of the extent and density of the inventory data on the final flood susceptibility map, using the extreme 2011 Brisbane flood event as the case study. Logistic Regression (LR) was selected for the modelling because it is a well-known algorithm in natural hazard modelling, owing to its ease of interpretation, rapid processing time and accurate measurement approach. The LR model was applied using both polygon and point formats of the inventory data. Random point samples of 1000, 700, 500, 300, 100 and 50 were selected, and susceptibility mapping was undertaken using each group of random points. The resultant maps were assessed visually and statistically using the Area Under the Curve (AUC) method. The prediction rates measured for the susceptibility maps produced by the polygon format and by 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 % respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, increasing the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
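The workflow described in this abstract (fit a logistic-regression model on a random subset of inventory points, score the whole data set, evaluate the prediction rate with AUC) can be sketched roughly as below. This is a minimal numpy-only illustration on synthetic data; the predictors, sample sizes, and gradient-descent fit are illustrative stand-ins, not the study's data or software.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression (illustrative stand-in
    for the LR model used in the study)."""
    Xb = np.column_stack([np.ones(len(X)), X])            # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w += lr * Xb.T @ (y - p) / len(y)                 # ascend log-likelihood
    return w

def predict_proba(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y_true.sum()
    n0 = len(y_true) - n1
    return (ranks[y_true == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Synthetic "conditioning factors" standing in for the flood predictors;
# class membership (flooded / not flooded) depends on their sum plus noise.
n = 300
X = rng.normal(size=(n, 2))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Mimic the design: refit using progressively fewer inventory points and
# evaluate each model's prediction rate (AUC) against the full data set.
for n_pts in (200, 100, 50):
    idx = rng.choice(n, size=n_pts, replace=False)
    w = fit_logistic(X[idx], y[idx])
    print(n_pts, round(auc(y, predict_proba(w, X)), 2))
```

In practice the paper's point is visible even in this toy: AUC degrades as the inventory subset shrinks, so the sampling extent matters as much as the classifier.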
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Odegård, J; Klemetsdal, G; Heringstad, B
2005-04-01
Several selection criteria for reducing incidence of mastitis were developed from a random regression sire model for test-day somatic cell score (SCS). For comparison, sire transmitting abilities were also predicted based on a cross-sectional model for lactation mean SCS. Only first-crop daughters were used in genetic evaluation of SCS, and the different selection criteria were compared based on their correlation with incidence of clinical mastitis in second-crop daughters (measured as mean daughter deviations). Selection criteria were predicted based on both complete and reduced first-crop daughter groups (261 or 65 daughters per sire, respectively). For complete daughter groups, predicted transmitting abilities at around 30 d in milk showed the best predictive ability for incidence of clinical mastitis, closely followed by average predicted transmitting abilities over the entire lactation. Both of these criteria were derived from the random regression model. These selection criteria improved accuracy of selection by approximately 2% relative to a cross-sectional model. However, for reduced daughter groups, the cross-sectional model yielded increased predictive ability compared with the selection criteria based on the random regression model. This result may be explained by the cross-sectional model being more robust, i.e., less sensitive to precision of (co)variance components estimates and effects of data structure.
ERIC Educational Resources Information Center
Kariuki, Patrick N. K.; Bush, Elizabeth Danielle
2008-01-01
The purpose of this study was to examine the effects of Total Physical Response by Storytelling and the traditional teaching method on a foreign language in a selected high school. The sample consisted of 30 students who were randomly selected and randomly assigned to experimental and control group. The experimental group was taught using Total…
Wampler, Peter J; Rediske, Richard R; Molla, Azizur R
2013-01-18
A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel, which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped, and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed, and 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field-locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent, and only rarely was local knowledge required to identify and locate households.
This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.
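The core randomization step of this record (537 mapped homes reduced to a reproducible random subset of 96 for field visits) is simple to sketch. The coordinates below are hypothetical placeholders, not the study's mapped locations; the point is only that sampling without replacement from the digitized home list replaces the Microsoft Excel step.

```python
import random

random.seed(42)  # fixed seed so the generated field list is reproducible

# Hypothetical stand-ins for the 537 homes digitized from Google Earth imagery:
# (id, latitude, longitude) triples as exported from the GIS layer.
homes = [(i, 18.8 + random.random() * 0.1, -72.5 + random.random() * 0.1)
         for i in range(537)]

# The spreadsheet step in the study reduces the mapped homes to a random
# subset; random.sample draws without replacement, so no home appears twice.
survey_sites = random.sample(homes, 96)

print(len(survey_sites), len({h[0] for h in survey_sites}))  # 96 96
```

The selected list would then be exported to the handheld GPS units for field location, as the abstract describes.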
Ultsch, Alfred; Kringel, Dario; Kalso, Eija; Mogil, Jeffrey S; Lötsch, Jörn
2016-12-01
The increasing availability of "big data" enables novel research approaches to chronic pain while also requiring novel techniques for data mining and knowledge discovery. We used machine learning to combine the knowledge about n = 535 genes identified empirically as relevant to pain with the knowledge about the functions of thousands of genes. Starting from an accepted description of chronic pain as displaying systemic features described by the terms "learning" and "neuronal plasticity," a functional genomics analysis proposed that among the functions of the 535 "pain genes," the biological processes "learning or memory" (P = 8.6 × 10) and "nervous system development" (P = 2.4 × 10) are statistically significantly overrepresented as compared with the annotations to these processes expected by chance. After establishing that the hypothesized biological processes were among important functional genomics features of pain, a subset of n = 34 pain genes was found to be annotated with both Gene Ontology terms. Published empirical evidence supporting their involvement in chronic pain was identified for almost all these genes, including 1 gene identified in March 2016 as being involved in pain. By contrast, such evidence was virtually absent in a randomly selected set of 34 other human genes. Hence, the present computational functional genomics-based method can be used for candidate gene selection, providing an alternative to established methods.
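The overrepresentation statistic behind claims like those above is typically a hypergeometric tail probability: given a genome of N genes of which K carry an annotation, how surprising is it to see k or more annotated genes in a set of n? The counts below are illustrative, not the paper's actual annotation numbers.

```python
from math import comb

def hypergeom_tail(N, K, n, k):
    """P(X >= k): the chance of drawing k or more annotated genes when
    sampling n genes without replacement from a genome of N genes,
    K of which carry the annotation (an overrepresentation P value)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Illustrative numbers only: a 20,000-gene genome, 400 genes annotated to a
# term, and 34 of the 535 "pain genes" carrying that annotation. The expected
# overlap by chance is 535 * 400 / 20000 ≈ 10.7 genes, so 34 is a big excess.
p = hypergeom_tail(20000, 400, 535, 34)
print(p < 1e-6)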
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
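The idea of "inverting" an unequal-probability design can be sketched in miniature. The snippet below is a deliberately simplified weighted Bayesian bootstrap, not the authors' actual finite population Bayesian bootstrap algorithm: it draws resampling probabilities from a Dirichlet centered on the normalized survey weights and resamples up to the population size, so the synthetic population's mean recovers the population mean despite a biased sample.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthetic_population(values, weights, N):
    """Simplified weighted Bayesian-bootstrap sketch: draw unit-resampling
    probabilities from a Dirichlet centered on the normalized survey
    weights, then resample the observed units up to population size N.
    In expectation this undoes unequal selection probabilities, so the
    synthetic population can be analyzed like a simple random sample."""
    p = rng.dirichlet(np.asarray(weights, dtype=float))
    return np.asarray(values)[rng.choice(len(values), size=N, p=p)]

# Toy unequal-probability design: units with y > 10 are oversampled, so the
# raw sample mean is biased upward; weights = inverse selection probabilities.
pop = rng.normal(10.0, 2.0, size=50_000)
sel_p = 0.002 + 0.01 * (pop > 10)
take = rng.random(50_000) < sel_p
sample, weights = pop[take], 1.0 / sel_p[take]

synth = synthetic_population(sample, weights, N=10_000)
print(round(sample.mean(), 2), round(synth.mean(), 2))  # biased vs corrected
```

The real method additionally handles stratification and clustering, which this sketch ignores.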
Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.
2017-01-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs. PMID:29200608
The role of color and attention-to-color in mirror-symmetry perception.
Gheorghiu, Elena; Kingdom, Frederick A A; Remkes, Aaron; Li, Hyung-Chul O; Rainville, Stéphane
2016-07-11
The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) 'segregated' - symmetric blobs were of one color, random blobs of the other color(s); (2) 'random-segregated' - as above but with the symmetric color randomly selected on each trial; (3) 'non-segregated' - symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) 'anti-symmetric' - symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.
The role of color and attention-to-color in mirror-symmetry perception
Gheorghiu, Elena; Kingdom, Frederick A. A.; Remkes, Aaron; Li, Hyung-Chul O.; Rainville, Stéphane
2016-01-01
The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) ‘segregated’ – symmetric blobs were of one color, random blobs of the other color(s); (2) ‘random-segregated’ – as above but with the symmetric color randomly selected on each trial; (3) ‘non-segregated’ – symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) ‘anti-symmetric’ – symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective. PMID:27404804
NASA Astrophysics Data System (ADS)
Sadeghimeresht, E.; Markocsan, N.; Nylén, P.
2016-12-01
Selection of the thermal spray process is the most important step toward a proper coating solution for a given application, as important coating characteristics such as adhesion and microstructure are highly dependent on it. In the present work, a process-microstructure-properties-performance correlation study was performed to determine the main characteristics and corrosion performance of coatings produced by different thermal spray techniques, namely high-velocity air fuel (HVAF), high-velocity oxy fuel (HVOF), and atmospheric plasma spraying (APS). Previously optimized HVOF and APS process parameters were used to deposit Ni, NiCr, and NiAl coatings, which were compared with HVAF-sprayed coatings produced with randomly selected process parameters. As the HVAF process presented the best coating characteristics and corrosion behavior, a few process parameters, such as feed rate and standoff distance (SoD), were investigated to systematically optimize the HVAF coatings in terms of low porosity and high corrosion resistance. The Ni and NiAl coatings with lower porosity and better corrosion behavior were obtained at an average SoD of 300 mm and a feed rate of 150 g/min. The NiCr coating sprayed at a SoD of 250 mm and a feed rate of 75 g/min showed the highest corrosion resistance among all investigated samples.
Randomization Methods in Emergency Setting Trials: A Descriptive Review
ERIC Educational Resources Information Center
Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William
2016-01-01
Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…
Middle Level Practices in European International and Department of Defense Schools.
ERIC Educational Resources Information Center
Waggoner, V. Christine; McEwin, C. Kenneth
1993-01-01
Discusses results of a 1989-90 survey of 70 randomly selected international schools and 70 randomly selected Department of Defense Schools in Europe. Programs and practices surveyed included enrollments, grade organization, curriculum and instructional plans, core subjects, grouping patterns, exploratory courses, advisory programs, and scheduling.…
Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.
What to expect from an evolutionary hypothesis for a human disease: The case of type 2 diabetes.
Watve, Milind; Diwekar-Joshi, Manawa
2016-10-01
Evolutionary medicine promises to bring a conceptual revolution to medicine. However, as yet the field does not have the same theoretical rigour as many other fields in evolutionary studies. We discuss here, with reference to type 2 diabetes mellitus (T2DM), what role an evolutionary hypothesis should play in the development of thinking in medicine. Starting with the thrifty gene hypothesis, evolutionary thinking in T2DM has undergone several transitions, modifications and refinements of the thrift family of hypotheses. In addition, alternative hypotheses independent of thrift have also been suggested. However, most hypotheses look at partial pictures and make selective use of supportive data while ignoring inconvenient findings. Most hypotheses consider only a superficial picture and avoid getting into the intricacies of the underlying molecular, neuronal and physiological processes. Very few hypotheses have suggested clinical implications, and none of them have been tested with randomized clinical trials. In the meanwhile, the concepts in the pathophysiology of T2DM are undergoing radical changes, and evolutionary hypotheses need to take them into account. We suggest an approach and a set of criteria to evaluate the relative merits of the alternative hypotheses. A number of hypotheses are likely to fail when critically evaluated against these criteria. It is possible that more than one selective process is at work in the evolution of the propensity to T2DM, but the intercompatibility of the alternative selective forces and their relative contributions need to be examined. The approach we describe could potentially lead to a sound evolutionary theory that is clinically useful and testable by randomized controlled clinical trials. Copyright © 2016 Elsevier GmbH. All rights reserved.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk on the adjacency graph of the data to travel between two nodes and return. Commute time has the useful property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
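The commute time described above has a closed form through the Moore-Penrose pseudoinverse of the graph Laplacian, which the following numpy sketch computes on a toy graph (two triangles joined by a bridge). The graph and numbers are illustrative, not the paper's experiments.

```python
import numpy as np

def commute_times(A):
    """All-pairs commute times of a random walk on a weighted graph:
        C(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij),
    where L+ is the pseudoinverse of the Laplacian and vol(G) is the
    total degree. Many short paths between i and j => small C(i, j)."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    Lp = np.linalg.pinv(L)
    diag = np.diag(Lp)
    return d.sum() * (diag[:, None] + diag[None, :] - 2 * Lp)

# Two triangles joined by a single bridge edge: nodes within a triangle are
# linked by several short paths, so their commute time is small, while
# cross-bridge pairs are "far apart" in commute-time distance.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
C = commute_times(A)
print(C[0, 1] < C[0, 4])  # within-triangle pair closer than cross-bridge pair
```

A constraint-querying scheme like the paper's would then prefer pairs whose commute-time distance is ambiguous, where a must-link/cannot-link answer is most informative.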
A novel attack method about double-random-phase-encoding-based image hiding method
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen
2018-03-01
Using optical image processing techniques, this paper proposes a novel text encryption and hiding method based on the double-random-phase-encoding technique. First, the secret message is transformed into a two-dimensional array. The higher bits of the elements in the array are filled with the bit stream of the secret text, while the lower bits store specific values. The transformed array is then encoded with the double-random-phase-encoding technique. Finally, the encoded array is embedded in a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
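The double-random-phase-encoding (DRPE) core of this scheme is easy to simulate digitally with FFTs: one random phase mask in the spatial domain, a second in the frequency domain. The sketch below shows only that the encoding is invertible with both keys; the bit-plane packing and host-image embedding steps of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def drpe_encrypt(img, phase1, phase2):
    """Double random phase encoding: multiply by a random phase mask in
    the spatial domain, Fourier transform, apply a second mask in the
    frequency domain, and inverse transform. Output is complex and
    noise-like."""
    x = img * np.exp(2j * np.pi * phase1)
    return np.fft.ifft2(np.fft.fft2(x) * np.exp(2j * np.pi * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    # Invert the steps with the conjugate masks (the two keys).
    X = np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2)
    return np.fft.ifft2(X) * np.exp(-2j * np.pi * phase1)

img = rng.random((32, 32))                 # stand-in for the 2-D text array
p1, p2 = rng.random((32, 32)), rng.random((32, 32))
cipher = drpe_encrypt(img, p1, p2)
recovered = drpe_decrypt(cipher, p1, p2).real
print(np.allclose(recovered, img))         # exact recovery with both keys
```

Without the correct masks the decryption output stays noise-like, which is the property the attack analysis in the paper targets.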
Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils
Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng
2017-01-01
This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forest (RF), artificial neural network (ANN), and radial basis function-based and linear function-based support vector machines (RBF- and LF-SVM), were employed to establish diagnosis models. The model accuracies were evaluated and compared by using overall accuracies (OAs). The statistical significance of the differences between models was evaluated by using McNemar’s test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) had no statistical difference in diagnosis accuracies (Z < 1.96, p < 0.05). These results indicated that it is feasible to diagnose soil arsenic contamination using reflectance spectroscopy. The appropriate combination of multivariate methods was important to improve diagnosis accuracies. PMID:28471412
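The pre-process-then-classify pipeline above can be sketched with synthetic spectra. This numpy-only toy uses standard normal variate (SNV) plus a crude gradient-based first derivative in place of a Savitzky-Golay derivative, and a nearest-centroid classifier in place of RF/ANN/SVM; the spectra, dip position, and accuracies are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def snv(spectra):
    """Standard normal variate: center and scale each spectrum
    individually, removing multiplicative scatter/baseline differences."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def first_derivative(spectra):
    # Crude stand-in for a Savitzky-Golay first derivative.
    return np.gradient(spectra, axis=1)

# Synthetic reflectance spectra: "contaminated" soils get an absorption dip
# plus a random multiplicative baseline that SNV should remove.
wav = np.linspace(0.0, 1.0, 200)
def make(n, contaminated):
    base = rng.uniform(0.8, 1.2, size=(n, 1))           # scatter effect
    dip = 0.3 * np.exp(-((wav - 0.5) ** 2) / 0.002) if contaminated else 0.0
    return base * (1.0 - dip + rng.normal(0.0, 0.01, size=(n, len(wav))))

X = np.vstack([make(50, False), make(50, True)])
y = np.array([0] * 50 + [1] * 50)

# Pre-process, then classify by distance to the two class centroids.
Xp = first_derivative(snv(X))
centroids = np.stack([Xp[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Xp[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
print((pred == y).mean())   # overall accuracy (OA) on the training data
```

A faithful replication would add the other pre-treatments, RELIEF/PCA feature selection, the four classifiers, and McNemar's test between them.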
Keil, Andreas; Moratti, Stephan; Sabatinelli, Dean; Bradley, Margaret M; Lang, Peter J
2005-08-01
Affectively arousing visual stimuli have been suggested to automatically attract attentional resources in order to optimize sensory processing. The present study crosses the factors of spatial selective attention and affective content, and examines the relationship between instructed (spatial) and automatic attention to affective stimuli. In addition to response times and error rate, electroencephalographic data from 129 electrodes were recorded during a covert spatial attention task. This task required silent counting of random-dot targets embedded in a 10 Hz flicker of colored pictures presented to both hemifields. Steady-state visual evoked potentials (ssVEPs) were obtained to determine amplitude and phase of electrocortical responses to pictures. An increase of ssVEP amplitude was observed as an additive function of spatial attention and emotional content. Statistical parametric mapping of this effect indicated occipito-temporal and parietal cortex activation contralateral to the attended visual hemifield in ssVEP amplitude modulation. This difference was most pronounced during selection of the left visual hemifield, at right temporal electrodes. In line with this finding, phase information revealed accelerated processing of aversive arousing, compared to affectively neutral pictures. The data suggest that affective stimulus properties modulate the spatiotemporal process along the ventral stream, encompassing amplitude amplification and timing changes of posterior and temporal cortex.
Wei, Ling; Li, Yingjie; Yang, Xiaoli; Xue, Qing; Wang, Yuping
2015-10-01
The present study evaluated the topological properties of whole-brain networks using graph theoretical concepts and investigated the time-evolution characteristics of brain networks in mild cognitive impairment (MCI) patients during a selective attention task. Electroencephalography (EEG) activity was recorded in 10 MCI patients and 17 healthy subjects while they performed a color match task. We calculated the phase synchrony index between each possible pair of EEG channels in the alpha and beta frequency bands and analyzed the local interconnectedness, overall connectedness and small-world characteristics of the brain networks at different degrees for the two groups. Relative to healthy normal controls, the cortical networks of MCI patients tend to shift toward randomization. The lower σ of MCI patients suggested a further loss of the small-world attribute during both active and resting states. Our results provide evidence for the functional disconnection of brain regions in MCI. Furthermore, we found that the properties of cortical networks could reflect the processing of conflict information in the selective attention task. The human brain tends toward a more regular and efficient neural architecture in the late stage of information processing. In addition, the processing of conflict information requires stronger information integration and transfer between cortical areas. Copyright © 2015 Elsevier B.V. All rights reserved.
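A common phase synchrony index between channel pairs, and plausibly the kind used here, is the phase-locking value (PLV): extract instantaneous phases from the analytic signal and average the phase-difference vectors. The sketch below builds the analytic signal with a numpy-only FFT construction (a stand-in for `scipy.signal.hilbert`); the signals are synthetic, not EEG.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (numpy-only hilbert-transform sketch)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: 1 for perfectly phase-locked channels,
    near 0 for channels with independent phases."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return abs(np.exp(1j * dphi).mean())

rng = np.random.default_rng(5)
t = np.linspace(0, 2, 1000, endpoint=False)
alpha = np.sin(2 * np.pi * 10 * t)              # 10 Hz "alpha" channel
locked = np.sin(2 * np.pi * 10 * t + 0.8)       # same rhythm, constant lag
noise = rng.normal(size=1000)                   # unrelated channel

print(round(plv(alpha, locked), 2), round(plv(alpha, noise), 2))
```

Thresholding a matrix of such pairwise PLVs yields the graph whose clustering, path length, and small-world σ the study compares between groups.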
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
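The random-versus-contagious contrast in this abstract is the classic Poisson versus compound Poisson comparison, which is easy to simulate: at equal mean capture rates, the bout model inflates the variance-to-mean ratio above 1. All rates and bout sizes below are invented for illustration, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(11)

def poisson_captures(rate, T, trials):
    """Simple renewal model: the capture count in [0, T] is Poisson."""
    return rng.poisson(rate * T, size=trials)

def compound_captures(bout_rate, meal_mean, T, trials):
    """Feeding-bout model: bouts arrive as a Poisson process and each
    bout yields a geometric number of salmon, clustering captures."""
    bouts = rng.poisson(bout_rate * T, size=trials)
    return np.array([rng.geometric(1 / meal_mean, size=b).sum()
                     for b in bouts])

T, trials = 24.0, 20_000
simple = poisson_captures(rate=0.5, T=T, trials=trials)
clustered = compound_captures(bout_rate=0.25, meal_mean=2.0, T=T, trials=trials)

# Same mean captures per day, but clustering inflates the variance:
# variance/mean = 1 for Poisson, > 1 for the compound (contagious) process.
print(round(simple.mean(), 1), round(clustered.mean(), 1))
print(round(simple.var() / simple.mean(), 2),
      round(clustered.var() / clustered.mean(), 2))
```

Comparing an observed dispersion index (or full gut-content distributions) against both simulated models is the kind of test that distinguished clustered from random capture timing.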
Workshop on Incomplete Network Data Held at Sandia National Labs – Livermore
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soundarajan, Sucheta; Wendt, Jeremy D.
2016-06-01
While network analysis is applied in a broad variety of scientific fields (including physics, computer science, biology, and the social sciences), how networks are constructed and the resulting bias and incompleteness have drawn more limited attention. For example, in biology, gene networks are typically developed via experiment -- many actual interactions are likely yet to be discovered. In addition to this incompleteness, the data-collection processes can introduce significant bias into the observed network datasets. For instance, if you observe part of the World Wide Web network through a classic random walk, then high degree nodes are more likely to be found than if you had selected nodes at random. Unfortunately, such incomplete and biasing data collection methods must often be used.
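The random-walk sampling bias mentioned above follows from the walk's stationary distribution, which visits node i in proportion to its degree. A small simulation on a hub-plus-chain graph (an invented toy, not any real web crawl) makes the effect concrete:

```python
import numpy as np

rng = np.random.default_rng(2)

# A small "web-like" graph: one hub connected to everyone, plus a sparse chain.
n = 200
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1                       # hub node 0
for i in range(1, n - 1):
    A[i, i + 1] = A[i + 1, i] = 1             # chain through the leaves
deg = A.sum(axis=1)

def random_walk(A, steps):
    """Classic unweighted random walk: hop to a uniformly chosen neighbor."""
    node, visited = 0, []
    for _ in range(steps):
        node = rng.choice(np.flatnonzero(A[node]))
        visited.append(node)
    return np.array(visited)

walk = random_walk(A, 5000)
uniform = rng.integers(0, n, size=5000)

# The walk observes high-degree nodes far more often than uniform sampling:
# its stationary probability of node i is deg(i) / sum(deg).
print(round(deg[walk].mean(), 1), round(deg[uniform].mean(), 1))
```

The mean degree of walk-sampled nodes ends up an order of magnitude above the uniform-sample mean, which is exactly the bias that incomplete-network analyses must correct for.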
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. 
A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
Adaptive Electronic Camouflage Using Texture Synthesis
2012-04-01
algorithm begins by computing the GLCMs, G_IN and G_OUT, of the input image (e.g., image of local environment) and output image (randomly generated...respectively. The algorithm randomly selects a pixel from the output image and cycles its gray-level through all values. For each value, G_OUT is updated...The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between G_IN and G_OUT. Without selecting a
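The greedy pixel-cycling procedure in this snippet can be sketched on a toy example. The horizontal-only co-occurrence matrix, the stripe "input" texture, and names like `g_in` and `err` are illustrative assumptions, not the report's actual implementation:

```python
import random

random.seed(1)
LEVELS, SIZE = 4, 16  # small gray-level count and image size for speed

def glcm(img):
    """Normalized gray-level co-occurrence matrix for horizontal neighbors."""
    counts = [[0] * LEVELS for _ in range(LEVELS)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def err(g1, g2):
    """Sum of squared differences between two GLCMs."""
    return sum((a - b) ** 2
               for r1, r2 in zip(g1, g2) for a, b in zip(r1, r2))

# "Input" texture: vertical stripes.  "Output" starts as random noise.
g_in = glcm([[x % LEVELS for x in range(SIZE)] for _ in range(SIZE)])
out = [[random.randrange(LEVELS) for _ in range(SIZE)] for _ in range(SIZE)]

before = err(glcm(out), g_in)
for _ in range(2000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    # Cycle the pixel through all gray levels; keep the error minimizer.
    best_v, best_e = out[r][c], err(glcm(out), g_in)
    for v in range(LEVELS):
        out[r][c] = v
        e = err(glcm(out), g_in)
        if e < best_e:
            best_v, best_e = v, e
    out[r][c] = best_v
after = err(glcm(out), g_in)
print(f"GLCM error: {before:.4f} -> {after:.4f}")
```

Each update is a coordinate-descent step: since the pixel's current value is among the candidates, the GLCM error can only decrease or stay the same, so the synthesized texture's statistics converge toward those of the input.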
Music intervention during daily weaning trials-A 6 day prospective randomized crossover trial.
Liang, Zhan; Ren, Dianxu; Choi, JiYeon; Happ, Mary Beth; Hravnak, Marylyn; Hoffman, Leslie A
2016-12-01
To examine the effect of patient-selected music intervention during daily weaning trials for patients on prolonged mechanical ventilation. Using a crossover repeated measures design, patients were randomized to music vs no music on the first intervention day. Provision of music was alternated for 6 days, resulting in 3 music and 3 no music days. During weaning trials on music days, data were obtained for 30 min prior to music listening and continued for 60 min while patients listened to selected music (total 90 min). On no music days, data were collected for 90 min. Outcome measures were heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2), blood pressure (BP), dyspnea and anxiety assessed with a visual analog scale (VAS-D, VAS-A), and weaning duration (mean hours per day on music and no music days). Of 31 patients randomized, 23 completed the 6-day intervention. When comparisons were made between the 3 music and 3 no music days, there were significant decreases in RR and VAS-D and a significant increase in daily weaning duration on music days (p<0.05). A multivariate mixed-effects model analysis that included patients who completed ≥2 days of the intervention (n=28) demonstrated significant decreases in HR, RR, VAS-A, and VAS-D and a significant increase in daily weaning duration on music days (p<0.05). Providing patient-selected music during daily weaning trials is a simple, low-cost, potentially beneficial intervention for patients on prolonged mechanical ventilation. Further study is indicated to test the ability of this intervention to promote weaning success and benefits earlier in the weaning process. Copyright © 2016 Elsevier Ltd. All rights reserved.
A dose optimization method for electron radiotherapy using randomized aperture beams
NASA Astrophysics Data System (ADS)
Engel, Konrad; Gauer, Tobias
2009-09-01
The present paper describes the entire optimization process of creating a radiotherapy treatment plan for advanced electron irradiation. Special emphasis is devoted to the selection of beam incidence angles and beam energies as well as to the choice of appropriate subfields generated by a refined version of intensity segmentation and a novel random aperture approach. The algorithms have been implemented in a stand-alone programme using dose calculations from a commercial treatment planning system. For this study, the treatment planning system Pinnacle from Philips has been used and connected to the optimization programme using an ASCII interface. Dose calculations in Pinnacle were performed by Monte Carlo simulations for a remote-controlled electron multileaf collimator (MLC) from Euromechanics. As a result, treatment plans for breast cancer patients could be significantly improved when using randomly generated aperture beams. The combination of beams generated through segmentation and randomization achieved the best results in terms of target coverage and sparing of critical organs. The treatment plans could be further improved by use of a field reduction algorithm. Without a relevant loss in dose distribution, the total number of MLC fields and monitor units could be reduced by up to 20%. In conclusion, using randomized aperture beams is a promising new approach in radiotherapy and exhibits potential for further improvements in dose optimization through a combination of randomized electron and photon aperture beams.
Student conceptions of natural selection and its role in evolution
NASA Astrophysics Data System (ADS)
Bishop, Beth A.; Anderson, Charles W.
Pretests and posttests on the topic of evolution by natural selection were administered to students in a college nonmajors' biology course. Analysis of test responses revealed that most students understood evolution as a process in which species respond to environmental conditions by changing gradually over time. Student thinking differed from accepted biological theory in that (a) changes in traits were attributed to a need-driven adaptive process rather than random genetic mutation and sexual recombination, (b) no role was assigned to variation in traits within a population or differences in reproductive success, and (c) traits were seen as gradually changing in all members of a population. Although students had taken an average of 1.9 years of previous biology courses, performance on the pretest was uniformly low. There was no relationship between the amount of previous biology taken and either pretest or posttest performance. Belief in the truthfulness of evolutionary theory was also unrelated to either pretest or posttest performance. Course instruction using specially designed materials was moderately successful in improving students' understanding of the evolutionary process.
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex
Lindsay, Grace W.
2017-01-01
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463
NASA Astrophysics Data System (ADS)
Wang, Xiao; Burghardt, Dirk
2018-05-01
This paper presents a new strategy for the generalization of discrete area features using a stroke grouping method and polarization transportation selection. The strokes are constructed from a refined proximity graph of the area features, with the refinement controlled by four constraints to meet different grouping requirements. Area features that belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether or not the area features are related to strokes. Area features that belong to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the operator for the generalization work. The remaining area features, which are not related by strokes, are distributed randomly and discretely, and selection is chosen for the generalization operation. To retain their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception is preserved as before.
Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A
2015-07-08
Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes on conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.). 
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features. Copyright © 2015 the authors 0270-6474/15/359912-08$15.00/0.
The temporal distribution of directional gradients under selection for an optimum.
Chevin, Luis-Miguel; Haller, Benjamin C
2014-12-01
Temporal variation in phenotypic selection is often attributed to environmental change causing movements of the adaptive surface relating traits to fitness, but this connection is rarely established empirically. Fluctuating phenotypic selection can be measured by the variance and autocorrelation of directional selection gradients through time. However, the dynamics of these gradients depend not only on environmental changes altering the fitness surface, but also on evolution of the phenotypic distribution. Therefore, it is unclear to what extent variability in selection gradients can inform us about the underlying drivers of their fluctuations. To investigate this question, we derive the temporal distribution of directional gradients under selection for a phenotypic optimum that is either constant or fluctuates randomly in various ways in a finite population. Our analytical results, combined with population- and individual-based simulations, show that although some characteristic patterns can be distinguished, very different types of change in the optimum (including a constant optimum) can generate similar temporal distributions of selection gradients, making it difficult to infer the processes underlying apparent fluctuating selection. Analyzing changes in phenotype distributions together with changes in selection gradients should prove more useful for inferring the mechanisms underlying estimated fluctuating selection. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, a user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
The Effect of CAI on Reading Achievement.
ERIC Educational Resources Information Center
Hardman, Regina
A study determined whether computer assisted instruction (CAI) had an effect on students' reading achievement. Subjects were 21 randomly selected fourth-grade students at D. S. Wentworth Elementary School on the south side of Chicago in a low-income neighborhood who received a year's exposure to a CAI program, and 21 randomly selected students at…
Access to Higher Education by the Luck of the Draw
ERIC Educational Resources Information Center
Stone, Peter
2013-01-01
Random selection is a fair way to break ties between applicants of equal merit seeking admission to institutions of higher education (with "merit" defined here in terms of the intrinsic contribution higher education would make to the applicant's life). Opponents of random selection commonly argue that differences in strength between…
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
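As a quick reference for the five criteria compared in this record, a sketch using their standard textbook formulas; the log-likelihood, parameter count, and sample size below are made-up illustrative values:

```python
import math

def criteria(log_lik, k, n):
    """Standard information criteria for a fitted model.

    log_lik: maximized log-likelihood; k: number of estimated
    parameters; n: sample size.
    """
    aic  = -2 * log_lik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)        # small-sample correction
    bic  = -2 * log_lik + k * math.log(n)
    caic = -2 * log_lik + k * (math.log(n) + 1)       # consistent AIC
    hqic = -2 * log_lik + 2 * k * math.log(math.log(n))
    return {"AIC": aic, "AICC": aicc, "BIC": bic, "CAIC": caic, "HQIC": hqic}

print(criteria(log_lik=-420.5, k=6, n=300))
```

The criteria differ only in how heavily they penalize extra parameters (BIC and CAIC grow with log n, so they favor smaller models in large samples), which is what drives their different identification rates in studies like this one.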
1977 Survey of the American Professoriate. Technical Report.
ERIC Educational Resources Information Center
Ladd, Everett Carll, Jr.; And Others
The development and data validation of the 1977 Ladd-Lipset national survey of the American professoriate are described. The respondents were selected from a random sample of colleges and universities and from a random sample of individual faculty members from the universities. The 158 institutions in the 1977 survey were selected from 2,406…
Site Selection in Experiments: A Follow-Up Evaluation of Site Recruitment in Two Scale-Up Studies
ERIC Educational Resources Information Center
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica
2015-01-01
Randomized experiments are commonly used to evaluate if particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, typically the participants are not randomly selected from a well-defined population and therefore the results do not readily generalize. Three…
Wright's Shifting Balance Theory and the Diversification of Aposematic Signals
Chouteau, Mathieu; Angers, Bernard
2012-01-01
Despite accumulating evidence for selection within natural systems, the importance of random genetic drift opposing Wright's and Fisher's views of evolution continues to be a subject of controversy. The geographical diversification of aposematic signals appears to be a suitable system to assess the factors involved in the process of adaptation, since both theories were independently proposed to explain this phenomenon. In the present study, the effects of drift and selection were assessed from population genetics and predation experiments on poison-dart frogs, Ranitomeya imitator, of Northern Peru. We specifically focus on the transient zone between two distinct aposematic signals. In contrast to regions where high predation maintains a monomorphic aposematic signal, the transient zones are characterized by lowered selection and a high phenotypic diversity. As a result, the diversification of phenotypes may occur via genetic drift without a significant loss of fitness. These new phenotypes may then colonize alternative habitats if successfully recognized and avoided by predators. This study highlights the interplay between drift and selection as determinant processes in the adaptive diversification of aposematic signals. Results are consistent with the expectations of Wright's shifting balance theory and represent, to our knowledge, the first empirical demonstration of this highly contested theory in a natural system. PMID:22470509
Cai, Tianxi; Karlson, Elizabeth W.
2013-01-01
Objectives To test whether data extracted from full text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing, 31 predictors were extracted and three random forest algorithms trained using coded, narrative, and combined predictors. The receiver operating characteristic (ROC) curve was used to identify the optimal algorithm, and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95% CI 55%–58%). Using a combination of coded data and NLP, the random forest algorithm reached a PPV of 90% (95% CI 86%–93%) at a sensitivity of 87% (95% CI 83%–91%) in the training data. The PPV was 93% (95% CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC curve (p < 0.001). Conclusions Using NLP with text notes from electronic medical records improved the performance of the prediction algorithm significantly. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955
Correlated randomness and switching phenomena
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.
2010-08-01
One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with placing aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understand switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon, by discussing data on selected examples, including heart disease and Alzheimer disease.
Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.
Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide
2011-01-01
Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population of those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on library amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.
An Analysis of the Development of Weather Concepts
NASA Astrophysics Data System (ADS)
Mroz, Paul John
Weather information in all forms is poorly understood and often misinterpreted by the general public. Weather literacy is necessary for everyone if critical weather messages, designed to save lives and protect property, are to be effective. The purpose of this study was to seek content and causal evidence for a developmental concept of Weather Information Processing that was consistent with Piagetian Cognitive Stages of Development. Three ordinal Content Stages Of Weather Information Processing (phenomena, process and mechanism) and three ordinal Causal Explanation Stages Of Weather Information Processing (non-real, natural, and scientifically valid abstract ideas) were explored for their relationship with Piaget's Pre-Operational, Concrete and Formal Stages of Development. One hundred and fifty-five elementary and secondary school students from two school districts were administered a written Piagetian exam. Commonly available television weather programs were categorized, randomly assigned and viewed by 42 randomly selected students who were administered three Piagetian tasks. Students were clinically interviewed for the level of content information and causal explanations (reasoning). Results indicated that content information and causal reasoning of students to televised weather information is significantly related (p <.01) to age, and Piagetian Cognitive Stages of Development. Two Piagetian logic operations (seriation and correlation) were established as significantly different (p <.05) when related to age. These findings support a developmental concept of Weather Information Processing and have implications for teaching and presenting weather information to the public.
Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.
2015-01-01
Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.
Searching for patterns in remote sensing image databases using neural networks
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
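The category-at-a-time random selection described in this record can be sketched as follows. The land-use categories, field names, and per-category counts are hypothetical, and Python stands in for the report's original GIS-based software:

```python
import random
from collections import defaultdict

random.seed(2)

# Hypothetical potential sampling sites, each tagged with a spatial
# characteristic (here, a land-use category) as the stratifying variable.
sites = [{"id": i, "land_use": random.choice(["urban", "crop", "forest"])}
         for i in range(200)]

def stratified_selection(sites, n_per_category):
    """Randomly select a fixed number of sites from each category."""
    by_cat = defaultdict(list)
    for s in sites:
        by_cat[s["land_use"]].append(s)
    chosen = []
    for cat, members in sorted(by_cat.items()):
        k = min(n_per_category[cat], len(members))
        chosen.extend(random.sample(members, k))
    return chosen

network = stratified_selection(sites, {"urban": 10, "crop": 5, "forest": 15})
print(len(network))  # 30 sites in the stratified network
```

Selecting a quota from each stratum, rather than sampling the whole population at once, is what lets the resulting network support comparisons of water quality across values of the spatial characteristic.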
Cooperation and charity in spatial public goods game under different strategy update rules
NASA Astrophysics Data System (ADS)
Li, Yixiao; Jin, Xiaogang; Su, Xianchuang; Kong, Fansheng; Peng, Chengbin
2010-03-01
Human cooperation can be influenced by other human behaviors, and recent years have witnessed a flourishing of studies on the coevolution of cooperation and punishment, yet the common behavior of charity is seldom considered in game-theoretical models. In this article, we investigate the coevolution of altruistic cooperation and egalitarian charity in the spatial public goods game, by considering charity as the behavior of reducing inter-individual payoff differences. Our model is that, in each generation of the evolution, individuals play games first and accumulate payoff benefits, and then each egalitarian makes a charity donation by payoff transfer in its neighborhood. To study the individual-level evolutionary dynamics, we adopt different strategy update rules and investigate their effects on charity and cooperation. These rules can be classified into two global rules: the random selection rule, in which individuals randomly update strategies, and the threshold selection rule, in which only those with payoffs below a threshold update strategies. Simulation results show that random selection enhances the cooperation level, while threshold selection lowers the threshold of the multiplication factor needed to maintain cooperation. When charity is considered, it is incapable of promoting cooperation under random selection, whereas it promotes cooperation under threshold selection. Interestingly, the evolution of charity strongly depends on the dispersion of payoff acquisitions of the population, which agrees with previous results. Our work may shed light on understanding human egalitarianism.
[Errors in Peruvian medical journals references].
Huamaní, Charles; Pacheco-Romero, José
2009-01-01
References are fundamental in our studies; an adequate selection is as important as an adequate description. Objective: to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.
Recurrence plots of discrete-time Gaussian stochastic processes
NASA Astrophysics Data System (ADS)
Ramdani, Sofiane; Bouchara, Frédéric; Lagarde, Julien; Lesne, Annick
2016-09-01
We investigate the statistical properties of recurrence plots (RPs) of data generated by discrete-time stationary Gaussian random processes. We analytically derive the theoretical values of the probabilities of occurrence of recurrence points and consecutive recurrence points forming diagonals in the RP, with an embedding dimension equal to 1. These results allow us to obtain theoretical values of three measures: (i) the recurrence rate (REC), (ii) the percent determinism (DET), and (iii) an RP-based estimation of the ε-entropy κ(ε) in the sense of correlation entropy. We apply these results to two Gaussian processes, namely first-order autoregressive processes and fractional Gaussian noise. For these processes, we simulate a number of realizations and compare the RP-based estimations of the three selected measures to their theoretical values. These comparisons provide useful information on the quality of the estimations, such as the minimum required data length and the threshold radius used to construct the RP.
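For embedding dimension 1, the two counting measures estimated above (REC and DET) can be computed directly from a scalar series. The sketch below assumes a fixed threshold radius `eps` and a minimum diagonal length of 2, and for simplicity does not exclude the main diagonal:

```python
import numpy as np

def recurrence_measures(x, eps, lmin=2):
    """Compute the recurrence rate (REC) and percent determinism (DET)
    of a scalar time series for embedding dimension 1 and threshold eps."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Recurrence matrix: R[i, j] = 1 when |x_i - x_j| <= eps
    R = (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)
    rec = R.sum() / n**2

    # DET: fraction of recurrence points lying on diagonals of length >= lmin
    diag_points = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:  # sentinel flushes last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_points += run
                run = 0
    det = diag_points / R.sum() if R.sum() else 0.0
    return rec, det
```

Comparing such empirical estimates against the analytic values, as the paper does, indicates the data length and `eps` needed for reliable estimation.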
Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A
2017-09-15
Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification, and that also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of Evaporative Cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, the reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting, based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder.
Code available at http://insilico.utulsa.edu/software/privateEC . Contact: brett-mckinney@utulsa.edu. Supplementary data are available at Bioinformatics online.
Perceptual expertise and top-down expectation of musical notation engages the primary visual cortex.
Wong, Yetta Kwailing; Peng, Cynthia; Fratus, Kristyn N; Woodman, Geoffrey F; Gauthier, Isabel
2014-08-01
Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40-60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
Using Maximum Entropy to Find Patterns in Genomes
NASA Astrophysics Data System (ADS)
Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
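One way to realize a GC constraint in a maximum-entropy framework is to tilt each amino acid's synonymous-codon distribution by exp(λ·GC) and tune λ so the expected GC content matches the target. The sketch below is our own illustration of that idea, not the authors' tool, and uses a deliberately truncated codon table:

```python
import math
import random

# Truncated synonymous-codon table for illustration (the full table covers 20 amino acids)
CODONS = {
    "G": ["GGU", "GGC", "GGA", "GGG"],
    "L": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
    "K": ["AAA", "AAG"],
}

def gc(codon):
    """Number of G/C bases in a codon."""
    return sum(b in "GC" for b in codon)

def codon_weights(lam):
    """Maximum-entropy codon distributions: P(c | aa) proportional to exp(lam * GC(c))."""
    dists = {}
    for aa, codons in CODONS.items():
        w = [math.exp(lam * gc(c)) for c in codons]
        z = sum(w)
        dists[aa] = [wi / z for wi in w]
    return dists

def expected_gc(protein, lam):
    """Expected GC fraction of a coding sequence for the given protein and lam."""
    dists = codon_weights(lam)
    total = sum(sum(p * gc(c) for p, c in zip(dists[aa], CODONS[aa]))
                for aa in protein)
    return total / (3 * len(protein))

def sample_sequence(protein, target_gc, seed=0):
    """Bisect on lam so the expected GC fraction matches target_gc,
    then draw one synonymous coding sequence from the tilted distributions."""
    rng = random.Random(seed)
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_gc(protein, mid) < target_gc:
            lo = mid
        else:
            hi = mid
    dists = codon_weights((lo + hi) / 2)
    return "".join(rng.choices(CODONS[aa], weights=dists[aa])[0] for aa in protein)
```

The same tilting scheme extends naturally to the richer constraints mentioned above (individual nucleotide usage, di-nucleotide frequencies) by adding further exponential terms.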
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
Topical Application of Honey on Surgical Wounds: A Randomized Clinical Trial.
Goharshenasan, Peiman; Amini, Shahideh; Atria, Ali; Abtahi, Hamidreza; Khorasani, Ghasemali
2016-01-01
The antimicrobial and anti-inflammatory activity of honey and its ability to accelerate wound healing make it an attractive option in surgical wound care. We performed a randomized clinical trial to compare the efficacy of honey dressing with conventional dressing with regard to the aesthetic outcome. Bilateral symmetric incisions in randomly selected plastic surgery patients were randomly covered postoperatively with conventional dressing and honey dressing for five days. The aesthetic outcome of the two sides was rated on a Visual Analog Scale by the surgeon and the patient and compared at months three and six after surgery. Seventy-two symmetrical incisions in 52 patients were evaluated during the study. The mean width of the scar after the third and sixth month was 3.64 ± 0.83 mm and 3.49 ± 0.87 mm on the side that received honey dressing and 5.43 ± 0.05 mm and 5.30 ± 1.35 mm in the control group. The Wilcoxon signed-rank test showed a significant difference between honey and conventional dressing outcomes at the third and sixth month (p < 0.001). The healing process of the surgical wound and its final aesthetic result could be improved by using honey dressing.
Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun
2015-09-01
A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
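The post-processing step described above — self-differencing consecutive samples and keeping a few least-significant bits — is simple enough to sketch in software. This is a toy illustration of the idea, assuming integer samples from a digitizer, not the experimental pipeline itself:

```python
import numpy as np

def bits_from_samples(samples, n_lsb=5):
    """Post-process digitized chaotic intensity samples into random bits
    by self-differencing consecutive samples and keeping the n_lsb
    least-significant bits of each difference."""
    s = np.asarray(samples, dtype=np.int64)
    # Differences wrap to an n_lsb-bit word (two's complement masking)
    diffs = (s[1:] - s[:-1]) & ((1 << n_lsb) - 1)
    # Unpack each difference into its n_lsb bits, least-significant first
    bits = ((diffs[:, None] >> np.arange(n_lsb)) & 1).astype(np.uint8)
    return bits.ravel()
```

With 5 bits kept per sample, as in the experiment, the output bit rate is simply 5 times the sampling rate, which is why tuning the sampling period tunes the bit rate continuously.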
Kin groups and trait groups: population structure and epidemic disease selection.
Fix, A G
1984-10-01
A Monte Carlo simulation based on the population structure of a small-scale human population, the Semai Senoi of Malaysia, has been developed to study the combined effects of group, kin, and individual selection. The population structure resembles D.S. Wilson's structured deme model in that local breeding populations (Semai settlements) are subdivided into trait groups (hamlets) that may be kin-structured and are not themselves demes. Additionally, settlement breeding populations are connected by two-dimensional stepping-stone migration approaching 30% per generation. Group and kin-structured group selection occur among hamlets the survivors of which then disperse to breed within the settlement population. Genetic drift is modeled by the process of hamlet formation; individual selection as a deterministic process, and stepping-stone migration as either random or kin-structured migrant groups. The mechanism for group selection is epidemics of infectious disease that can wipe out small hamlets particularly if most adults become sick and social life collapses. Genetic resistance to a disease is an individual attribute; however, hamlet groups with several resistant adults are less likely to disintegrate and experience high social mortality. A specific human gene, hemoglobin E, which confers resistance to malaria, is studied as an example of the process. The results of the simulations show that high genetic variance among hamlet groups may be generated by moderate degrees of kin-structuring. This strong microdifferentiation provides the potential for group selection. The effect of group selection in this case is rapid increase in gene frequencies among the total set of populations. In fact, group selection in concert with individual selection produced a faster rate of gene frequency increase among a set of 25 populations than the rate within a single unstructured population subject to deterministic individual selection. 
Such rapid evolution with plausible rates of extinction, individual selection, and migration and a population structure realistic in its general form, has implications for specific human polymorphisms such as hemoglobin variants and for the more general problem of the tempo of evolution as well.
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness subject to a predefined maximum risk tolerance and minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
Oligometastatic Disease in the Peritoneal Space with Gastrointestinal Cancer
Rau, Beate; Brandl, Andreas; Pascher, Andreas; Raue, Wieland; Sugarbaker, Paul
2017-01-01
Objectives: Treatment options for patients with gastrointestinal cancer and oligometastatic disease remain the domain of the medical oncologist. However, in selected cases, attempts to remove or destroy the tumor burden seem appropriate. Background Data: During the last decade, the treatment of localized and isolated tumor nodules, such as lung, liver or peritoneal metastases, has changed. Previously, these patients with metastatic disease received only palliative chemotherapy. Combined treatment approaches and new techniques demonstrate that additional surgery to destroy or remove the metastases seems to be of major benefit to patients. Methods: The recently published literature regarding peritoneal metastases and oligometastases in gastrointestinal cancer was analyzed. Results: The most important factor in the treatment of peritoneal metastases and in cytoreductive surgery is patient selection. Resection of peritoneal metastases should be considered. Hyperthermic intraperitoneal chemotherapy is feasible; however, further results of randomized trials are necessary. Several randomized trials are under way and will be available in 1–2 years. Systemic chemotherapy alone as an adequate management plan for all sites of metastatic disease is not compatible with a high standard of care. Formulating an optimal plan combining re-operative surgery with regional plus systemic chemotherapy is a necessary task of the multidisciplinary team. Conclusions: In oligometastatic disease of gastrointestinal cancer origin, the selection process is the most important factor for survival. Further studies are needed to determine optimal treatment options. PMID:28612016
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
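The qualitative effect of non-differential random recall error — attenuation of the odds ratio toward the null — can be reproduced with a few lines of Monte Carlo simulation. The numbers below (true odds ratio, noise level, sample size) are arbitrary illustrations, not INTERPHONE parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
true_or = 2.0  # assumed true odds ratio for "heavy" phone use

# Simulate true exposure (log-normal call time) and case status
exposure = rng.lognormal(mean=0.0, sigma=1.0, size=n)
heavy = exposure > np.median(exposure)
p_case = 1 / (1 + np.exp(-(-2.0 + np.log(true_or) * heavy)))
case = rng.random(n) < p_case

def odds_ratio(exposed, case):
    """2x2-table odds ratio for a dichotomized exposure."""
    a = np.sum(exposed & case); b = np.sum(exposed & ~case)
    c = np.sum(~exposed & case); d = np.sum(~exposed & ~case)
    return (a * d) / (b * c)

# Non-differential random recall error: reported use = true use times noise
reported = exposure * rng.lognormal(mean=0.0, sigma=0.8, size=n)
heavy_reported = reported > np.median(reported)

or_true = odds_ratio(heavy, case)
or_noisy = odds_ratio(heavy_reported, case)  # attenuated toward 1
```

Because the noise is independent of case status, some truly heavy users are misclassified as light and vice versa, which drags the estimated odds ratio below the true value, matching the underestimation the paper reports for plausible random recall errors.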
Selection of antigenically advanced variants of seasonal influenza viruses
Ozawa, Makoto; Taft, Andrew S.; Das, Subash C.; Hanson, Anthony P.; Song, Jiasheng; Imai, Masaki; Wilker, Peter R.; Watanabe, Tokiko; Watanabe, Shinji; Ito, Mutsumi; Iwatsuki-Horimoto, Kiyoko; Russell, Colin A.; James, Sarah L.; Skepner, Eugene; Maher, Eileen A.; Neumann, Gabriele; Kelso, Anne; McCauley, John; Wang, Dayan; Shu, Yuelong; Odagiri, Takato; Tashiro, Masato; Xu, Xiyan; Wentworth, David E.; Katz, Jacqueline M.; Cox, Nancy J.; Smith, Derek J.; Kawaoka, Yoshihiro
2016-01-01
Influenza viruses mutate frequently, necessitating constant updates of vaccine viruses. To establish experimental approaches that may complement the current vaccine strain selection process, we selected antigenic variants from human H1N1 and H3N2 influenza virus libraries possessing random mutations in the globular head of the haemagglutinin protein (which includes the antigenic sites) by incubating them with human and/or ferret convalescent sera to human H1N1 and H3N2 viruses. Further, we selected antigenic escape variants from human viruses treated with convalescent sera and from mice that had been previously immunized against human influenza viruses. Our pilot studies with past influenza viruses identified escape mutants that were antigenically similar to variants that emerged in nature, establishing the feasibility of our approach. Our studies with contemporary human influenza viruses identified escape mutants before they caused an epidemic in 2014–2015. This approach may aid in the prediction of potential antigenic escape variants and the selection of future vaccine candidates before they become widespread in nature. PMID:27572841
Seed size selection by olive baboons.
Kunz, Britta Kerstin; Linsenmair, Karl Eduard
2008-10-01
Seed size is an important plant fitness trait that can influence several steps between fruiting and the establishment of a plant's offspring. Seed size varies considerably within many plant species, yet the relevance of the trait for intra-specific fruit choice by primates has received little attention. Primates may select certain seed sizes within a species for a number of reasons, e.g. to decrease indigestible seed load or increase pulp intake per fruit. Olive baboons (Papio anubis, Cercopithecidae) are known to select seed size in unripe and mature pods of Parkia biglobosa (Mimosaceae) differentially, so that pods with small seeds, and an intermediate seed number, contribute most to dispersal by baboons. We tested whether olive baboons likewise select for smaller ripe seeds within each of nine additional fruit species whose fruit pulp baboons commonly consume, and for larger seeds in one species in which baboons feed on the seeds. Species differed in fruit type and seed number per fruit. For five of these species, baboons dispersed seeds that were significantly smaller than seeds extracted manually from randomly collected fresh fruits. In contrast, for three species, baboons swallowed seeds that were significantly longer and/or wider than seeds from fresh fruits. In two species, sizes of ingested seeds and seeds from fresh fruits did not differ significantly. Baboons frequently spat out seeds of Drypetes floribunda (Euphorbiaceae) but not those of other plant species having seeds of equal size. Oral processing of D. floribunda seeds depended on seed size: seeds that were spat out were significantly larger and swallowed seeds smaller, than seeds from randomly collected fresh fruits. We argue that seed size selection in baboons is influenced, among other traits, by the amount of pulp rewarded per fruit relative to seed load, which is likely to vary with fruit and seed shape.
NASA Astrophysics Data System (ADS)
Wu, Zhi-Xi; Rong, Zhihai; Yang, Han-Xin
2015-01-01
Recent empirical studies suggest that heavy-tailed distributions of human activities are universal in real social dynamics [L. Muchnik, S. Pei, L. C. Parra, S. D. S. Reis, J. S. Andrade Jr., S. Havlin, and H. A. Makse, Sci. Rep. 3, 1783 (2013), 10.1038/srep01783]. On the other hand, community structure is ubiquitous in biological and social networks [M. E. J. Newman, Nat. Phys. 8, 25 (2012), 10.1038/nphys2162]. Motivated by these facts, we here consider the evolutionary prisoner's dilemma game taking place on top of a real social network to investigate how the community structure and the heterogeneity in activity of individuals affect the evolution of cooperation. In particular, we account for a variation of the birth-death process (which can also be regarded as a proportional imitation rule from a social point of view) for the strategy updating under both weak and strong selection (meaning the payoffs harvested from games contribute either slightly or heavily to the individuals' performance). By implementing comparative studies, where the players are selected either randomly or in terms of their actual activities to play games with their immediate neighbors, we figure out that heterogeneous activity benefits the emergence of collective cooperation in a harsh environment (the action for cooperation is costly) under strong selection, whereas it impairs the formation of altruism under weak selection. Moreover, we find that the abundance of communities in the social network can evidently foster the formation of cooperation under strong selection, in contrast to the games evolving on randomized counterparts. Our results are therefore helpful for us to better understand the evolution of cooperation in real social systems.
Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method
NASA Astrophysics Data System (ADS)
Shamsoddini, A.; Aboodi, M. R.; Karami, J.
2017-09-01
Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that artificial neural networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
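The two-stage pipeline described — Random Forest importances for attribute selection, then a multilayer perceptron on the selected attributes — can be sketched with scikit-learn on synthetic data. The feature matrix and target here are stand-ins, not the study's Tehran measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for meteorological/traffic attributes (10 candidate features,
# of which only columns 0 and 3 actually drive the hypothetical pollutant level)
X = rng.normal(size=(500, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=500)

# Stage 1: rank attributes by Random Forest importance and keep the top k
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
k = 4
top = np.argsort(forest.feature_importances_)[::-1][:k]
X_sel = X[:, top]

# Stage 2: feed the selected attributes to a multilayer perceptron
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
score = mlp.score(X_te, y_te)  # R^2 on held-out data
```

Discarding low-importance attributes before fitting the network is the design choice the study credits for its accuracy gains: the MLP trains on a smaller, more informative input space.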
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts often come at the expense of the data set selection and analysis used in algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation and test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition, the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2018-06-01
In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
NASA Astrophysics Data System (ADS)
Shobin, L. R.; Manivannan, S.
2014-10-01
Carbon nanotube (CNT) networks have been identified as a potential substitute for, and may surpass, conventional indium-doped tin oxide (ITO) in transparent conducting electrodes, thin-film transistors, solar cells, and chemical sensors. Among these applications, CNT-based gas sensors have gained particular interest because of the need for them in environmental monitoring, industrial control, and detection of gases in warfare or for averting security threats. The unique properties of CNT networks, such as high surface area, low density, high thermal conductivity and chemical sensitivity, make them a potential candidate for gas sensing applications. Commercial unsorted single-walled carbon nanotubes (SWCNT) were purified by thermal oxidation and acid treatment and dispersed in the organic solvent N-methyl pyrrolidone by sonication, in the absence of polymer or surfactant. Optically transparent SWCNT networks were realized on glass substrates by depositing the dispersed SWCNT with a dynamic spray coating process at 200 °C. The SWCNT random network was characterized by scanning electron microscopy and UV-vis-NIR spectroscopy. The gas sensing property of the transparent film towards ammonia vapor was studied at room temperature by measuring the resistance change with respect to concentration in the range 0-1000 ppm. The sensor response increased logarithmically over the concentration range 0 to 1000 ppm, with a detection limit of 0.007 ppm. The random networks are able to detect ammonia vapor selectively because of the highly electron-donating nature of the ammonia molecule toward the SWCNT. The sensor is reversible and selective to ammonia vapor, with a response time of 70 seconds and a recovery time of 423 seconds at 62.5 ppm, with 90% optical transparency at 550 nm.
The Effects of Social Capital Levels in Elementary Schools on Organizational Information Sharing
ERIC Educational Resources Information Center
Ekinci, Abdurrahman
2012-01-01
This study aims to assess the effects of social capital levels at elementary schools on organizational information sharing as reported by teachers. Participants were 267 teachers selected randomly from 16 elementary schools; the schools themselves were randomly selected from among the 42 elementary schools located in the city center of Batman. The data were analyzed by…
ERIC Educational Resources Information Center
Rafferty, Karen; Watson, Patrice; Lappe, Joan M.
2011-01-01
Objective: To assess the impact of calcium-fortified food and dairy food on selected nutrient intakes in the diets of adolescent girls. Design: Randomized controlled trial, secondary analysis. Setting and Participants: Adolescent girls (n = 149) from a midwestern metropolitan area participated in randomized controlled trials of bone physiology…
ERIC Educational Resources Information Center
Thomas, Henry B.; Kaplan, E. Joseph
A national survey was conducted of randomly selected chief student personnel officers as listed in the 1979 "Education Directory of Colleges and Universities." The survey addressed specific institutional demographics, policy-making authority, reporting structure, and areas of responsibility of the administrators. Over 93 percent of the respondents…
Nonmanufacturing Businesses. U.S. Metric Study Interim Report.
ERIC Educational Resources Information Center
Cornog, June R.; Bunten, Elaine D.
This fifth interim report stems from the U.S. Metric Study of the feasibility of a United States changeover to a metric system. A primary stratified sample of 2,828 nonmanufacturing firms was randomly selected from 28,184 businesses taken from Social Security files, and a secondary sample of 2,258 firms was randomly selected for replacement…
ERIC Educational Resources Information Center
Juhasz, Stephen; And Others
Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected; the method of randomization is described. The samples were drawn from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed, whose purpose was to find uniformity and…
Molecular selection in a unified evolutionary sequence
NASA Technical Reports Server (NTRS)
Fox, S. W.
1986-01-01
With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
ERIC Educational Resources Information Center
Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin
2017-01-01
The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita zone, Southern Ethiopia. The research method employed was a survey study, and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…
ERIC Educational Resources Information Center
Martinez, John; Fraker, Thomas; Manno, Michelle; Baird, Peter; Mamun, Arif; O'Day, Bonnie; Rangarajan, Anu; Wittenburg, David
2010-01-01
This report focuses on the seven original Youth Transition Demonstration (YTD) projects selected for funding in 2003. Three of the original seven projects were selected for a national random assignment evaluation in 2005; however, this report only focuses on program operations prior to joining the random assignment evaluation for the three…
Tušek-Bunc, Ksenija; Petek, Davorina
2018-04-10
Family medicine plays an important role in the quality of care (QoC) of coronary heart disease (CHD) patients. This study's aim was to determine the quality of secondary cardiovascular disease prevention in the everyday practice of family physicians. The study was observational and cross-sectional, set in 36 randomly selected family medicine practices stratified by size and location in Slovenia. CHD patients were randomly selected from a patient register available in the family medicine practices. The instrument for assessment of quality included a form for collecting data from medical records, a general practice assessment questionnaire and a patient questionnaire. QoC was defined by two composite variables, namely risk factor registration and CHD patient process of care, as the two care outcomes. In the multivariate analysis, we performed multilevel regression analysis to identify the associations between QoC and the patient and practice characteristics. The final sample included 423 CHD patients from 36 family medicine practices. Risk factor registration was associated with the practice organisation score (P = 0.004), practice size (P = 0.042), presence of comorbid atherosclerotic diseases (P = 0.043) and a lower age of CHD patients (P = 0.001). CHD patient process of care was associated with the practice organisation score (P = 0.045) and a lower age of CHD patients (P = 0.035). The most important factors affecting the quality of CHD patient care were linked to the organisational characteristics of the family medicine practices.
Tillman, Fred; Anning, David W.; Heilman, Julian A.; Buto, Susan G.; Miller, Matthew P.
2018-01-01
Elevated concentrations of dissolved solids (salinity), including calcium, sodium, sulfate, and chloride, among others, in the Colorado River cause substantial problems for its water users. Previous efforts to reduce dissolved solids in upper Colorado River basin (UCRB) streams often focused on reducing suspended-sediment transport to streams, but few studies have investigated the relationship between suspended sediment and salinity, or evaluated which watershed characteristics might be associated with this relationship. Are there catchment properties that may help in identifying areas where control of suspended sediment will also reduce salinity transport to streams? A random forests classification analysis was performed on topographic, climate, land cover, geology, rock chemistry, soil, and hydrologic information in 163 UCRB catchments. Two random forests models were developed in this study: one for exploring stream and catchment characteristics associated with stream sites where dissolved solids increase with increasing suspended-sediment concentration, and the other for predicting where such sites are located in unmonitored reaches. Variable-importance results from the exploratory random forests model indicate that no simple source, geochemical process, or transport mechanism can easily explain the relationship between dissolved-solids and suspended-sediment concentrations at UCRB monitoring sites. Among the most important watershed characteristics in both models were measures of soil hydraulic conductivity, soil erodibility, minimum catchment elevation, catchment area, and the silt component of soil in the catchment. Predictions at key locations in the basin were combined with observations from selected monitoring sites and presented in map form to give a complete picture of where catchment sediment-control practices would also benefit control of dissolved solids in streams.
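The random forests classification step can be sketched with a deliberately simplified stand-in: bagged one-split "stump" trees on invented data, not the authors' model, predictors, or importance measure. It shows how bootstrapping plus random feature subsets yield a variable-importance ranking.

```python
import random

def best_stump(X, y, feats):
    """Best single-feature threshold split among candidate features;
    returns (accuracy, feature). Split orientation is folded in
    via max(acc, 1 - acc)."""
    best = (0.0, feats[0])
    for f in feats:
        for t in sorted({row[f] for row in X}):
            acc = sum((row[f] > t) == bool(yi)
                      for row, yi in zip(X, y)) / len(y)
            acc = max(acc, 1.0 - acc)
            if acc > best[0]:
                best = (acc, f)
    return best

def forest_importance(X, y, n_trees=200, seed=1):
    """Bootstrap resampling + random feature subsets (the two
    randomizations of a random forest), with stump trees for brevity;
    a feature's importance is its accumulated accuracy gain over chance."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    gain = [0.0] * d
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]           # bootstrap
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        feats = rng.sample(range(d), max(1, int(d ** 0.5)))  # mtry subset
        acc, f = best_stump(Xb, yb, feats)
        gain[f] += acc - 0.5
    return [g / n_trees for g in gain]

# Feature 0 determines the class; feature 1 is pure noise.
X = [(0.1, 0.9), (0.2, 0.1), (0.3, 0.8), (0.4, 0.3),
     (0.6, 0.7), (0.7, 0.2), (0.8, 0.6), (0.9, 0.4)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
imp = forest_importance(X, y)
```

The informative feature accumulates a much larger importance score than the noise feature, which is the kind of ranking the exploratory model above uses to flag soil and catchment characteristics.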
Dai, Huanping; Micheyl, Christophe
2010-01-01
A major concern when designing a psychophysical experiment is that participants may use another stimulus feature (“cue”) than that intended by the experimenter. One way to avoid this involves applying random variations to the corresponding feature across stimulus presentations, to make the “unwanted” cue unreliable. An important question facing experimenters who use this randomization (“roving”) technique is: How large should the randomization range be to ensure that participants cannot achieve a certain proportion correct (PC) by using the unwanted cue, while at the same time avoiding unnecessary interference of the randomization with task performance? Previous publications have provided formulas for the selection of adequate randomization ranges in yes-no and multiple-alternative, forced-choice tasks. In this article, we provide figures and tables, which can be used to select randomization ranges that are better suited to experiments involving a same-different, dual-pair, or oddity task. PMID:20139466
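The trade-off the abstract describes, a rove wide enough to defeat the unwanted cue without needlessly disrupting the task, can be explored with a small Monte Carlo sketch. The uniform rove, the 2AFC decision rule, and all numbers here are illustrative assumptions, not the paper's task-specific formulas.

```python
import random

def pc_from_cue(delta, rove_range, trials=100_000, seed=0):
    """Monte Carlo proportion correct for an observer who decides a 2AFC
    trial purely from the unwanted cue: the cue is a uniform rove on
    [0, rove_range], plus `delta` in the target interval."""
    rng = random.Random(seed)
    hits = sum(rng.uniform(0, rove_range) + delta > rng.uniform(0, rove_range)
               for _ in range(trials))
    return hits / trials
```

For a uniform rove with delta <= R this converges to PC = 1 - (1 - delta/R)^2 / 2 (about 0.72 for delta = 1, R = 4), so widening the rove range drives cue-based performance toward chance, at the price of possible interference with the intended task.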
Evaluation of some random effects methodology applicable to bird ringing data
Burnham, K.P.; White, Gary C.
2002-01-01
Existing models for ring recovery and recapture data analysis treat temporal variations in annual survival probability (S) as fixed effects. Often there is no explainable structure to the temporal variation in S1, ..., Sk; random effects can then be a useful model: Si = E(S) + εi. Here, the temporal variation in survival probability is treated as random with E(εi²) = σ². This random effects model can now be fit in program MARK. Resultant inferences include point and interval estimation for the process variation, σ², and estimation of E(S) and var(Ê(S)), where the latter includes a component for σ² as well as the traditional component for var(Ŝ | S). Furthermore, the random effects model leads to shrinkage estimates, S̃i, as improved (in mean square error) estimators of Si compared to the MLE, Ŝi, from the unrestricted time-effects model. Appropriate confidence intervals based on the S̃i are also provided. In addition, AIC has been generalized to random effects models. This paper presents results of a Monte Carlo evaluation of inference performance under the simple random effects model. Examined by simulation, under the simple one-group Cormack-Jolly-Seber (CJS) model, are issues such as bias of σ̂², confidence interval coverage on σ², coverage and mean square error comparisons for inference about Si based on shrinkage versus maximum likelihood estimators, and performance of AIC model selection over three models: Si ≡ S (no effects), Si = E(S) + εi (random effects), and S1, ..., Sk (fixed effects). For the cases simulated, the random effects methods performed well and were uniformly better than fixed effects MLE for the Si.
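A minimal method-of-moments version of the random-effects fit, assuming known and equal sampling variances (program MARK's actual estimator is more sophisticated), shows how the process variation is separated from sampling noise and how shrinkage pulls each yearly MLE toward the overall mean. All numbers are invented.

```python
def shrink(s_hat, v):
    """Moment-based random-effects fit: estimate E(S) and the process
    variation sigma^2 from annual survival MLEs s_hat with sampling
    variances v, then shrink each MLE toward the mean."""
    k = len(s_hat)
    mu = sum(s_hat) / k
    total_var = sum((s - mu) ** 2 for s in s_hat) / (k - 1)
    sigma2 = max(0.0, total_var - sum(v) / k)   # excess over sampling noise
    shrunk = [mu + sigma2 / (sigma2 + vi) * (s - mu)
              for s, vi in zip(s_hat, v)]
    return mu, sigma2, shrunk

s_hat = [0.55, 0.65, 0.60, 0.70, 0.50]   # yearly MLEs
v = [0.004] * 5                          # their sampling variances
mu, sigma2, shrunk = shrink(s_hat, v)
```

The shrinkage factor sigma2 / (sigma2 + v_i) moves noisy yearly estimates toward the mean; the more the total spread exceeds the sampling variance, the less they are shrunk.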
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also show that it is far less time-consuming than other methods such as Monte Carlo simulations.
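As a hedged illustration of the finite-difference idea, here is an explicit scheme for the much simpler one-locus neutral Wright-Fisher forward equation; the paper treats the two-locus case with selection and conditioning. The grid, time step, and time scaling below are illustrative assumptions.

```python
def wf_forward(phi, dx, dt, steps):
    """Explicit finite-difference solver for the 1D neutral Wright-Fisher
    forward Kolmogorov equation  dphi/dt = 0.5 * d^2[x(1-x) phi]/dx^2
    on the interior grid x = dx, ..., 1-dx, with absorbing boundaries
    (lost probability mass corresponds to fixation or loss of an allele)."""
    n = len(phi)
    x = [(i + 1) * dx for i in range(n)]
    for _ in range(steps):
        g = [x[i] * (1 - x[i]) * phi[i] for i in range(n)]
        phi = [phi[i] + 0.5 * dt / dx ** 2 *
               ((g[i + 1] if i + 1 < n else 0.0)
                - 2 * g[i]
                + (g[i - 1] if i > 0 else 0.0))
               for i in range(n)]
    return phi

# Probability mass initially concentrated at allele frequency 1/2.
dx, dt = 0.05, 0.001
phi0 = [0.0] * 19
phi0[9] = 1.0 / dx            # unit mass, expressed as a density
phi = wf_forward(phi0, dx, dt, steps=100)
```

The density stays nonnegative and symmetric about x = 1/2, and total mass can only decrease through the absorbing boundaries, mimicking the conditioning on no allele having been lost.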
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-01-01
Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100
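The deliberate (non-random) first-stage choice of clusters can be sketched as a small combinatorial search for the subset of clusters whose pooled distribution of a known determinant best matches the population distribution. The counts, categories, and distance measure below are invented for illustration.

```python
import itertools

def pick_clusters(clusters, pop_props, k):
    """Deliberately choose the k clusters whose pooled distribution of a
    known determinant (e.g. literacy) is closest, in squared error, to
    the whole-population distribution, instead of drawing clusters at
    random."""
    cats = list(pop_props)
    def distance(combo):
        totals = {c: sum(cl.get(c, 0) for cl in combo) for c in cats}
        n = sum(totals.values())
        return sum((totals[c] / n - pop_props[c]) ** 2 for c in cats)
    return min(itertools.combinations(clusters, k), key=distance)

# Census counts of literate / not-literate residents per candidate cluster.
clusters = [{'lit': 90, 'not': 10}, {'lit': 50, 'not': 50},
            {'lit': 60, 'not': 40}, {'lit': 80, 'not': 20}]
pop_props = {'lit': 0.7, 'not': 0.3}    # city-wide distribution
chosen = pick_clusters(clusters, pop_props, k=2)
```

With randomly drawn clusters the pooled distribution can be far from the population's when k is small; the deliberate choice guarantees a close match by construction, which is the point of the first selection stage.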
Foong, Hui F; Hamid, Tengku A; Ibrahim, Rahimah; Haron, Sharifah A
2018-01-01
The link between psychosocial stress and cognitive function is complex, and previous studies have indicated that it may be mediated by processing speed. Therefore, the main aim of this study was to examine whether processing speed mediates the association between psychosocial stress and global cognition in older adults. Moreover, the moderating role of gender in this model is examined as well. The study included 2322 community-dwelling older adults in Malaysia who were randomly selected through a multistage proportional cluster random sampling technique. Global cognition construct was measured by the Mini-Mental State Examination and Montreal Cognitive Assessment; psychosocial stress construct was measured by perceived stress, depression, loneliness, and neuroticism; and processing speed was assessed by the Digit Symbol Substitution Test. Structural equation modelling was used to analyze the mediation and moderation tests. Processing speed was found to partially mediate the relationship between psychosocial stress and global cognition (β in the direct model = -0.15, P < 0.001; β in the full mediation model = -0.11, P < 0.001). Moreover, the relationship between psychosocial stress and global cognition was found to be significant in men only, whereas the association between processing speed and global cognition was significant in men and women. Psychosocial stress may increase the likelihood that older adults will experience poor processing capacity, which could reduce their higher level cognition. Results indicate that there is a need to develop processing capacity intervention programmes for psychologically distressed older adults to prevent them from suffering cognitive decline. © 2018 Japanese Psychogeriatric Society.
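A regression-based product-of-coefficients sketch, a simplification of the structural equation model actually used, shows the mediation logic: the indirect effect of stress on cognition through processing speed is the product a*b of the two path coefficients. The data below are noise-free and invented.

```python
def ols2(x1, x2, y):
    """OLS slopes for y ~ x1 + x2, via the centered normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((b - m2) ** 2 for b in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    s2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

def mediation(x, m, y):
    """Product-of-coefficients mediation: a from m ~ x, then (c', b)
    from y ~ x + m; indirect effect = a*b, direct effect = c'."""
    n = len(x)
    mx, mm = sum(x) / n, sum(m) / n
    a = (sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
         / sum((xi - mx) ** 2 for xi in x))
    c_prime, b = ols2(x, m, y)
    return {"a": a, "b": b, "indirect": a * b, "direct": c_prime}

# Toy paths: m = 2x + e (e orthogonal to x), y = x + 3m.
x = [0, 1, 2, 3, 4]
m = [1, 0, 4, 8, 7]
y = [xi + 3 * mi for xi, mi in zip(x, m)]
effects = mediation(x, m, y)
```

Partial mediation, as reported above, corresponds to both the indirect effect a*b and the direct effect c' being nonzero.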
Plant-parasite coevolution: bridging the gap between genetics and ecology.
Brown, James K M; Tellier, Aurélien
2011-01-01
We review current ideas about coevolution of plants and parasites, particularly processes that generate genetic diversity. Frequencies of host resistance and parasite virulence alleles that interact in gene-for-gene (GFG) relationships coevolve in the familiar boom-and-bust cycle, in which resistance is selected when virulence is rare, and virulence is selected when resistance is common. The cycle can result in stable polymorphism when diverse ecological and epidemiological factors cause negative direct frequency-dependent selection (ndFDS) on host resistance, parasite virulence, or both, such that the benefit of a trait to fitness declines as its frequency increases. Polymorphism can also be stabilized by overdominance, when heterozygous hosts have greater resistance than homozygotes to diverse pathogens. Genetic diversity can also persist in the form of statistical polymorphism, sustained by random processes acting on gene frequencies and population size. Stable polymorphism allows alleles to be long-lived and genetic variation to be detectable in natural populations. In agriculture, many of the factors promoting stability in host-parasite interactions have been lost, leading to arms races of host defenses and parasite effectors. Copyright © 2011 by Annual Reviews. All rights reserved.
A prediction of templates in the auditory cortex system
NASA Astrophysics Data System (ADS)
Ghanbeigi, Kimia
In this study, the variation of human auditory evoked mismatch field amplitudes in response to complex tones, as a function of the removal of single partials in the onset period, was investigated. It was determined that: (1) elimination of a single frequency in a sound stimulus plays a significant role in human brain sound recognition; (2) by comparing the mismatches of the brain response due to a single frequency elimination in the starting transient and the sustained part of the sound stimulus, the brain is found to be more sensitive to frequency elimination in the starting transient. The study involved 4 healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG. The spatial location in the auditory cortex was verified by comparison with MRI images. In the first set of stimuli, rare ('deviant') tones were randomly embedded in the string of repetitive ('standard') tones with five selected onset frequencies, with randomly varying inter-stimulus intervals. In the deviant tones, one of the frequency components was omitted, relative to the standard tones, during the onset period. The frequency of the test partial of the complex tone was intentionally selected to preclude its reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment or the brain processing. In the second set of stimuli, time-structured as above, rare ('deviant') tones, for which one of the five selected sustained frequency components was omitted in the sustained tone, were embedded in the string of repetitive ('standard') tones. In both measurements, the careful frequency selection precluded reinsertion by generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment or the brain processing. Results.
By comparing the MMN of the two data sets, the relative contribution to sound recognition of the omitted partial frequency components in the onset and sustained regions has been determined. Conclusion. The presence of significant mismatch negativity, due to neural activity of the auditory cortex, emphasizes that the brain recognizes the elimination of a single frequency among carefully chosen anharmonic frequencies. This mismatch was shown to be more significant if the single frequency elimination occurs in the onset period.
Maintenance of tactile short-term memory for locations is mediated by spatial attention.
Katus, Tobias; Andersen, Søren K; Müller, Matthias M
2012-01-01
According to the attention-based rehearsal hypothesis, maintenance of spatial information is mediated by covert orienting towards memorized locations. In a somatosensory memory task, participants simultaneously received bilateral pairs of mechanical sample pulses. For each hand, sample stimuli were randomly assigned to one of three locations (fingers). A subsequent visual retro-cue determined whether the left or right hand sample was to be memorized. The retro-cue elicited lateralized activity reflecting the location of the relevant sample stimulus. Sensory processing during the retention period was probed by task-irrelevant pulses randomized to locations at the cued and uncued hand. The somatosensory N140 was enhanced for probes delivered to the cued hand, relative to uncued. Probes presented shortly after the retro-cue showed greatest attentional modulations. This suggests that transient contributions from retrospective selection overlapped with the sustained effect of attention-based rehearsal. In conclusion, focal attention shifts within tactile mnemonic content occurred after retro-cues and guided sensory processing during retention. Copyright © 2011 Elsevier B.V. All rights reserved.
The Supermarket Model with Bounded Queue Lengths in Equilibrium
NASA Astrophysics Data System (ADS)
Brightwell, Graham; Fairthorne, Marianne; Luczak, Malwina J.
2018-04-01
In the supermarket model, there are n queues, each with a single server. Customers arrive in a Poisson process with arrival rate λn, where λ = λ(n) ∈ (0, 1). Upon arrival, a customer selects d = d(n) servers uniformly at random, and joins the queue of a least-loaded server amongst those chosen. Service times are independent exponentially distributed random variables with mean 1. In this paper, we analyse the behaviour of the supermarket model in the regime where λ(n) = 1 − n^(−α) and d(n) = ⌊n^β⌋, where α and β are fixed numbers in (0, 1]. For suitable pairs (α, β), our results imply that, in equilibrium, with probability tending to 1 as n → ∞, the proportion of queues with length equal to k = ⌈α/β⌉ is at least 1 − 2n^(−α+(k−1)β), and there are no longer queues. We further show that the process is rapidly mixing when started in a good state, and give bounds on the speed of mixing for more general initial conditions.
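The model's dynamics are easy to simulate directly as a race of competing exponential clocks. The sketch below uses a small fixed d and moderate load rather than the paper's asymptotic regime d(n) = ⌊n^β⌋; parameters are illustrative.

```python
import random

def supermarket(n, lam, d, t_end, seed=0):
    """Simulate n single-server queues: Poisson arrivals at total rate
    lam*n, each arrival joining the shortest of d uniformly chosen
    queues; unit-rate exponential services at every busy server."""
    rng = random.Random(seed)
    q = [0] * n
    t = 0.0
    while t < t_end:
        busy = sum(1 for x in q if x > 0)
        total_rate = lam * n + busy
        t += rng.expovariate(total_rate)
        if rng.random() < lam * n / total_rate:   # next event: an arrival
            i = min(rng.sample(range(n), d), key=lambda j: q[j])
            q[i] += 1
        else:                                     # next event: a departure
            q[rng.choice([j for j in range(n) if q[j] > 0])] -= 1
    return q

queues = supermarket(n=50, lam=0.7, d=2, t_end=50.0)
```

Even with d = 2, the classic "power of two choices" effect keeps queues very short: the equilibrium fraction of queues of length at least k decays doubly exponentially in k, so long queues are essentially never seen.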
Mathematical models of cell factories: moving towards the core of industrial biotechnology.
Cvijovic, Marija; Bordel, Sergio; Nielsen, Jens
2011-09-01
Industrial biotechnology involves the utilization of cell factories for the production of fuels and chemicals. Traditionally, the development of highly productive microbial strains has relied on random mutagenesis and screening. The development of predictive mathematical models provides a new paradigm for the rational design of cell factories. Instead of selecting among a set of strains resulting from random mutagenesis, mathematical models allow researchers to predict in silico the outcomes of different genetic manipulations and to engineer new strains by performing gene deletions or additions that lead to a higher productivity of the desired chemicals. In this review we aim to summarize the main approaches to modelling biological processes and illustrate the particular applications that they have found in the field of industrial microbiology. © 2010 The Authors. Journal compilation © 2010 Society for Applied Microbiology and Blackwell Publishing Ltd.
Karlsson, Stefan L; Thomson, Nicholas; Mutreja, Ankur; Connor, Thomas; Sur, Dipika; Ali, Mohammad; Clemens, John; Dougan, Gordon; Holmgren, Jan; Lebens, Michael
2016-10-01
Genomic data generated from clinical Vibrio cholerae O1 isolates collected over a five-year period in an area of Kolkata, India with seasonal cholera outbreaks allowed a detailed genetic analysis of serotype switching that occurred from Ogawa to Inaba and back to Ogawa. The change from Ogawa to Inaba resulted from mutational disruption of the methyltransferase encoded by the wbeT gene. Re-emergence of the Ogawa serotype was found to result either from expansion of an already existing Ogawa clade or from reversion of the mutation in an Inaba clade. Our data suggest that such transitions are not random events but rather are driven by as yet unidentified selection mechanisms based on differences in the structure of the O1 antigen or in the serotype-determining wbeT gene.
[Comparative study of cone-beam CT and spiral CT in measuring the length of styloid process].
Song, Y S; Liu, L F
2018-06-19
Objective: To compare measurements of the length of the styloid process between high-resolution spiral CT and cone-beam CT (CBCT). Methods: Five specimens (including 5 pairs of styloid processes) were selected randomly from the Anatomy Laboratory of the Otolaryngology Department; all specimens retrospectively underwent both high-resolution spiral CT and cone-beam CT. From the original DICOM data, the styloid processes were displayed in a single plane by multiplanar reconstruction, and the length of the styloid processes of each specimen was then measured separately with the software NNT Viewer (for CBCT) or OsiriX (for high-resolution spiral CT). Results: The lengths of the styloid processes measured by CBCT and spiral CT were (26.8±5.5) mm and (27.1±5.4) mm respectively, with no statistically significant difference between the two groups. Conclusion: For measuring the length of the styloid process, CBCT has the same clinical value as high-resolution spiral CT.
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that the data are collected by simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of the biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how the data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
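A minimal version of the weighted maximum likelihood idea, for a normal model with inverse-selection-probability weights, makes the bias from ignoring the design visible. The distribution, weights, and numbers below are illustrative assumptions; the study's actual distributions and weighting scheme may differ.

```python
import math

def weighted_normal_mle(x, pi):
    """Weighted MLE for a normal distribution when observation i entered
    the sample with selection probability pi[i]: maximize the
    pseudo-log-likelihood sum_i (1/pi_i) * log f(x_i; mu, sigma),
    which has the closed form of a weighted mean and variance."""
    w = [1.0 / p for p in pi]
    wsum = sum(w)
    mu = sum(wi * xi for wi, xi in zip(w, x)) / wsum
    var = sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, x)) / wsum
    return mu, math.sqrt(var)

# Suppose highly contaminated lots were oversampled (pi = 0.5 vs 0.1):
# the unweighted mean then overstates the population level.
x = [1.0, 1.0, 1.0, 5.0, 5.0]
pi = [0.1, 0.1, 0.1, 0.5, 0.5]
mu_w, sd_w = weighted_normal_mle(x, pi)
```

Treating these data as a simple random sample gives a mean of 2.6, while down-weighting the oversampled high values pulls the estimate back toward the population level, which is exactly the bias the framework is designed to remove.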
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material-informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set), RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
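A generic RANSAC loop, here fitting a two-parameter line rather than a QSAR model, shows the consensus-set mechanics the abstract refers to: random minimal samples, inlier counting against a residual tolerance, and a refit on the best consensus set. The data and tolerance are invented for illustration.

```python
import random

def ransac_line(pts, n_iter=200, tol=0.5, seed=0):
    """Minimal RANSAC: repeatedly fit a line through 2 random points,
    keep the model with the largest consensus (inlier) set, then refit
    by least squares on those inliers. Returns (slope, intercept, inliers)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        if x1 == x2:
            continue                      # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in pts if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    n = len(best_inliers)
    mx = sum(x for x, _ in best_inliers) / n
    my = sum(y for _, y in best_inliers) / n
    a = (sum((x - mx) * (y - my) for x, y in best_inliers)
         / sum((x - mx) ** 2 for x, _ in best_inliers))
    return a, my - a * mx, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(2.0, 20.0), (5.0, -10.0)]
a, b, inliers = ransac_line(pts)
```

The outliers never enter the winning consensus set, so the refit recovers the underlying model exactly; in the QSAR setting, the same mechanism performs the outlier removal step before model development.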
Important issues in the justification of a control treatment in paediatric drug trials.
Kelly, Lauren E; Davies, Elin Haf; Saint-Raymond, Agnes; Tomasi, Paolo; Offringa, Martin
2016-10-01
The value of comparative effectiveness trials in informing clinical and policy decisions depends heavily on the choice of the control arm (comparator). Our objective is to identify challenges in comparator reasoning and to determine justification criteria for selecting a control arm in paediatric clinical trials. A literature search was completed to identify existing sources of guidance on comparator selection. Subsequently, we reviewed a randomly selected sample of the comparators selected for paediatric investigation plans (PIPs) adopted by the Paediatric Committee of the European Medicines Agency in 2013. We gathered descriptive information and evaluated the review process to identify challenges and compromises between regulators and sponsors with regard to the selection of the comparator. A tool to help investigators justify the selection of active controls and placebo arms was developed using the existing literature and empirical data. Justifying comparator selection was a challenge in 28% of PIPs. The following challenging paediatric issues in the decision-making process were identified: use of off-label medications as comparators, ethical and safe use of placebo, duration of placebo use, an undefined optimal dosing strategy, lack of age-appropriate safety and efficacy data, and drug dosing not supported by extrapolation of safety/efficacy evidence from other populations. In order to generate trials that will inform clinical decision-making and support marketing authorisations, researchers must systematically and transparently justify their selection of the comparator arm for their study. This report highlights key areas for justification in the choice of comparator in paediatric clinical trials. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Gonzalo Cogno, Soledad; Mato, Germán
2015-01-01
Orientation selectivity is ubiquitous in the primary visual cortex (V1) of mammals. In cats and monkeys, V1 displays spatially ordered maps of orientation preference. In contrast, in mice, squirrels, and rats, orientation selective neurons in V1 are not spatially organized, giving rise to a seemingly random pattern usually referred to as a salt-and-pepper layout. The fact that such different organizations can both sharpen orientation tuning raises the question of the structural role of the intracortical connections, specifically the influence of plasticity and the generation of functional connectivity. In this work, we analyze the effect of plasticity processes on orientation selectivity for both scenarios. We study a computational model of layer 2/3 and a reduced one-dimensional model of orientation selective neurons, both in the balanced state. We analyze two plasticity mechanisms. The first involves spike-timing dependent plasticity (STDP), while the second considers the reconnection of the interactions according to the preferred orientations of the neurons. We find that under certain conditions STDP can indeed improve selectivity, but in a somewhat unexpected way: it effectively decreases the modulated part of the intracortical connectivity relative to the non-modulated part. For the reconnection mechanism, we find that increasing functional connectivity leads, in fact, to a decrease in orientation selectivity if the network is in a stable balanced state. Both counterintuitive results are a consequence of the dynamics of the balanced state. We also find that selectivity can increase due to a reconnection process if the resulting connections give rise to an unstable balanced state. We compare these findings with recent experimental results. PMID:26347615
NASA Astrophysics Data System (ADS)
Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih
2017-04-01
Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra sequences may cause non-specific binding, potentially interfering with the binding of the randomized sequences. Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers that requires no repeated selection cycles. This study proposes and demonstrates a method to isolate aptamers for C-reactive protein (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced, and its binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, indicating that the primer-free aptamer has high specificity toward its target. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.
Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power than an analysis of a similarly structured RCT, and that the difference in statistical power increased as the strength of the treatment-selection process increased.
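The power gap described in this abstract can be illustrated with a small Monte Carlo sketch. This is a deliberate simplification, not the paper's design: a continuous outcome and a difference-in-means estimator stand in for time-to-event outcomes and hazard ratios, and the confounder strength `gamma` and effect size are arbitrary. Weighting by the true propensity score removes confounding but inflates the estimator's variance relative to a randomized design, which is what lowers power at the same sample size:

```python
import math
import random
import statistics

def ate_estimate(n, gamma, effect, rng, observational):
    """Simulate one dataset and return the weighted treatment-effect estimate.
    observational=True: treatment depends on a confounder X via a logistic
    model, and weighting uses the true propensity score (IPTW).
    observational=False: randomized 50/50 assignment (RCT-like)."""
    sum_ty = sum_tw = sum_cy = sum_cw = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        p = 1 / (1 + math.exp(-gamma * x)) if observational else 0.5
        t = rng.random() < p
        y = effect * (1 if t else 0) + x + rng.gauss(0.0, 1.0)  # X confounds Y
        if t:
            sum_ty += y / p          # inverse probability of treatment weight
            sum_tw += 1 / p
        else:
            sum_cy += y / (1 - p)
            sum_cw += 1 / (1 - p)
    return sum_ty / sum_tw - sum_cy / sum_cw

rng = random.Random(1)
iptw = [ate_estimate(500, 1.5, 0.3, rng, True) for _ in range(300)]
rct = [ate_estimate(500, 1.5, 0.3, rng, False) for _ in range(300)]
# Both estimators are roughly unbiased for the true effect (0.3), but the
# IPTW estimator is noisier, which translates into lower statistical power.
sd_iptw = statistics.stdev(iptw)
sd_rct = statistics.stdev(rct)
```

The spread of the IPTW estimates exceeds that of the randomized-design estimates, mirroring the paper's finding in a much simpler setting.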
DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J.
2015-01-01
Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school-based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends five years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum—one level of the Family Check-up model—on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n=500) was randomly assigned to the intervention and the other half (n=498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within School 1, but not within Schools 2 or 3. The effects on friend selection in School 1 translated into reductions in observed deviancy training five years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study, the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance five years later. PMID:26377235
Effects of task-irrelevant grouping on visual selection in partial report.
Lunau, Rasmus; Habekost, Thomas
2017-07-01
Perceptual grouping modulates performance in attention tasks such as partial report and change detection. Specifically, grouping of search items according to a task-relevant feature improves the efficiency of visual selection. However, the role of task-irrelevant feature grouping is not clearly understood. In the present study, we investigated whether grouping of targets by a task-irrelevant feature influences performance in a partial-report task. In this task, participants must report as many target letters as possible from a briefly presented circular display. The crucial manipulation concerned the color of the elements in these trials. In the sorted-color condition, the color of the display elements was arranged according to the selection criterion, and in the unsorted-color condition, colors were randomly assigned. The distractor cost was inferred by subtracting performance in partial-report trials from performance in a control condition that had no distractors in the display. Across five experiments, we manipulated trial order, selection criterion, and exposure duration, and found that attentional selectivity was improved in sorted-color trials when the exposure duration was 200 ms and the selection criterion was luminance. This effect was accompanied by impaired selectivity in unsorted-color trials. Overall, the results suggest that the benefit of task-irrelevant color grouping of targets is contingent on the processing locus of the selection criterion.
Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena
2017-10-03
The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of five independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection, attentional weights representing the spatial distribution of attention (spatial bias), and a top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); and (ii) whether the effects of such spatial priming on TVA parameters could be further enhanced by bi-parietal high-frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with rightward spatial priming resulted in increased attentional selectivity.
To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.
Connectomic markers of disease expression, genetic risk and resilience in bipolar disorder
Dima, D; Roberts, R E; Frangou, S
2016-01-01
Bipolar disorder (BD) is characterized by emotional dysregulation and cognitive deficits associated with abnormal connectivity between subcortical—primarily emotional processing regions—and prefrontal regulatory areas. Given the significant contribution of genetic factors to BD, studies in unaffected first-degree relatives can identify neural mechanisms of genetic risk but also resilience, thus paving the way for preventive interventions. Dynamic causal modeling (DCM) and random-effects Bayesian model selection were used to define and assess connectomic phenotypes linked to facial affect processing and working memory in a demographically matched sample of first-degree relatives carefully selected for resilience (n=25), euthymic patients with BD (n=41) and unrelated healthy controls (n=46). During facial affect processing, patients and relatives showed similarly increased frontolimbic connectivity; resilient relatives, however, evidenced additional adaptive hyperconnectivity within the ventral visual stream. During working memory processing, patients displayed widespread hypoconnectivity within the corresponding network. In contrast, working memory network connectivity in resilient relatives was comparable to that of controls. Our results indicate that frontolimbic dysfunction during affect processing could represent a marker of genetic risk to BD, and diffuse hypoconnectivity within the working memory network a marker of disease expression. The association of hyperconnectivity within the affect-processing network with resilience to BD suggests adaptive plasticity that allows for compensatory changes and encourages further investigation of this phenotype in genetic and early intervention studies. PMID:26731443
An in vivo library-versus-library selection of optimized protein-protein interactions.
Pelletier, J N; Arndt, K M; Plückthun, A; Michnick, S W
1999-07-01
We describe a rapid and efficient in vivo library-versus-library screening strategy for identifying optimally interacting pairs of heterodimerizing polypeptides. Two leucine zipper libraries, semi-randomized at the positions adjacent to the hydrophobic core, were genetically fused to either one of two designed fragments of the enzyme murine dihydrofolate reductase (mDHFR), and cotransformed into Escherichia coli. Interaction between the library polypeptides reconstituted enzymatic activity of mDHFR, allowing bacterial growth. Analysis of the resulting colonies revealed important biases in the zipper sequences relative to the original libraries, which are consistent with selection for stable, heterodimerizing pairs. Using more weakly associating mDHFR fragments, we increased the stringency of selection. We enriched the best-performing leucine zipper pairs by multiple passaging of the pooled, selected colonies in liquid culture, as the best pairs allowed for better bacterial propagation. This competitive growth allowed small differences among the pairs to be amplified, and different sequence positions were enriched at different rates. We applied these selection processes to a library-versus-library sample of 2.0 × 10^6 combinations and selected a novel leucine zipper pair that may be appropriate for use in further in vivo heterodimerization strategies.
Risk-based audit selection of dairy farms.
van Asseldonk, M A P M; Velthuis, A G J
2014-02-01
Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
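The efficiency-curve argument in this abstract can be sketched on synthetic data: rank farms by a milk-quality risk score and count what fraction of the population must be audited to capture a given share of the rejection-worthy farms, versus random selection. All numbers below (population size, lognormal milk-test distributions, logistic coefficients) are illustrative assumptions, not the paper's regression estimates:

```python
import math
import random

rng = random.Random(7)

# Hypothetical farm population: synthetic milk-test results drive the
# probability that an audit is rejected (coefficients are illustrative).
farms = []
for _ in range(5000):
    scc = rng.lognormvariate(5.0, 0.4)   # synthetic somatic cell count
    tbc = rng.lognormvariate(3.0, 0.6)   # synthetic total bacterial count
    p_reject = 1 / (1 + math.exp(-(-7.0 + 0.01 * scc + 0.05 * tbc)))
    farms.append((scc, tbc, rng.random() < p_reject))

n_rejected = sum(f[2] for f in farms)

def audited_fraction(farms, share, key):
    """Audit farms in the order given by `key` until `share` of all
    rejection-worthy farms have been found; return the fraction audited."""
    target = share * n_rejected
    found = 0
    for i, f in enumerate(sorted(farms, key=key, reverse=True), 1):
        found += f[2]
        if found >= target:
            return i / len(farms)
    return 1.0

# Risk-based ordering (score from milk tests) vs random ordering.
risk_based = audited_fraction(farms, 0.5, key=lambda f: 0.01 * f[0] + 0.05 * f[1])
at_random = audited_fraction(farms, 0.5, key=lambda f: rng.random())
```

As in the paper's efficiency curve, capturing half of the poor-standard farms requires auditing a much smaller fraction of the population under risk-based selection than under random selection.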
A management-oriented classification of pinyon-juniper woodlands of the Great Basin
Neil E. West; Robin J. Tausch; Paul T. Tueller
1998-01-01
A hierarchical framework for the classification of Great Basin pinyon-juniper woodlands was based on a systematic sample of 426 stands from a random selection of 66 of the 110 mountain ranges in the region. That is, mountain ranges were randomly selected, but stands were systematically located on mountain ranges. The National Hierarchical Framework of Ecological Units...
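The two-stage design described above (mountain ranges drawn at random, stands placed systematically within each selected range) can be sketched in a few lines. The number of candidate stand locations per range and the systematic step are hypothetical placeholders; only the 66-of-110 first stage comes from the abstract:

```python
import random

rng = random.Random(42)

# Stage 1: simple random sample of 66 of the 110 mountain ranges.
n_ranges, n_selected = 110, 66
selected_ranges = rng.sample(range(n_ranges), n_selected)

def systematic_sample(sites, step, start=0):
    """Stage 2: take every `step`-th candidate site along a transect."""
    return sites[start::step]

# Hypothetical: 40 candidate stand locations per range, every 6th sampled.
stands = {r: systematic_sample(list(range(40)), step=6) for r in selected_ranges}
total_stands = sum(len(s) for s in stands.values())
```

With these placeholder numbers each range yields 7 stands (462 in total); the study's actual stand count (426) reflects its real, range-specific site layouts.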
School Happiness and School Success: An Investigation across Multiple Grade Levels.
ERIC Educational Resources Information Center
Parish, Joycelyn Gay; Parish, Thomas S.; Batt, Steve
A total of 572 randomly selected sixth-grade students and 908 randomly selected ninth-grade students from a large metropolitan school district in the Midwest were asked to complete a series of survey questions designed to measure the extent to which they were happy while at school, as well as questions concerning the extent to which they treated…
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2013 CFR
2013-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
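The procedure excerpted here (divide the surface into square areas, then pick sampling positions by random number generation) can be sketched as follows. The surface dimensions, cell size, and number of samples are illustrative, not the regulatory values from 40 CFR 761.308:

```python
import random

def grid_sample_positions(width_cm, height_cm, cell_cm, n_samples, seed=None):
    """Divide a surface into square cells and select n distinct cells by
    random number generation, in the spirit of the grid-sampling scheme
    (cell size and sample count here are illustrative placeholders)."""
    rng = random.Random(seed)
    cols = width_cm // cell_cm
    rows = height_cm // cell_cm
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    return rng.sample(cells, n_samples)  # distinct cells, uniformly at random

# Example: a 100 cm x 100 cm surface, 10 cm cells, 3 wipe-sample positions.
positions = grid_sample_positions(100, 100, 10, 3, seed=0)
```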
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2011 CFR
2011-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2010 CFR
2010-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2014 CFR
2014-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2012 CFR
2012-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
Attitude and Motivation as Predictors of Academic Achievement of Students in Clothing and Textiles
ERIC Educational Resources Information Center
Uwameiye, B. E.; Osho, L. E.
2011-01-01
This study investigated attitude and motivation as predictors of academic achievement of students in clothing and textiles. Three colleges of education in Edo and Delta States were randomly selected for use in this study. From each school, 40 students were selected from Year III using a simple random sampling technique, yielding a total of 240 students. The…
A morphologic analysis of 'naked' islets of Langerhans in lobular atrophy of the pancreas.
Suda, K; Tsukahara, M; Miyake, T; Hirai, S
1994-08-01
The 'naked' islets of Langerhans (NIL) in randomly selected autopsy cases and in cases of chronic alcoholic pancreatitis, cystic fibrosis, and pancreatic carcinoma were studied histopathologically. The NIL were found in 55 of 164 randomly selected cases, with age-related frequency, in 21 of 30 cases of chronic alcoholic pancreatitis, in 2 of 2 cases of cystic fibrosis, and in 25 of 32 cases of pancreatic carcinoma. The NIL were frequently accompanied by ductal alterations: epithelial metaplasia and hyperplasia in randomly selected cases, protein plugs in chronic alcoholic pancreatitis, mucus plugs in cystic fibrosis, and obliterated ducts in pancreatic carcinoma. The NIL in randomly selected cases may have been formed by ductal alterations that caused stenosis of the lumen, those in chronic alcoholic pancreatitis and cystic fibrosis were the result of protein or mucus plugging, and those in pancreatic carcinoma were a result of neoplastic involvement of the distal pancreatic duct. Therefore, the common factor in the development of NIL is thought to be obstruction of the pancreatic duct system, and in cases of NIL that have a multilobular distribution and interinsular fibrosis, a diagnosis of chronic pancreatitis can usually be made.
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
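The localized random sampling scheme described here (pick a pixel uniformly at random, then measure nearby pixels with distance-dependent probability) can be sketched as a mask generator. The Gaussian falloff `exp(-d²/2σ²)` is an assumption for illustration; the abstract only specifies that inclusion probability depends on distance from the initially selected pixel:

```python
import math
import random

def localized_random_mask(n_rows, n_cols, n_centers, sigma, rng):
    """Build a sampling mask: each measurement set starts from a uniformly
    random center pixel, and nearby pixels are included with probability
    exp(-d^2 / (2 * sigma^2)) of their squared distance d^2 to the center
    (the Gaussian falloff is an illustrative assumption)."""
    mask = [[False] * n_cols for _ in range(n_rows)]
    for _ in range(n_centers):
        ci, cj = rng.randrange(n_rows), rng.randrange(n_cols)
        mask[ci][cj] = True                      # the center is always sampled
        for i in range(n_rows):
            for j in range(n_cols):
                d2 = (i - ci) ** 2 + (j - cj) ** 2
                if d2 and rng.random() < math.exp(-d2 / (2 * sigma ** 2)):
                    mask[i][j] = True
    return mask

rng = random.Random(3)
mask = localized_random_mask(32, 32, 5, 2.0, rng)
coverage = sum(v for row in mask for v in row) / 32 ** 2
```

The resulting mask clusters measurements around random centers, loosely analogous to localized receptive fields, rather than spreading them uniformly over the image.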
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
NASA Astrophysics Data System (ADS)
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have that N_α(N_β(t)) is equal in distribution to Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore, we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
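The identity N_α(N_β(t)) =_d Σ_{j=1}^{N_β(t)} X_j can be checked by simulation: N_β(t) ~ Poisson(βt), N_α evaluated at an integer n is Poisson(αn), and a Poisson(αn) variable is exactly a sum of n iid Poisson(α) variables, so both sides have mean αβt. A minimal sketch (rates and t chosen arbitrarily):

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative algorithm for a Poisson(lam) sample
    (adequate for the small rates used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(11)
alpha, beta, t, reps = 2.0, 3.0, 1.0, 10000

# Left side: the outer Poisson process evaluated at the random time N_beta(t).
lhs = [poisson(alpha * poisson(beta * t, rng), rng) for _ in range(reps)]
# Right side: a Poisson(beta*t) number of iid Poisson(alpha) summands X_j.
rhs = [sum(poisson(alpha, rng) for _ in range(poisson(beta * t, rng)))
       for _ in range(reps)]

mean_lhs = sum(lhs) / reps   # both should be near alpha * beta * t = 6
mean_rhs = sum(rhs) / reps
```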
Genetic variability and evolutionary dynamics of viruses of the family Closteroviridae
Rubio, Luis; Guerri, José; Moreno, Pedro
2013-01-01
RNA viruses have a great potential for genetic variation, rapid evolution and adaptation. Characterization of the genetic variation of viral populations provides relevant information on the processes involved in virus evolution and epidemiology and it is crucial for designing reliable diagnostic tools and developing efficient and durable disease control strategies. Here we performed an updated analysis of sequences available in Genbank and reviewed present knowledge on the genetic variability and evolutionary processes of viruses of the family Closteroviridae. Several factors have shaped the genetic structure and diversity of closteroviruses. (I) A strong negative selection seems to be responsible for the high genetic stability in space and time for some viruses. (2) Long distance migration, probably by human transport of infected propagative plant material, have caused that genetically similar virus isolates are found in distant geographical regions. (3) Recombination between divergent sequence variants have generated new genotypes and plays an important role for the evolution of some viruses of the family Closteroviridae. (4) Interaction between virus strains or between different viruses in mixed infections may alter accumulation of certain strains. (5) Host change or virus transmission by insect vectors induced changes in the viral population structure due to positive selection of sequence variants with higher fitness for host-virus or vector-virus interaction (adaptation) or by genetic drift due to random selection of sequence variants during the population bottleneck associated to the transmission process. PMID:23805130
Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi
2016-01-01
The loss, fragmentation and degradation of habitat everywhere on Earth prompts increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movement (when θ approaches infinity, RSP is equivalent to the least cost path) and those assuming a random walk (when θ → 0, RSP reduces to current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration.
Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
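The θ trade-off at the heart of the randomized shortest path idea can be illustrated at the path level on a toy graph: weight each complete path by a Boltzmann factor exp(-θ·cost), so θ → 0 gives a uniform choice over paths (random-walk-like exploration) and large θ concentrates all weight on the least-cost path. This path-enumeration view is a didactic simplification of the RSP algorithm (which works on transition matrices, not enumerated paths), and the graph costs are invented friction values:

```python
import math

def enumerate_paths(graph, src, dst, path=None):
    """All simple paths src -> dst in a dict-of-dicts weighted graph."""
    path = (path or []) + [src]
    if src == dst:
        yield path
    else:
        for nxt in graph[src]:
            if nxt not in path:
                yield from enumerate_paths(graph, nxt, dst, path)

def expected_cost(graph, src, dst, theta):
    """Expected path cost under Boltzmann path weights exp(-theta * cost):
    the exploration/exploitation trade-off controlled by theta."""
    costs = [sum(graph[a][b] for a, b in zip(p, p[1:]))
             for p in enumerate_paths(graph, src, dst)]
    weights = [math.exp(-theta * c) for c in costs]
    return sum(w * c for w, c in zip(weights, costs)) / sum(weights)

# Toy landscape: a cheap corridor (A-B-D, cost 2) and a costly detour
# (A-C-D, cost 5); costs are illustrative friction values.
g = {"A": {"B": 1.0, "C": 4.0}, "B": {"D": 1.0}, "C": {"D": 1.0}, "D": {}}
explore = expected_cost(g, "A", "D", 0.0)    # random limit: mean path cost, 3.5
exploit = expected_cost(g, "A", "D", 10.0)   # near the least-cost-path limit, ~2
```

Intermediate θ values interpolate between the two, which is the regime the paper finds best describes reindeer movement.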
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
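A toy simulation (not from the paper; the signal length, frequency and two-phase alphabet are invented) shows how a randomly varying detector phase breaks the ±ω sign ambiguity that fixed single-phase sampling cannot resolve:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.arange(n) / n
omega = 2 * np.pi * 3.0                       # true signed frequency: +3
phi = rng.choice([0.0, np.pi / 2], size=n)    # random detector phase per point
y = np.cos(omega * t + phi)                   # single-phase samples

def residual(f):
    # Least-squares amplitude fit for a candidate signed frequency f.
    # With a fixed phase (all phi equal) cos is even, so +f and -f fit
    # equally well; random phases break the tie.
    basis = np.cos(2 * np.pi * f * t + phi)
    a = (basis @ y) / (basis @ basis)
    return float(np.sum((y - a * basis) ** 2))

print(residual(+3.0), residual(-3.0))   # the true sign fits far better
```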
NASA Astrophysics Data System (ADS)
Ji, Sungchul
A new mathematical formula referred to as the Planckian distribution equation (PDE) has been found to fit long-tailed histograms generated in various fields of studies, ranging from atomic physics to single-molecule enzymology, cell biology, brain neurobiology, glottometrics, econophysics, and to cosmology. PDE can be derived from a Gaussian-like equation (GLE) by non-linearly transforming its variable, x, while keeping the y coordinate constant. Assuming that GLE represents a random distribution (due to its symmetry), it is possible to define a binary logarithm of the ratio between the areas under the curves of PDE and GLE as a measure of the non-randomness (or order) underlying the biophysicochemical processes generating long-tailed histograms that fit PDE. This new function has been named the Planckian information, IP, which (i) may be a new measure of order that can be applied widely to both natural and human sciences and (ii) can serve as the opposite of the Boltzmann-Gibbs entropy, S, which is a measure of disorder. The possible rationales for the universality of PDE may include (i) the universality of the wave-particle duality embedded in PDE, (ii) the selection of subsets of random processes (thereby breaking the symmetry of GLE) as the basic mechanism of generating order, organization, and function, and (iii) the quantity-quality complementarity as the connection between PDE and Peircean semiotics.
Super-resolution processing for multi-functional LPI waveforms
NASA Astrophysics Data System (ADS)
Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao
2014-05-01
Super-resolution (SR) is a radar processing technique closely related to pulse compression (the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 code or Barker code) or frequency modulated (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms which are either random in nature (such as random noise waveforms) or randomly modulated for multiple-function operation (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility for using them in actual sensing missions. SR also has the great advantage that the final output (as an estimate of ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.
Selection of stable scFv antibodies by phage display.
Brockmann, Eeva-Christine
2012-01-01
ScFv fragments are popular recombinant antibody formats but often suffer from limited stability. Phage display is a powerful tool in antibody engineering and applicable also for stability selection. ScFv variants with improved stability can be selected from large randomly mutated phage displayed libraries with a specific antigen after the unstable variants have been inactivated by heat or GdmCl. Irreversible scFv denaturation, which is a prerequisite for efficient selection, is achieved by combining denaturation with reduction of the intradomain disulfide bonds. Repeated selection cycles of increasing stringency result in enrichment of stabilized scFv fragments. Procedures for constructing a randomly mutated scFv library by error-prone PCR and phage display selection for enrichment of stable scFv antibodies from the library are described here.
Rahbar, Mohammad H.; Wyatt, Gwen; Sikorskii, Alla; Victorson, David; Ardjomand-Hessabi, Manouchehr
2011-01-01
Background: Multisite randomized clinical trials allow for increased research collaboration among investigators and expedite data collection efforts. As a result, government funding agencies typically look favorably upon this approach. As the field of complementary and alternative medicine (CAM) continues to evolve, so do increased calls for the use of more rigorous study design and trial methodologies, which can present challenges for investigators. Purpose: To describe the processes involved in the coordination and management of a multisite randomized clinical trial of a CAM intervention. Methods: Key aspects related to the coordination and management of a multisite CAM randomized clinical trial are presented, including organizational and site selection considerations, recruitment concerns and issues related to data collection and randomization to treatment groups. Management and monitoring of data, as well as quality assurance procedures are described. Finally, a real world perspective is shared from a recently conducted multisite randomized clinical trial of reflexology for women diagnosed with advanced breast cancer. Results: The use of multiple sites in the conduct of CAM-based randomized clinical trials can provide an efficient, collaborative and robust approach to study coordination and data collection that maximizes efficiency and ensures the quality of results. Conclusions: Multisite randomized clinical trial designs can offer the field of CAM research a more standardized and efficient approach to examine the effectiveness of novel therapies and treatments. Special attention must be given to intervention fidelity, consistent data collection and ensuring data quality. Assessment and reporting of quantitative indicators of data quality should be required. PMID:21664296
Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2010-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
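The smooth orthogonal decomposition variants are specific to the paper, but the proper-orthogonal-decomposition step can be sketched with a plain SVD of a snapshot matrix. The synthetic two-mode response and the 99% energy threshold below are illustrative choices, not the paper's test articles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: 200 samples of a 50-DOF response dominated
# by two spatial shapes with different modal energy.
x = np.linspace(0, 1, 50)
shapes = np.array([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
q = rng.standard_normal((200, 2)) * np.array([3.0, 1.0])   # modal amplitudes
snapshots = q @ shapes + 0.01 * rng.standard_normal((200, 50))

# POD = SVD of the (centered) snapshot matrix; singular values rank the
# candidate basis vectors by captured energy.
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(0), full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1   # smallest basis with 99% energy
print(k)   # -> 2
```

The rows of `Vt[:k]` would then serve as the reduced modal basis for the nonlinear simulation.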
An investigation into the probabilistic combination of quasi-static and random accelerations
NASA Technical Reports Server (NTRS)
Schock, R. W.; Tuell, L. P.
1984-01-01
The development of design load factors for aerospace and aircraft components and experiment support structures, which are subject to simultaneous vehicle dynamic vibration (quasi-static) and acoustically generated random vibration, requires the selection of a combination methodology. Typically, the procedure is to determine the quasi-static and the randomly generated responses separately, and arithmetically add or root-sum-square them to get combined accelerations. Since the combination of a probabilistic and a deterministic function yields a probabilistic function, a viable alternative approach is to determine the characteristics of the combined acceleration probability density function and select an appropriate percentile level for the combined acceleration. The following paper develops this mechanism and provides graphical data for selecting combined accelerations at the most popular percentile levels.
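A minimal sketch of the percentile idea, assuming the random part is zero-mean Gaussian with known RMS (the g levels are invented): the combined acceleration is then Gaussian with the quasi-static value as its mean, so a design percentile follows directly and can be compared with a common root-sum-square combination:

```python
from statistics import NormalDist

# Quasi-static level s (deterministic) plus zero-mean Gaussian random
# vibration of RMS sigma: the combined acceleration is N(s, sigma^2).
s, sigma = 5.0, 2.0                       # illustrative g levels
z95 = NormalDist().inv_cdf(0.95)
percentile_95 = s + z95 * sigma           # 95th-percentile combined level

# Common alternative: root-sum-square of s with a 3-sigma random peak.
rss_3sigma = (s**2 + (3 * sigma) ** 2) ** 0.5
print(percentile_95, rss_3sigma)
```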
NASA Astrophysics Data System (ADS)
Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud
2014-02-01
The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
Rumor Processes in Random Environment on ℕ and on Galton-Watson Trees
NASA Astrophysics Data System (ADS)
Bertacchi, Daniela; Zucca, Fabio
2013-11-01
The aim of this paper is to study rumor processes in random environment. In a rumor process a signal starts from the stations of a fixed vertex (the root) and travels on a graph from vertex to vertex. We consider two rumor processes. In the firework process each station, when reached by the signal, transmits it up to a random distance. In the reverse firework process, on the other hand, stations do not send any signal but they “listen” for it up to a random distance. The first random environment that we consider is the deterministic 1-dimensional tree with a random number of stations on each vertex; in this case the root is the origin of ℕ. We give conditions for the survival/extinction on almost every realization of the sequence of stations. Later on, we study the processes on Galton-Watson trees with random number of stations on each vertex. We show that if the probability of survival is positive, then there is survival on almost every realization of the infinite tree such that there is at least one station at the root. We characterize the survival of the process in some cases and we give sufficient conditions for survival/extinction.
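A simplified one-dimensional firework process can be simulated directly. This sketch assumes at most one station per vertex and i.i.d. transmission radii, a simplification of the paper's random station counts; all parameters are invented:

```python
import random

random.seed(2)

def firework(n, p_station, radius_dist):
    # Firework process on vertices 0..n-1: the root (vertex 0) always
    # holds a station; each other vertex holds one with probability
    # p_station. A station reached by the signal broadcasts it up to a
    # random distance to its right; the process dies at the first gap
    # the signal cannot cross.
    stations = [i for i in range(n) if i == 0 or random.random() < p_station]
    frontier = 0                      # rightmost vertex reached so far
    for i in stations:
        if i > frontier:              # the signal never reached station i
            return frontier
        frontier = max(frontier, i + radius_dist())
    return frontier

print(firework(10_000, 0.8, lambda: random.randint(1, 5)))
```

Survival corresponds to the frontier reaching arbitrarily far as n grows; extinction corresponds to the signal stopping at a finite gap.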
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kan, Jimmy J.; Gottwald, Matthias; Fullerton, Eric E.
We describe low-temperature characterization of magnetic tunnel junctions (MTJs) patterned by reactive ion etching for spin-transfer-torque magnetic random access memory. Magnetotransport measurements of typical MTJs show increasing tunneling magnetoresistance (TMR) and larger coercive fields as temperature is decreased down to 10 K. However, MTJs selected from the high-resistance population of an MTJ array exhibit stable intermediate magnetic states when measured at low temperature and show TMR roll-off below 100 K. These non-ideal low-temperature behaviors arise from edge damage during the etch process and can have negative impacts on thermal stability of the MTJs.
Exploring a potential energy surface by machine learning for characterizing atomic transport
NASA Astrophysics Data System (ADS)
Kanamori, Kenta; Toyoura, Kazuaki; Honda, Junya; Hattori, Kazuki; Seko, Atsuto; Karasuyama, Masayuki; Shitara, Kazuki; Shiga, Motoki; Kuwabara, Akihide; Takeuchi, Ichiro
2018-03-01
We propose a machine-learning method for evaluating the potential barrier governing atomic transport based on the preferential selection of dominant points for atomic transport. The proposed method generates numerous random samples of the entire potential energy surface (PES) from a probabilistic Gaussian process model of the PES, which enables defining the likelihood of the dominant points. The robustness and efficiency of the method are demonstrated on a dozen model cases for proton diffusion in oxides, in comparison with a conventional nudged elastic band method.
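The sampling step can be sketched with a basic noiseless Gaussian-process regression on a one-dimensional path. The kernel, length-scale and energy profile below are all invented; posterior samples of the PES give an empirical distribution over the barrier location, which is the kind of likelihood of dominant points the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel (length-scale ell is an invented choice).
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

# Invented energies observed at five points along a 1-D migration path.
x_obs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
e_obs = np.array([0.0, 0.4, 0.9, 0.5, 0.1])

xs = np.linspace(0, 1, 101)
K = rbf(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))
Ks = rbf(xs, x_obs)
mu = Ks @ np.linalg.solve(K, e_obs)                   # posterior mean
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)     # posterior covariance

# Draw candidate surfaces and record where each one puts the barrier.
samples = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(xs)), size=500)
barrier_pos = xs[np.argmax(samples, axis=1)]
print(np.median(barrier_pos))   # concentrates near the observed peak at 0.5
```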
Abis, Gabor S A; Stockmann, Hein B A C; van Egmond, Marjolein; Bonjer, Hendrik J; Vandenbroucke-Grauls, Christina M J E; Oosterling, Steven J
2013-12-01
Gastrointestinal surgery is associated with a high incidence of infectious complications. Selective decontamination of the digestive tract is an antimicrobial prophylaxis regimen that aims to eradicate gastrointestinal carriage of potentially pathogenic microorganisms and represents an adjunct to regular prophylaxis in surgery. Relevant studies were identified using bibliographic searches of MEDLINE, EMBASE, and the Cochrane database (period from 1970 to November 1, 2012). Only studies investigating selective decontamination of the digestive tract in gastrointestinal surgery were included. Two randomized clinical trials and one retrospective case-control trial showed significant benefit in terms of infectious complications and anastomotic leakage in colorectal surgery. Two randomized controlled trials in esophageal surgery and two randomized clinical trials in gastric surgery reported lower levels of infectious complications. Selective decontamination of the digestive tract reduces infections following esophageal, gastric, and colorectal surgeries and also appears to have beneficial effects on anastomotic leakage in colorectal surgery. We believe these results provide the basis for a large multicenter prospective study to investigate the role of selective decontamination of the digestive tract in colorectal surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that are participants in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations on a subset of randomly-selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
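A subset-seeded variant of Brandes' algorithm makes the O(|S||E|) estimator concrete. This is a from-scratch sketch for unweighted graphs (not the report's code); per-source dependencies are counted per ordered pair and rescaled by |V|/|S|:

```python
import random
from collections import deque

def betweenness_from_seeds(adj, seeds):
    # Brandes' dependency accumulation restricted to a seed set S, then
    # rescaled by |V|/|S|; with all vertices as seeds it is exact.
    bc = {v: 0.0 for v in adj}
    for s in seeds:
        dist = {s: 0}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:                          # BFS shortest-path counting
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):         # back-propagate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    scale = len(adj) / max(len(seeds), 1)
    return {v: c * scale for v, c in bc.items()}

# Path graph 0-1-2-3-4: the middle vertex carries the most shortest paths.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
exact = betweenness_from_seeds(adj, list(adj))             # all seeds
approx = betweenness_from_seeds(adj, random.sample(list(adj), 3))
print(exact[2], exact[0])   # 8.0 0.0
```

Replacing `random.sample` with a smarter seed-selection rule is exactly the design space the report explores.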
Selective attention to a facial feature with and without facial context: an ERP-study.
Wijers, A A; Van Besouw, N J P; Mulder, G
2002-04-01
The present experiment addressed the question whether selectively attending to a facial feature (mouth shape) would benefit from the presence of a correct facial context. Subjects attended selectively to one of two possible mouth shapes belonging to photographs of a face with a happy or sad expression, respectively. These mouths were presented randomly either in isolation, embedded in the original photos, or in an exchanged facial context. The ERP effect of attending mouth shape was a lateral posterior negativity, anterior positivity with an onset latency of 160-200 ms; this effect was completely unaffected by the type of facial context. When the mouth shape and the facial context conflicted, this resulted in a medial parieto-occipital positivity with an onset latency of 180 ms, independent of the relevance of the mouth shape. Finally, there was a late (onset at approx. 400 ms) expression (happy vs. sad) effect, which was strongly lateralized to the right posterior hemisphere and was most prominent for attended stimuli in the correct facial context. For the isolated mouth stimuli, a similarly distributed expression effect was observed at an earlier latency range (180-240 ms). These data suggest the existence of separate, independent and neuroanatomically segregated processors engaged in the selective processing of facial features and the detection of contextual congruence and emotional expression of face stimuli. The data do not support that early selective attention processes benefit from top-down constraints provided by the correct facial context.
Statistical auditing and randomness test of lotto k/N-type games
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Rapallo, F.; Scalas, E.
2008-11-01
One of the most popular lottery games worldwide is the so-called “lotto k/N”. It considers N numbers 1,2,…,N from which k are drawn randomly, without replacement. A player selects k or more numbers and the first prize is shared amongst those players whose selected numbers match all of the k randomly drawn. Exact rules may vary in different countries. In this paper, mean values and covariances for the random variables representing the numbers drawn from this kind of game are presented, with the aim of using them to audit statistically the consistency of a given sample of historical results with theoretical values coming from a hypergeometric statistical model. The method can be adapted to test pseudorandom number generators.
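The hypergeometric model mentioned in the abstract is easy to evaluate directly. A sketch for the classic 6/49 game (rules of any specific lottery may differ):

```python
from math import comb

def match_pmf(N, k, m):
    # P(exactly m of the player's k picks appear among the k numbers
    # drawn without replacement from {1, ..., N}): hypergeometric.
    return comb(k, m) * comb(N - k, k - m) / comb(N, k)

N, k = 49, 6                                        # classic 6/49 game
p_jackpot = match_pmf(N, k, k)                      # 1 / C(49, 6)
mean_matches = sum(m * match_pmf(N, k, m) for m in range(k + 1))
print(p_jackpot, mean_matches)   # mean is k*k/N = 36/49
```

Comparing observed match frequencies in historical draws against this pmf is one way to audit a lottery for consistency with random drawing.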
Foldamer hypothesis for the growth and sequence differentiation of prebiotic polymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guseva, Elizaveta; Zuckermann, Ronald N.; Dill, Ken A.
It is not known how life originated. It is thought that prebiotic processes were able to synthesize short random polymers. However, then, how do short-chain molecules spontaneously grow longer? Also, how would random chains grow more informational and become autocatalytic (i.e., increasing their own concentrations)? We study the folding and binding of random sequences of hydrophobic (H) and polar (P) monomers in a computational model. We find that even short hydrophobic polar (HP) chains can collapse into relatively compact structures, exposing hydrophobic surfaces. In this way, they act as primitive versions of today’s protein catalysts, elongating other such HP polymers as ribosomes would now do. Such foldamer catalysts are shown to form an autocatalytic set, through which short chains grow into longer chains that have particular sequences. An attractive feature of this model is that it does not overconverge to a single solution; it gives ensembles that could further evolve under selection. This mechanism describes how specific sequences and conformations could contribute to the chemistry-to-biology (CTB) transition.
Potential of Using Mobile Phone Data to Assist in Mission Analysis and Area of Operations Planning
2015-08-01
tremendously beneficial, especially since a sizeable portion of the population are nomads, changing location based on season. A proper AO... yearly were selected. This data provided: a. User_id: Selected User's random ID b. Timestamp: 24 h format YYYY-MM-DD-HH:M0:00 (the second digits of the minutes and all the seconds...
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
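The random-projection core of such hashing schemes can be sketched as follows. Plain sign binarization stands in for the paper's Fisher-criterion row selection and Gaussian-mixture quantizer; dimensions and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def rp_hash(x, R):
    # Hash = sign pattern of a random projection of the feature vector.
    return (R @ x > 0).astype(np.uint8)

d, bits = 64, 16
R = rng.standard_normal((bits, d))            # user-specific projection matrix
x = rng.standard_normal(d)                    # enrolled face feature vector
x_noisy = x + 0.05 * rng.standard_normal(d)   # same user, slight variation

h1, h2 = rp_hash(x, R), rp_hash(x_noisy, R)
print(int(np.sum(h1 != h2)))   # typically a small Hamming distance
```

Verification then thresholds the Hamming distance between the stored and the probe hash; selecting rows of R discriminatively, as the paper does, tightens that margin.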
Topology-selective jamming of fully-connected, code-division random-access networks
NASA Technical Reports Server (NTRS)
Polydoros, Andreas; Cheng, Unjeng
1990-01-01
The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Katagiri, Hideki
2010-10-01
This paper proposes a portfolio selection problem that accounts for an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well-defined. Therefore, introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
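For reference, the Sharpe ratio used to reformulate the problem is simply the risk premium per unit of portfolio standard deviation. A deterministic sketch with invented numbers (the paper's fuzzy-random formulation is not reproduced):

```python
# Sharpe ratio = (expected portfolio return - risk-free rate) / stddev.
# All numbers are illustrative, not taken from the paper.
mu = [0.08, 0.12]                 # expected asset returns
cov = [[0.04, 0.01],              # return covariance matrix
       [0.01, 0.09]]
rf = 0.02                         # risk-free rate
w = [0.6, 0.4]                    # portfolio weights

mean = sum(wi * mi for wi, mi in zip(w, mu))
var = sum(w[i] * w[j] * cov[i][j] for i in range(2) for j in range(2))
sharpe = (mean - rf) / var ** 0.5
print(round(sharpe, 4))   # -> 0.4146
```

Maximizing this ratio over w is what turns the ill-defined random fuzzy problem into a tractable fuzzy program in the paper's approach.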
Damuth, John
2007-05-01
Across a wide array of animal species, mean population densities decline with species body mass such that the rate of energy use of local populations is approximately independent of body size. This "energetic equivalence" is particularly evident when ecological population densities are plotted across several or more orders of magnitude in body mass and is supported by a considerable body of evidence. Nevertheless, interpretation of the data has remained controversial, largely because of the difficulty of explaining the origin and maintenance of such a size-abundance relationship in terms of purely ecological processes. Here I describe results of a simulation model suggesting that an extremely simple mechanism operating over evolutionary time can explain the major features of the empirical data. The model specifies only the size scaling of metabolism and a process where randomly chosen species evolve to take resource energy from other species. This process of energy exchange among particular species is distinct from a random walk of species abundances and creates a situation in which species populations using relatively low amounts of energy at any body size have an elevated extinction risk. Selective extinction of such species rapidly drives size-abundance allometry in faunas toward approximate energetic equivalence and maintains it there.
Modelling Evolutionary Algorithms with Stochastic Differential Equations.
Heredia, Jorge Pérez
2017-11-20
There has been renewed interest in modelling the behaviour of evolutionary algorithms (EAs) by more traditional mathematical objects, such as ordinary differential equations or Markov chains. The advantage is that the analysis becomes greatly facilitated due to the existence of well established methods. However, this typically comes at the cost of disregarding information about the process. Here, we introduce the use of stochastic differential equations (SDEs) for the study of EAs. SDEs can produce simple analytical results for the dynamics of stochastic processes, unlike Markov chains which can produce rigorous but unwieldy expressions about the dynamics. On the other hand, unlike ordinary differential equations (ODEs), they do not discard information about the stochasticity of the process. We show that these are especially suitable for the analysis of fixed budget scenarios and present analogues of the additive and multiplicative drift theorems from runtime analysis. In addition, we derive a new more general multiplicative drift theorem that also covers non-elitist EAs. This theorem simultaneously allows for positive and negative results, providing information on the algorithm's progress even when the problem cannot be optimised efficiently. Finally, we provide results for some well-known heuristics namely Random Walk (RW), Random Local Search (RLS), the (1+1) EA, the Metropolis Algorithm (MA), and the Strong Selection Weak Mutation (SSWM) algorithm.
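A quick empirical check of the multiplicative drift bound for RLS on OneMax, a standard pairing in runtime analysis (the problem size is an invented example): for RLS the drift constant is δ = 1/n, giving E[T] ≤ n(1 + ln n) from any starting point.

```python
import math
import random

random.seed(4)

def rls_onemax(n):
    # Random Local Search on OneMax: flip one uniformly random bit,
    # accept the offspring if it is at least as good.
    x = [random.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n:
        i = random.randrange(n)
        y = x[:]
        y[i] ^= 1
        if sum(y) >= sum(x):
            x = y
        steps += 1
    return steps

n = 50
runs = [rls_onemax(n) for _ in range(30)]
avg = sum(runs) / len(runs)
# Multiplicative drift: E[X_t - X_{t+1} | X_t] = X_t / n with X = n - fitness,
# so delta = 1/n and E[T] <= n * (1 + ln n).
drift_bound = n * (1 + math.log(n))
print(avg, drift_bound)
```

The observed average sits below the drift bound, as the theorem guarantees in expectation.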
Scruggs, Stacie; Mama, Scherezade K; Carmack, Cindy L; Douglas, Tommy; Diamond, Pamela; Basen-Engquist, Karen
2018-01-01
This study examined whether a physical activity intervention affects transtheoretical model (TTM) variables that facilitate exercise adoption in breast cancer survivors. Sixty sedentary breast cancer survivors were randomized to a 6-month lifestyle physical activity intervention or standard care. TTM variables that have been shown to facilitate exercise adoption and progress through the stages of change, including self-efficacy, decisional balance, and processes of change, were measured at baseline, 3 months, and 6 months. Differences in TTM variables between groups were tested using repeated measures analysis of variance. The intervention group had significantly higher self-efficacy ( F = 9.55, p = .003) and perceived significantly fewer cons of exercise ( F = 5.416, p = .025) at 3 and 6 months compared with the standard care group. Self-liberation, counterconditioning, and reinforcement management processes of change increased significantly from baseline to 6 months in the intervention group, and self-efficacy and reinforcement management were significantly associated with improvement in stage of change. The stage-based physical activity intervention increased use of select processes of change, improved self-efficacy, decreased perceptions of the cons of exercise, and helped participants advance in stage of change. These results point to the importance of using a theory-based approach in interventions to increase physical activity in cancer survivors.
ERIC Educational Resources Information Center
Magnusson, Kristjan Thor; Hrafnkelsson, Hannes; Sigurgeirsson, Ingvar; Johannsson, Erlingur; Sveinsson, Thorarinn
2012-01-01
The aim of this study was to assess the effects of a 2-year cluster-randomized physical activity and dietary intervention program among 7-year-old (at baseline) elementary school participants on body composition and objectively measured cardiorespiratory fitness. Three pairs of schools were selected and matched, then randomly selected as either an…
ERIC Educational Resources Information Center
Newton, Nicola C.; Conrod, Patricia J.; Slade, Tim; Carragher, Natacha; Champion, Katrina E.; Barrett, Emma L.; Kelly, Erin V.; Nair, Natasha K.; Stapinski, Lexine; Teesson, Maree
2016-01-01
Background: This study investigated the long-term effectiveness of Preventure, a selective personality-targeted prevention program, in reducing the uptake of alcohol, harmful use of alcohol, and alcohol-related harms over a 3-year period. Methods: A cluster randomized controlled trial was conducted to assess the effectiveness of Preventure.…
Montague, Michael J; Li, Gang; Gandolfi, Barbara; Khan, Razib; Aken, Bronwen L; Searle, Steven M J; Minx, Patrick; Hillier, LaDeana W; Koboldt, Daniel C; Davis, Brian W; Driscoll, Carlos A; Barr, Christina S; Blackistone, Kevin; Quilez, Javier; Lorente-Galdos, Belen; Marques-Bonet, Tomas; Alkan, Can; Thomas, Gregg W C; Hahn, Matthew W; Menotti-Raymond, Marilyn; O'Brien, Stephen J; Wilson, Richard K; Lyons, Leslie A; Murphy, William J; Warren, Wesley C
2014-12-02
Little is known about the genetic changes that distinguish domestic cat populations from their wild progenitors. Here we describe a high-quality domestic cat reference genome assembly and comparative inferences made with other cat breeds, wildcats, and other mammals. Based upon these comparisons, we identified positively selected genes enriched for genes involved in lipid metabolism that underpin adaptations to a hypercarnivorous diet. We also found positive selection signals within genes underlying sensory processes, especially those affecting vision and hearing in the carnivore lineage. We observed an evolutionary tradeoff between functional olfactory and vomeronasal receptor gene repertoires in the cat and dog genomes, with an expansion of the feline chemosensory system for detecting pheromones at the expense of odorant detection. Genomic regions harboring signatures of natural selection that distinguish domestic cats from their wild congeners are enriched in neural crest-related genes associated with behavior and reward in mouse models, as predicted by the domestication syndrome hypothesis. Our description of a previously unidentified allele for the gloving pigmentation pattern found in the Birman breed supports the hypothesis that cat breeds experienced strong selection on specific mutations drawn from random bred populations. Collectively, these findings provide insight into how the process of domestication altered the ancestral wildcat genome and build a resource for future disease mapping and phylogenomic studies across all members of the Felidae.
2012-02-01
This study examined how parenting and family characteristics targeted in a selective prevention program mediated effects on key youth proximal outcomes related to violence perpetration. The selective intervention was evaluated within the context of a multi-site trial involving random assignment of 37 schools to four conditions: a universal intervention composed of a student social-cognitive curriculum and teacher training, a selective family-focused intervention with a subset of high-risk students, a condition combining these two interventions, and a no-intervention control condition. Two cohorts of sixth-grade students (total N = 1,062) exhibiting high levels of aggression and social influence were the sample for this study. Analyses of pre-post change compared to controls using intent-to-treat analyses found no significant effects. However, estimates incorporating participation of those assigned to the intervention and predicted participation among those not assigned revealed significant positive effects on student aggression, use of aggressive strategies for conflict management, and parental estimation of student's valuing of achievement. Findings also indicated intervention effects on two targeted family processes: discipline practices and family cohesion. Mediation analyses found evidence that change in these processes mediated effects on some outcomes, notably aggressive behavior and valuing of school achievement. Results support the notion that changing parenting practices and the quality of family relationships can prevent the escalation in aggression and maintain positive school engagement for high-risk youth.
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. 
We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
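The Markovian spatial-dependence idea above can be made concrete with a small data-generating sketch (in Python; the parameter names psi, theta0, theta1, and p are illustrative, not the authors' notation): local presence on consecutive trail segments follows a two-state Markov chain, and detection on a segment requires site occupancy, local presence, and a successful detection draw.

```python
import random

def simulate_trail_detections(psi, theta0, theta1, p, n_segments, rng):
    """One trail (site) surveyed as n_segments spatial replicates.

    psi    -- probability the site is occupied
    theta0 -- P(locally present on a segment | absent on the previous one)
    theta1 -- P(locally present on a segment | present on the previous one)
    p      -- detection probability given occupancy and local presence
    """
    occupied = rng.random() < psi
    # local presence forms a Markov chain along the trail
    present = rng.random() < theta0
    detections = []
    for _ in range(n_segments):
        detections.append(int(occupied and present and rng.random() < p))
        present = rng.random() < (theta1 if present else theta0)
    return occupied, detections
```

When theta1 exceeds theta0, detections cluster along the trail, which is exactly the dependence that biases standard occupancy models fit to such data.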
Effects of zinc supplementation on subscales of anorexia in children: A randomized controlled trial.
Khademian, Majid; Farhangpajouh, Neda; Shahsanaee, Armindokht; Bahreynian, Maryam; Mirshamsi, Mehran; Kelishadi, Roya
2014-01-01
This study aimed to assess the effects of zinc supplementation on improving appetite and its subscales in children. The study was conducted in 2013 in Isfahan, Iran, in two phases. In the first phase, after validation of the Child Eating Behaviour Questionnaire (CEBQ), it was completed for 300 randomly selected preschool children. The second phase was conducted as a randomized controlled trial: 80 of these children were randomly selected and assigned to two groups of equal size, receiving either zinc (10 mg/day) or placebo for 12 weeks. Overall, 77 children completed the trial (39 in the zinc group and 38 in the control group). The results showed that zinc supplementation can improve calorie intake in children by affecting some CEBQ subscales, such as Emotional Over-Eating and Food Responsiveness. Zinc supplementation had a positive impact in promoting calorie intake and some subscales of anorexia.
Using environmental heterogeneity to plan for sea-level rise.
Hunter, Elizabeth A; Nibbelink, Nathan P
2017-12-01
Environmental heterogeneity is increasingly being used to select conservation areas that will provide for future biodiversity under a variety of climate scenarios. This approach, termed conserving nature's stage (CNS), assumes environmental features respond to climate change more slowly than biological communities, but will CNS be effective if the stage were to change as rapidly as the climate? We tested the effectiveness of using CNS to select sites in salt marshes for conservation in coastal Georgia (U.S.A.), where environmental features will change rapidly as sea level rises. We calculated species diversity based on distributions of 7 bird species with a variety of niches in Georgia salt marshes. Environmental heterogeneity was assessed across six landscape gradients (e.g., elevation, salinity, and patch area). We used 2 approaches to select sites with high environmental heterogeneity: site complementarity (environmental diversity [ED]) and local environmental heterogeneity (environmental richness [ER]). Sites selected based on ER predicted present-day species diversity better than randomly selected sites (up to an 8.1% improvement), were resilient to areal loss from SLR (1.0% average areal loss by 2050 compared with 0.9% loss of randomly selected sites), and provided habitat to a threatened species (0.63 average occupancy compared with 0.6 average occupancy of randomly selected sites). Sites selected based on ED predicted species diversity no better or worse than random and were not resilient to SLR (2.9% average areal loss by 2050). Despite the discrepancy between the 2 approaches, CNS is a viable strategy for conservation site selection in salt marshes because the ER approach was successful. It has potential for application in other coastal areas where SLR will affect environmental features, but its performance may depend on the magnitude of geological changes caused by SLR. 
Our results indicate that conservation planners who have heretofore excluded low-lying coasts from CNS planning could include coastal ecosystems in regional conservation strategies. © 2017 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Elkatlawy, Saeid; Gomariz, María.; Soto-Sánchez, Cristina; Martínez Navarrete, Gema; Fernández, Eduardo; Fimia, Antonio
2014-05-01
In this paper we report on the use of digital holographic microscopy for 3D real-time imaging of cultured neurons and neural networks in vitro. Digital holographic microscopy is employed as an assessment tool to study the biophysical origin of neurodegenerative diseases. Our study consists of the morphological characterization of the axons, dendrites, and cell bodies. The average size and thickness of the soma were 21 and 13 μm, respectively. Furthermore, the average size and diameter of some randomly selected neurites were 4.8 and 0.89 μm, respectively. In addition, the spatiotemporal growth process of cell bodies and extensions was fitted by a non-linear model, reflecting the non-linear behavior of the nervous system. Remarkably, this non-linear process represents the relationship between the growth of the cell body and that of the axons and dendrites of the neurons.
NASA Astrophysics Data System (ADS)
Shahali, Edy H. M.; Halim, Lilia; Treagust, David F.; Won, Mihye; Chandrasegaran, A. L.
2017-04-01
This study investigated the understanding of science process skills (SPS) of 329 science teachers from 52 primary schools selected by random sampling. The understanding of SPS was measured in terms of conceptual and operational aspects of SPS using an instrument called the Science Process Skills Questionnaire (SPSQ) with a Cronbach's alpha reliability of 0.88. The findings showed that the teachers' conceptual understanding of SPS was much weaker than their practical application of SPS. The teachers' understanding of SPS differed by their teaching qualifications but not so much by their teaching experience. Emphasis needs to be given to both conceptual and operational understanding of SPS during pre-service and in-service teacher education to enable science teachers to use the skills and implement inquiry-based lessons in schools.
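The SPSQ reliability figure quoted above (Cronbach's alpha of 0.88) is computed from the item variances and the variance of per-respondent total scores; a minimal sketch of that computation:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a questionnaire.

    items -- one list of scores per item, all of equal length
             (items[i][j] is respondent j's score on item i).
    """
    k = len(items)
    sum_item_vars = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_vars / statistics.pvariance(totals))
```

Alpha approaches 1 when items covary strongly (respondent totals spread much more than individual items) and falls toward or below 0 when items are uncorrelated or opposed.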
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. 
A large amount of generated output is available over the web.
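As an illustration of the error measure described above (a re-implementation sketch, not the authors' toolbox), the following generates sample points from two model means plus independent noise, clusters them with a small Lloyd's K-means, and counts points clustered incorrectly under the best relabeling of clusters:

```python
import math
import random
from itertools import permutations

def kmeans(points, centers, iters=10):
    """Plain Lloyd's algorithm; centers supplies the initial guesses."""
    k = len(centers)
    centers = [tuple(c) for c in centers]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: math.dist(p, centers[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = tuple(sum(x) / len(members) for x in zip(*members))
    return labels

def clustering_error(true_labels, labels, k):
    """Misclustered points, minimized over relabelings of the clusters."""
    return min(sum(t != perm[lab] for t, lab in zip(true_labels, labels))
               for perm in permutations(range(k)))

# seed the model with two hand-picked means plus independent Gaussian noise
rng = random.Random(1)
points, truth = [], []
for lab, mu in enumerate([(0.0, 0.0), (10.0, 10.0)]):
    for _ in range(50):
        points.append((mu[0] + rng.gauss(0, 1), mu[1] + rng.gauss(0, 1)))
        truth.append(lab)
labels = kmeans(points, centers=[points[0], points[50]])
err = clustering_error(truth, labels, 2)
```

Repeating this with increasing noise variance, or with more replications averaged per point, reproduces the kind of error-versus-variance comparison the paper tabulates across algorithms.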
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to the percept of self-motion. PMID:26973588
Female mating preferences determine system-level evolution in a gene network model.
Fierst, Janna L
2013-06-01
Environmental patterns of directional, stabilizing and fluctuating selection can influence the evolution of system-level properties like evolvability and mutational robustness. Intersexual selection produces strong phenotypic selection and these dynamics may also affect the response to mutation and the potential for future adaptation. To assess the influence of mating preferences on these evolutionary properties, I modeled a male trait and female preference determined by separate gene regulatory networks. I studied three sexual selection scenarios: sexual conflict, a Gaussian model of the Fisher process described in Lande (Proc Natl Acad Sci 78(6):3721-3725, 1981) and a good genes model in which the male trait signalled his mutational condition. I measured the effects these mating preferences had on the potential for traits and preferences to evolve towards new states, and on the mutational robustness of both the phenotype and the individual's overall viability. All types of sexual selection increased male phenotypic robustness relative to a randomly mating population. The Fisher model also reduced male evolvability and mutational robustness for viability. Under good genes sexual selection, males evolved an increased mutational robustness for viability. Females choosing their mates is a scenario that is sufficient to create selective forces that impact genetic evolution and shape the evolutionary response to mutation and environmental selection. These dynamics will inevitably develop in any population where sexual selection is operating, and affect the potential for future adaptation.
A mathematical study of a random process proposed as an atmospheric turbulence model
NASA Technical Reports Server (NTRS)
Sidwell, K.
1977-01-01
A random process is formed by the product of a local Gaussian process and a random amplitude process, and the sum of that product with an independent mean value process. The mathematical properties of the resulting process are developed, including the first and second order properties and the characteristic function of general order. An approximate method for the analysis of the response of linear dynamic systems to the process is developed. The transition properties of the process are also examined.
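A sketch of the process structure just described: the product of a local Gaussian process and a random amplitude, plus an independent mean value. The two-point amplitude distribution and constant mean are illustrative assumptions; the excess kurtosis of the samples shows how the product process departs from a pure Gaussian, which is the property that makes such models useful for patchy atmospheric turbulence.

```python
import random
import statistics

def product_process_sample(n, sigma, amplitudes, mean_value, rng):
    """Draw n values of x = a * g + m: local Gaussian g ~ N(0, sigma^2),
    an independent random amplitude a, and an (here constant) mean value m."""
    out = []
    for _ in range(n):
        g = rng.gauss(0.0, sigma)
        a = rng.choice(amplitudes)
        out.append(a * g + mean_value)
    return out

def excess_kurtosis(xs):
    mu = statistics.fmean(xs)
    m2 = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / m2 ** 2 - 3.0
```

For an amplitude a independent of the unit Gaussian g, the kurtosis of a*g is 3 E[a^4]/E[a^2]^2, so excess kurtosis is positive whenever the amplitude is non-degenerate (about 1.92 for a drawn uniformly from {0.5, 1.5}).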
Automatic learning-based beam angle selection for thoracic IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca
Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). 
The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ at risk sparing, and were superior to plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrated the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload, while sustaining plan quality.
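The trained forest and its anatomical feature set are not public, so as a hedged sketch of the selection step only, the routine below greedily picks beams by a stand-in predicted score while enforcing a minimum interbeam separation (a simple proxy for the learned interbeam dependencies):

```python
def angular_distance(a, b):
    """Smallest separation between two gantry angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def select_beam_angles(candidates, score, k, min_sep):
    """Greedily keep the k highest-scoring angles that stay at least
    min_sep degrees away from every beam already chosen."""
    chosen = []
    for ang in sorted(candidates, key=score, reverse=True):
        if all(angular_distance(ang, c) >= min_sep for c in chosen):
            chosen.append(ang)
            if len(chosen) == k:
                break
    return sorted(chosen)
```

With a toy score that favors angles near 180° (e.g., score(ang) = -angular_distance(ang, 180)), selecting 3 beams from 10° candidates with a 30° separation yields 150°, 180°, and 210°; in the paper's framework the score would instead come from the forest's per-beam prediction.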
DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J
2016-04-01
Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school-based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends 5 years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum, one level of the Family Check-up model, on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n = 500) was randomly assigned to the intervention, and the other half (n = 498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within school 1 but not within schools 2 or 3. The effects of friend selection in school 1 translated into reductions in observed deviancy training 5 years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study, the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance 5 years later.
Randomness and diversity matter in the maintenance of the public resources
NASA Astrophysics Data System (ADS)
Liu, Aizhi; Zhang, Yanling; Chen, Xiaojie; Sun, Changyin
2017-03-01
Most previous models of the public goods game assume two possible strategies, i.e., investing all or nothing. The real-life situation is rarely all or nothing. In this paper, we consider that multiple strategies are adopted in a well-mixed population, and each strategy represents an investment to produce the public goods. Past efforts have found that randomness matters in the evolution of fairness in the ultimatum game. In a framework involving no other mechanisms, we study how diversity and randomness influence the average investment of the population, defined as the mean value of all individuals' strategies. The level of diversity is increased by increasing the strategy number, and the level of randomness is increased by increasing the mutation probability, or by decreasing the population size or the selection intensity. We find that a higher level of diversity and a higher level of randomness lead to larger average investment and are more favorable to the evolution of cooperation. Under weak selection, the average investment changes very little with the strategy number, the population size, and the mutation probability. Under strong selection, the average investment changes very little with the strategy number and the population size, but changes a lot with the mutation probability. Under intermediate selection, the average investment increases significantly with the strategy number and the mutation probability, and decreases significantly with the population size. These findings are relevant to understanding how public resources can be maintained.
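An illustrative minimal simulation of this kind of setup (not the authors' model): a well-mixed population holding one of several investment levels, updated by Fermi-rule imitation at selection intensity beta with mutation (random exploration) probability mu. In the mu = 1 limit the dynamics are neutral, so the time-averaged investment sits near the mean of the strategy set.

```python
import math
import random

def simulate_public_goods(strategies, pop_size, r, beta, mu, steps, rng):
    """Time-averaged mean investment under imitation dynamics.
    strategies: allowed investment levels; r: multiplication factor;
    beta: selection intensity; mu: mutation (exploration) probability."""
    pop = [rng.choice(strategies) for _ in range(pop_size)]
    running = 0.0
    for _ in range(steps):
        i = rng.randrange(pop_size)
        if rng.random() < mu:
            pop[i] = rng.choice(strategies)       # random exploration
        else:
            j = rng.randrange(pop_size)
            pool = sum(pop)
            # payoff = equal share of the multiplied pool minus own cost
            pay_i = r * pool / pop_size - pop[i]
            pay_j = r * pool / pop_size - pop[j]
            # Fermi rule: imitate j with a probability that grows with
            # the payoff difference, scaled by the selection intensity
            if rng.random() < 1.0 / (1.0 + math.exp(-beta * (pay_j - pay_i))):
                pop[i] = pop[j]
        running += sum(pop) / pop_size
    return running / steps
```

Varying len(strategies), pop_size, mu, and beta in this sketch is one way to probe the diversity and randomness effects the abstract describes.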
Different functional classes of genes are characterized by different compositional properties.
D'Onofrio, Giuseppe; Ghosh, Tapash Chandra; Saccone, Salvatore
2007-12-22
A compositional analysis on a set of human genes classified in several functional classes was performed. We found that the GC3, i.e. the GC level at the third codon positions, of the genes involved in cellular metabolism was significantly higher than that of the genes involved in information storage and processing. Analyses of human/Xenopus orthologous genes showed that: (i) the GC3 increment of the genes involved in cellular metabolism was significantly higher than that of the genes involved in information storage and processing; and (ii) a strong correlation between the GC3 and the corresponding GCi, i.e. the GC level of introns, was found in each functional class. The non-randomness of the GC increments favours the selective hypothesis of gene/genome evolution.
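GC3 as used above is simply the G+C fraction at every third base of an in-frame coding sequence; a minimal sketch:

```python
def gc3(cds):
    """GC level at third codon positions of an in-frame coding sequence."""
    third = cds.upper()[2::3]          # bases at codon position 3
    if not third:
        return 0.0
    return sum(base in "GC" for base in third) / len(third)
```

The analogous intron statistic (GCi) would count G+C over all intron bases rather than only third positions.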
Watson, Richard A; Szathmáry, Eörs
2016-02-01
The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the 'uninformed' process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles - the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs. Copyright © 2015 Elsevier Ltd. All rights reserved.
National Board Certified Physical Educators: perceived changes related to the certification process.
Woods, Amelia Mays; Rhoades, Jesse Lee
2012-06-01
In this study, we examined National Board certified physical education teachers' (NBCPETs') perceptions of change as a result of certification. Sixty-five randomly selected NBCPETs (53 women, 12 men) were interviewed. Analysis was done through the lens of Lawson's (1989) model of the interactive factors influencing workplace conditions for the physical education teacher. Several themes connected to teachers' views of themselves as NBCPETs surfaced, in particular more reflection on teaching and a greater focus on student learning and assessment, including an increased emphasis on individualizing instruction. An elevation in their perceived status and credibility and expanded opportunities within the educational community also emerged. Alternatively, several NBCPETs explained that the certification process had little or no effect on their teaching.
Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images
Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence
2013-01-01
Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421
Weighted stacking of seismic AVO data using hybrid AB semblance and local similarity
NASA Astrophysics Data System (ADS)
Deng, Pan; Chen, Yangkang; Zhang, Yu; Zhou, Hua-Wei
2016-04-01
The common-midpoint (CMP) stacking technique plays an important role in enhancing the signal-to-noise ratio (SNR) in seismic data processing and imaging. Weighted stacking is often used to improve the performance of conventional equal-weight stacking in further attenuating random noise and handling the amplitude variations in real seismic data. In this study, we propose to use a hybrid framework of combining AB semblance and a local-similarity-weighted stacking scheme. The objective is to achieve an optimal stacking of the CMP gathers with class II amplitude-variation-with-offset (AVO) polarity-reversal anomaly. The selection of high-quality near-offset reference trace is another innovation of this work because of its better preservation of useful energy. Applications to synthetic and field seismic data demonstrate a great improvement using our method to capture the true locations of weak reflections, distinguish thin-bed tuning artifacts, and effectively attenuate random noise.
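A simplified sketch in the spirit of the weighting described (the actual method combines AB semblance with smooth local similarity; here the weight is a windowed, non-negative correlation of each trace with a chosen reference trace):

```python
import math

def local_similarity_weighted_stack(gather, reference, window):
    """Stack traces in a CMP gather, weighting each sample by the local
    correlation between the trace and a reference trace."""
    n = len(reference)
    stacked = [0.0] * n
    for t in range(n):
        lo, hi = max(0, t - window), min(n, t + window + 1)
        num = den = 0.0
        for trace in gather:
            a, b = trace[lo:hi], reference[lo:hi]
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            w = (sum(x * y for x, y in zip(a, b)) / (na * nb)) if na > 0 and nb > 0 else 0.0
            w = max(w, 0.0)   # suppress anticorrelated (noise-dominated) samples
            num += w * trace[t]
            den += w
        stacked[t] = num / den if den > 0 else 0.0
    return stacked
```

Traces that locally agree with the reference contribute fully, while polarity-reversed or noise-dominated samples are down-weighted toward zero, which is the mechanism that lets weighted stacking cope with class II AVO polarity reversals better than an equal-weight stack.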
Random Walk Quantum Clustering Algorithm Based on Space
NASA Astrophysics Data System (ADS)
Xiao, Shufen; Dong, Yumin; Ma, Hongyang
2018-01-01
In the random quantum walk, a quantum simulation of the classical random walk, data points interact when selecting the appropriate walk strategy by taking advantage of quantum entanglement; thus, the results obtained with the quantum walk differ from those obtained with the classical walk. A new quantum walk clustering algorithm based on space is proposed by applying the quantum walk to clustering analysis. In this algorithm, data points are viewed as walking participants, and similar data points are clustered using the walk function in the pay-off matrix according to a certain rule. The walk process is simplified by implementing a space-combining rule. The proposed algorithm is validated by a simulation test and proves superior to existing clustering algorithms, namely K-means, PCA + K-means, and LDA-Km. The effects of some of the parameters in the proposed algorithm on its performance are also analyzed and discussed, and specific suggestions are provided.
Karlsson, Stefan L.; Thomson, Nicholas; Mutreja, Ankur; Connor, Thomas; Sur, Dipika; Ali, Mohammad; Clemens, John; Dougan, Gordon; Holmgren, Jan; Lebens, Michael
2016-01-01
Genomic data generated from clinical Vibrio cholerae O1 isolates collected over a five year period in an area of Kolkata, India with seasonal cholera outbreaks allowed a detailed genetic analysis of serotype switching that occurred from Ogawa to Inaba and back to Ogawa. The change from Ogawa to Inaba resulted from mutational disruption of the methyltransferase encoded by the wbeT gene. Re-emergence of the Ogawa serotype was found to result either from expansion of an already existing Ogawa clade or reversion of the mutation in an Inaba clade. Our data suggests that such transitions are not random events but rather driven by as yet unidentified selection mechanisms based on differences in the structure of the O1 antigen or in the serotype-determining wbeT gene. PMID:27706170