Broome, John
1984-10-01
This article considers what justification can be found for selecting randomly and in what circumstances it applies, including that of selecting patients to be treated by a scarce medical procedure. The author demonstrates that balancing the merits of fairness, common good, equal rights, and equal chance as they apply in various situations frequently leads to the conclusion that random selection may not be the most appropriate mode of selection. Broome acknowledges that, in the end, we may be forced to conclude that the only merit of random selection is the political one of guarding against partiality and oppression.
Random Selection for Drug Screening
Center for Human Reliability Studies
2007-05-01
Simple random sampling is generally the starting point for a random sampling process. This sampling technique ensures that each individual within a group (population) has an equal chance of being selected. There are a variety of ways to implement random sampling in a practical situation.
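A minimal sketch of simple random sampling, in which every member of the population has the same chance of selection (the names and counts here are invented for illustration):

```python
import random

def simple_random_sample(population, k, seed=None):
    """Draw k members without replacement; each member is equally likely."""
    rng = random.Random(seed)
    return rng.sample(list(population), k)

# Example: select 5 of 50 employees for screening.
employees = [f"emp{i:02d}" for i in range(50)]
chosen = simple_random_sample(employees, 5, seed=42)
print(chosen)
```

Seeding makes a drawing reproducible for audit purposes; omitting the seed gives a fresh drawing each time.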
Random Selection for Drug Screening
Center for Human Reliability Studies
2007-05-01
Sampling is the process of choosing some members out of a group or population. Probability sampling, or random sampling, is the process of selecting members by chance with a known probability of each individual being chosen.
Subset selection under random censorship
Kim, J.S.
1983-03-01
Suppose we want to model a situation commonly arising, for example, in industrial life-testing, in which a two-component series system is under study. The system functions if and only if both the Type A component and the Type B component are functioning. The distribution, or an unknown parameter in the distribution, of the Type A component is of interest. Let X_1, X_2, ..., X_n be independent and identically distributed random variables denoting the lifelengths of n Type A components with a continuous distribution function F, and let Y_1, Y_2, ..., Y_n be independent and identically distributed random variables denoting the lifelengths of n Type B components, also with a continuous distribution function H(.). Failure of the Type B component causes system failure, thereby making it impossible to observe the failure time of the Type A component. The random variables Y_1, Y_2, ..., Y_n are referred to as time-to-censorship or censoring random variables, and the distribution function H(.) as the censoring distribution. We assume that (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is an independent and identically distributed sequence of random pairs defined on a common probability space. Our observations consist of the minima Z_1 = min(X_1, Y_1), Z_2 = min(X_2, Y_2), ..., Z_n = min(X_n, Y_n), which are i.i.d. random variables. The objective of this paper is to formulate a k-sample selection problem under random censorship.
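The censored-observation setup above can be sketched with a quick simulation; the exponential lifelength distributions and their rates are arbitrary choices for illustration, not from the paper:

```python
import random

def censored_sample(n, rate_x=1.0, rate_y=0.5, seed=0):
    """Simulate randomly right-censored data: we observe only
    Z_i = min(X_i, Y_i) plus an indicator of whether the Type A
    failure was actually seen (X_i <= Y_i)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.expovariate(rate_x)   # Type A lifelength (of interest)
        y = rng.expovariate(rate_y)   # Type B lifelength (censoring time)
        data.append((min(x, y), x <= y))  # (Z_i, delta_i)
    return data

sample = censored_sample(1000)
observed_frac = sum(d for _, d in sample) / len(sample)
print(f"fraction of uncensored Type A failures: {observed_frac:.2f}")
```

For these exponential rates, the expected uncensored fraction is rate_x / (rate_x + rate_y) = 2/3, which the simulation approximates.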
Randomized selection on the GPU
Monroe, Laura Marie; Wendelberger, Joanne R; Michalak, Sarah E
2011-01-13
We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
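The guess-and-check idea can be sketched on the CPU; this is a generic Las Vegas selection in the spirit of the abstract, not the authors' GPU implementation:

```python
import random

def randomized_select(data, n, seed=None):
    """Las Vegas selection of the n-th largest element (1-indexed).
    A random pivot serves as the probabilistic guess; the check
    narrows the data to the partition containing the answer, so
    only a shrinking subset needs heavy processing."""
    rng = random.Random(seed)
    items, k = list(data), n
    while True:
        pivot = rng.choice(items)          # probabilistic guess
        above = [x for x in items if x > pivot]
        equal = [x for x in items if x == pivot]
        if k <= len(above):                # answer lies above the pivot
            items = above
        elif k <= len(above) + len(equal): # the guess was correct
            return pivot
        else:                              # answer lies below the pivot
            k -= len(above) + len(equal)
            items = [x for x in items if x < pivot]

values = random.sample(range(10_000), 1000)
print(randomized_select(values, 10))  # the 10th largest value
```

The result is always exact; only the running time is random, which is what makes the approach a Las Vegas (rather than Monte Carlo) algorithm.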
Random selection as a confidence building tool
Macarthur, Duncan W; Hauck, Danielle; Langner, Diana; Thron, Jonathan; Smith, Morag; Williams, Richard
2010-01-01
Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. The first concern can be addressed by performing the measurements within the host facility using instruments under the host's control. Because the data output in this measurement scenario is also under host control, it is difficult for the monitoring party to have confidence in that data. One technique for addressing this difficulty is random selection. The concept of random selection can be thought of as four steps: (1) The host presents several 'identical' copies of a component or system to the monitor. (2) One (or more) of these copies is randomly chosen by the monitors for use in the measurement system. (3) Similarly, one or more is randomly chosen to be validated further at a later date in a monitor-controlled facility. (4) Because the two components or systems are identical, validation of the 'validation copy' is equivalent to validation of the measurement system. This procedure sounds straightforward, but effective application may be quite difficult. Although random selection is often viewed as a panacea for confidence building, the amount of confidence generated depends on the monitor's continuity of knowledge for both validation and measurement systems. In this presentation, we will discuss the random selection technique, as well as where and how this technique might be applied to generate maximum confidence. In addition, we will discuss the role of modular measurement-system design in facilitating random selection and describe a simple modular measurement system incorporating six small ³He neutron detectors and a single high-purity germanium gamma detector.
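The four steps can be sketched together with a quantity the abstract does not give but that illustrates why random selection builds confidence: the hypergeometric probability that a tampered copy lands in the validated set. The counts and probabilities here are invented for the example:

```python
from math import comb
import random

def assign_copies(copies, n_validate, seed=None):
    """Monitor randomly splits the host's 'identical' copies into one
    for the measurement system and several for later validation."""
    rng = random.Random(seed)
    shuffled = rng.sample(list(copies), len(copies))
    return shuffled[0], shuffled[1:1 + n_validate]

def detection_probability(n_copies, n_tampered, n_validated):
    """Chance that at least one tampered copy ends up validated:
    1 - P(no tampered copy is among the validated ones)."""
    return 1 - comb(n_copies - n_tampered, n_validated) / comb(n_copies, n_validated)

measure, validate = assign_copies(range(6), n_validate=2, seed=1)
print(detection_probability(6, 1, 2))  # one tampered copy among six, two validated
```

Because the host cannot predict which copies the monitor will pick, tampering with any copy risks detection with a probability the monitor can compute in advance.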
Species selection and random drift in macroevolution.
Chevin, Luis-Miguel
2016-03-01
Species selection resulting from trait-dependent speciation and extinction is increasingly recognized as an important mechanism of phenotypic macroevolution. However, the recent bloom in statistical methods quantifying this process faces a scarcity of dynamical theory for their interpretation, notably regarding the relative contributions of deterministic versus stochastic evolutionary forces. I use simple diffusion approximations of birth-death processes to investigate how the expected and random components of macroevolutionary change depend on phenotype-dependent speciation and extinction rates, as can be estimated empirically. I show that the species selection coefficient for a binary trait, and selection differential for a quantitative trait, depend not only on differences in net diversification rates (speciation minus extinction), but also on differences in species turnover rates (speciation plus extinction), especially in small clades. The randomness in speciation and extinction events also produces a species-level equivalent to random genetic drift, which is stronger for higher turnover rates. I then show how microevolutionary processes including mutation, organismic selection, and random genetic drift cause state transitions at the species level, allowing comparison of evolutionary forces across levels. A key parameter that would be needed to apply this theory is the distribution and rate of origination of new optimum phenotypes along a phylogeny. PMID:26880617
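A minimal discrete-time sketch of trait-dependent speciation and extinction for a binary trait (not the paper's diffusion approximation; the rates and clade sizes here are invented) shows both the deterministic pull and the drift-like randomness the abstract describes:

```python
import random

def simulate_clade(n0, n1, lam, mu, steps, seed=0):
    """Discrete-time birth-death sketch for a binary trait: per step,
    each species goes extinct with probability mu[state] and otherwise
    speciates with probability lam[state]. Returns the trajectory of
    the trait-1 frequency among extant species."""
    rng = random.Random(seed)
    counts = [n0, n1]
    freqs = []
    for _ in range(steps):
        new = [0, 0]
        for s in (0, 1):
            for _ in range(counts[s]):
                if rng.random() < mu[s]:
                    continue                       # extinction
                new[s] += 2 if rng.random() < lam[s] else 1
        counts = new
        total = counts[0] + counts[1]
        if total == 0:
            break
        freqs.append(counts[1] / total)
    return freqs

# Similar net diversification (lam - mu) but much higher turnover
# (lam + mu) for state 1, so state 1 experiences stronger drift.
traj = simulate_clade(50, 50, lam=(0.10, 0.30), mu=(0.05, 0.25), steps=50)
print(traj[-1])
```

Rerunning with different seeds illustrates the species-level analogue of random genetic drift: the high-turnover state fluctuates more from run to run even when net diversification rates are matched.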
Randomness in post-selected events
NASA Astrophysics Data System (ADS)
Phuc Thinh, Le; de la Torre, Gonzalo; Bancal, Jean-Daniel; Pironio, Stefano; Scarani, Valerio
2016-03-01
Bell inequality violations can be used to certify private randomness for use in cryptographic applications. In photonic Bell experiments, a large amount of the data that is generated comes from no-detection events and presumably contains little randomness. This raises the question of whether randomness can be extracted only from the smaller post-selected subset corresponding to proper detection events, instead of from the entire set of data. This could in principle be feasible without opening an analogue of the detection loophole as long as the min-entropy of the post-selected data is evaluated by taking all the information into account, including no-detection events. The possibility of extracting randomness from a short string has a practical advantage, because it reduces the computational time of the extraction. Here, we investigate the above idea in a simple scenario, where the devices and the adversary behave according to i.i.d. strategies. We show that indeed almost all the randomness is present in the pair of outcomes for which at least one detection happened. We further show that in some cases applying a pre-processing on the data can capture features that an analysis based on global frequencies only misses, thus resulting in the certification of more randomness. We then briefly consider non-i.i.d. strategies and provide an explicit example of such a strategy that is more powerful than any i.i.d. one even in the asymptotic limit of infinitely many measurement rounds, something that was not reported before in the context of Bell inequalities.
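The contrast between the full data and the post-selected subset can be illustrated with min-entropy on an invented toy outcome distribution (these probabilities are not from the paper):

```python
import math

def min_entropy(probs):
    """Min-entropy H_min = -log2(max_x p(x)) of an outcome distribution;
    it lower-bounds the extractable randomness per sample."""
    return -math.log2(max(probs))

# Toy photonic-style distribution over detector-pair outcomes:
# most rounds are double no-detections, which carry almost no randomness.
p_full = [0.90, 0.04, 0.03, 0.03]   # (no,no), (no,click), (click,no), (click,click)
p_post = [p / 0.10 for p in p_full[1:]]  # condition on at least one detection

print(min_entropy(p_full), min_entropy(p_post))
```

Per round the full string has very little min-entropy (the no-detection outcome dominates), while per post-selected round the entropy is much higher, which is why extracting from the short post-selected string can recover almost all the randomness.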
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Random Selection Procedures for Mass Media Services General Procedures § 1.1603...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Grants by Random Selection Random Selection Procedures for Mass Media Services General Procedures §...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection....
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection....
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random...
Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2010-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
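The proper-orthogonal-decomposition step shared by the basis-selection procedures can be sketched as an SVD of a response snapshot matrix; the synthetic two-mode data and the energy threshold below are illustrative assumptions, not the paper's test articles:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition: SVD of a snapshot matrix
    (dofs x time samples); keep the fewest modes whose cumulative
    squared singular values reach the target energy fraction."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(cum, energy) + 1)
    return u[:, :n_modes], n_modes

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
x = np.linspace(0, 1, 64)
# Synthetic response dominated by two spatial modes plus small noise.
snaps = (np.outer(np.sin(np.pi * x), np.sin(12 * t))
         + 0.5 * np.outer(np.sin(2 * np.pi * x), np.cos(7 * t))
         + 0.01 * rng.standard_normal((64, 200)))
basis, n = pod_basis(snaps)
print(n)
```

Projecting the full-order equations onto the few retained columns of `basis` is what makes the reduced-order simulation cheap compared with the physical degrees of freedom.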
Random-temporal block selection for video stabilization
NASA Astrophysics Data System (ADS)
Battiato, S.; Bruna, A. R.; Puglisi, G.
2011-01-01
Digital video stabilization makes it possible to acquire video sequences free of disturbing jerkiness by removing from the image sequence the effects caused by unwanted camera movements. One of the bottlenecks of these approaches is the local motion estimation step. In this paper we propose a Block Selector able to speed up block-matching-based video stabilization techniques without considerably degrading stabilization performance. Both history and random criteria are taken into account in the selection process. Experiments on real cases confirm the effectiveness of the proposed approach even in critical conditions.
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
Hierarchy and extremes in selections from pools of randomized proteins.
Boyer, Sébastien; Biswas, Dipanwita; Kumar Soshee, Ananda; Scaramozzino, Natale; Nizak, Clément; Rivoire, Olivier
2016-03-29
Variation and selection are the core principles of Darwinian evolution, but quantitatively relating the diversity of a population to its capacity to respond to selection is challenging. Here, we examine this problem at a molecular level in the context of populations of partially randomized proteins selected for binding to well-defined targets. We built several minimal protein libraries, screened them in vitro by phage display, and analyzed their response to selection by high-throughput sequencing. A statistical analysis of the results reveals two main findings. First, libraries with the same sequence diversity but built around different "frameworks" typically have vastly different responses; second, the distribution of responses of the best binders in a library follows a simple scaling law. We show how an elementary probabilistic model based on extreme value theory rationalizes the latter finding. Our results have implications for designing synthetic protein libraries, estimating the density of functional biomolecules in sequence space, characterizing diversity in natural populations, and experimentally investigating evolvability (i.e., the potential for future evolution). PMID:26969726
Intracytoplasmic morphologically selected sperm injection: a prospective randomized trial.
Antinori, Monica; Licata, Emanuele; Dani, Gianluca; Cerusico, Fabrizio; Versaci, Caterina; d'Angelo, Daniela; Antinori, Severino
2008-06-01
The aim of this prospective randomized study was to assess the advantages of a new modified intracytoplasmic sperm injection (ICSI) technique called intracytoplasmic morphologically selected sperm injection (IMSI) over the conventional ICSI procedure in the treatment of patients with severe oligoasthenoteratozoospermia. The new procedure consisted of IMSI based on a preliminary motile sperm organellar morphology examination under ×6600 magnification. A total of 446 couples with at least two previous diagnoses of severe oligoasthenoteratozoospermia, 3 years of primary infertility, the woman aged 35 years or younger, and an undetected female factor were randomized to IVF micro-insemination treatments: ICSI (n = 219; group 1) and IMSI (n = 227; group 2). A comparison between the two different techniques was made in terms of pregnancy, miscarriage and implantation rates. The data showed that IMSI resulted in a higher clinical pregnancy rate (39.2% versus 26.5%; P = 0.004) than ICSI when applied to severe male infertility cases. Despite their initial poor reproductive prognosis, patients with two or more previous failed attempts benefited the most from IMSI in terms of pregnancy (29.8% versus 12.9%; P = 0.017) and miscarriage rates (17.4% versus 37.5%). At present, 35 healthy babies have been born following the introduction of this promising technique in daily IVF practice.
Materials selection for oxide-based resistive random access memories
Guo, Yuzheng; Robertson, John
2014-12-01
The energies of atomic processes in resistive random access memories (RRAMs) are calculated for four typical oxides, HfO₂, TiO₂, Ta₂O₅, and Al₂O₃, to define a materials selection process. O vacancies have the lowest defect formation energy in the O-poor limit and dominate the processes. A band diagram defines the operating Fermi energy and O chemical potential range. It is shown how the scavenger metal can be used to vary the O vacancy formation energy, via controlling the O chemical potential, and the mean Fermi energy. The high endurance of Ta₂O₅ RRAM is related to its more stable amorphous phase and the adaptive lattice rearrangements of its O vacancy.
Thomas, D.L.; Johnson, D.; Griffith, B.
2006-01-01
Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, …
Ribozyme motif structure mapped using random recombination and selection
WANG, QING S.; UNRAU, PETER J.
2005-01-01
Isolating the core functional elements of an RNA is normally performed during the characterization of a new RNA in order to simplify further biochemical analysis. The removal of extraneous sequence is challenging and can lead to biases that result from the incomplete sampling of deletion variants. An impartial solution to this problem is to construct a library containing a large number of deletion constructs and to select functional RNA isolates that are at least as efficient as their full-length progenitors. Here, we use nonhomologous recombination and selection to isolate the catalytic core of a pyrimidine nucleotide synthase ribozyme. A variable-length pool of ~10^8 recombinant molecules that included deletions, inversions, and translocations of a 271-nucleotide-long ribozyme isolate was constructed by digesting and randomly religating its DNA genome. In vitro selection for functional ribozymes was then performed in a size-dependent and a size-independent manner. The final pools had nearly equivalent catalytic rates even though their length distributions were completely different, indicating that a diverse range of deletion constructs were functionally active. Four short sequence islands, requiring as little as 81 nt of sequence, were found within all of the truncated ribozymes and could be folded into a secondary structure consisting of three helix–loops. Our findings suggest that nonhomologous recombination is a highly efficient way to isolate a ribozyme's core motif and could prove to be a useful method for evolving new ribozyme functions from pre-existing sequences in a manner that may have played an important role early in evolution. PMID:15703441
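A toy sketch of building such a variable-length library by random digestion and religation; the cut counts, fragment-retention probability, and orientation probability below are invented for illustration:

```python
import random

def random_religation(genome, n_cuts, rng):
    """Digest at random points, then religate surviving fragments in
    random order and orientation: a crude model of nonhomologous
    recombination producing deletions, inversions, and translocations."""
    cuts = sorted(rng.sample(range(1, len(genome)), n_cuts))
    frags = [genome[i:j] for i, j in zip([0] + cuts, cuts + [len(genome)])]
    rng.shuffle(frags)                                 # translocations
    kept = [f for f in frags if rng.random() < 0.7]    # deletions
    return "".join(f if rng.random() < 0.5 else f[::-1] for f in kept)  # inversions

rng = random.Random(7)
parent = "".join(rng.choice("ACGU") for _ in range(271))
library = [random_religation(parent, 5, rng) for _ in range(10)]
print(sorted(set(len(v) for v in library)))
```

Each variant is at most the parent's length (fragments are only dropped, never duplicated here), which mirrors why selection over such a pool favors the shortest constructs that retain full activity.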
32 CFR 1624.1 - Random selection procedures for induction.
Code of Federal Regulations, 2012 CFR
2012-07-01
....1 Section 1624.1 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE... date of birth of the registrant that appears on his Selective Service Registration Record on the day... date of birth in all matters pertaining to his relations with the Selective Service System....
32 CFR 1624.1 - Random selection procedures for induction.
Code of Federal Regulations, 2011 CFR
2011-07-01
....1 Section 1624.1 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE... date of birth of the registrant that appears on his Selective Service Registration Record on the day... date of birth in all matters pertaining to his relations with the Selective Service System....
32 CFR 1624.1 - Random selection procedures for induction.
Code of Federal Regulations, 2010 CFR
2010-07-01
....1 Section 1624.1 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE... date of birth of the registrant that appears on his Selective Service Registration Record on the day... date of birth in all matters pertaining to his relations with the Selective Service System....
32 CFR 1624.1 - Random selection procedures for induction.
Code of Federal Regulations, 2013 CFR
2013-07-01
....1 Section 1624.1 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE... date of birth of the registrant that appears on his Selective Service Registration Record on the day... date of birth in all matters pertaining to his relations with the Selective Service System....
32 CFR 1624.1 - Random selection procedures for induction.
Code of Federal Regulations, 2014 CFR
2014-07-01
....1 Section 1624.1 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE... date of birth of the registrant that appears on his Selective Service Registration Record on the day... date of birth in all matters pertaining to his relations with the Selective Service System....
Study on MAX-MIN Ant System with Random Selection in Quadratic Assignment Problem
NASA Astrophysics Data System (ADS)
Iimura, Ichiro; Yoshida, Kenji; Ishibashi, Ken; Nakayama, Shigeru
Ant Colony Optimization (ACO), a type of swarm intelligence inspired by the foraging behavior of ants, has been studied extensively, and its effectiveness has been shown by many researchers. Previous studies have reported that MAX-MIN Ant System (MMAS) is an effective ACO algorithm. MMAS maintains the balance between intensification and diversification of pheromone by limiting the quantity of pheromone to a range between minimum and maximum values. In this paper, we propose MAX-MIN Ant System with Random Selection (MMASRS) to improve the search performance even further. MMASRS is a new ACO algorithm in which random selection is newly introduced into MMAS. Random selection is one of the edge-choosing methods used by agents (ants). In our experimental evaluation using ten quadratic assignment problems, we show that the proposed MMASRS with random selection is superior to the conventional MMAS without random selection from the viewpoint of search performance.
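The two ingredients described above, an edge-choosing rule with random selection and MMAS-style pheromone clamping, might be sketched as follows (the mixing probability and pheromone values are invented; this is not the authors' exact update rule):

```python
import random

def choose_edge(pheromone, candidates, p_random, rng):
    """Edge choice with random selection: with probability p_random pick a
    candidate uniformly at random (diversification); otherwise do a
    roulette-wheel draw proportional to pheromone (intensification)."""
    if rng.random() < p_random:
        return rng.choice(candidates)
    weights = [pheromone[c] for c in candidates]
    r = rng.random() * sum(weights)
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]

def clamp(tau, tau_min, tau_max):
    """MMAS pheromone limits: keep every trail inside [tau_min, tau_max]."""
    return max(tau_min, min(tau_max, tau))

rng = random.Random(3)
pher = {0: 5.0, 1: 0.5, 2: 0.5}
picks = [choose_edge(pher, [0, 1, 2], p_random=0.1, rng=rng) for _ in range(1000)]
print(picks.count(0) / 1000)
```

With p_random = 0.1 the strongly reinforced edge 0 is chosen roughly 78% of the time, while the uniform component guarantees every edge keeps a floor probability of being explored.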
ATP selection in a random peptide library consisting of prebiotic amino acids.
Kang, Shou-Kai; Chen, Bai-Xue; Tian, Tian; Jia, Xi-Shuai; Chu, Xin-Yi; Liu, Rong; Dong, Peng-Fei; Yang, Qing-Yong; Zhang, Hong-Yu
2015-10-23
Based upon many theoretical findings on protein evolution, we proposed a ligand-selection model for the origin of proteins, in which the most ancient proteins originated from ATP selection in a pool of random peptides. To test this ligand-selection model, we constructed a random peptide library consisting of 15 types of prebiotic amino acids and then used cDNA display to perform six rounds of in vitro selection with ATP. By means of next-generation sequencing, the most prevalent sequence was defined. Biochemical and biophysical characterization of the selected peptide showed that it was stable and foldable and had ATP-hydrolysis activity as well.
MOMENT-BASED METHOD FOR RANDOM EFFECTS SELECTION IN LINEAR MIXED MODELS
Ahn, Mihye; Lu, Wenbin
2012-01-01
The selection of random effects in linear mixed models is an important yet challenging problem in practice. We propose a robust and unified framework for automatically selecting random effects and estimating covariance components in linear mixed models. A moment-based loss function is first constructed for estimating the covariance matrix of random effects. Two types of shrinkage penalties, a hard thresholding operator and a new sandwich-type soft-thresholding penalty, are then imposed for sparse estimation and random effects selection. Compared with existing approaches, the new procedure does not require any distributional assumption on the random effects and error terms. We establish the asymptotic properties of the resulting estimator in terms of its consistency in both random effects selection and variance component estimation. Optimization strategies are suggested to tackle the computational challenges involved in estimating the sparse variance-covariance matrix. Furthermore, we extend the procedure to incorporate the selection of fixed effects as well. Numerical results show promising performance of the new approach in selecting both random and fixed effects and, consequently, improving the efficiency of estimating model parameters. Finally, we apply the approach to a data set from the Amsterdam Growth and Health study. PMID:23105913
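The hard-thresholding idea (one of the two penalties; the sandwich-type soft threshold is not shown) can be sketched on an invented covariance estimate, where zeroing a diagonal variance component amounts to deselecting that random effect:

```python
import numpy as np

def hard_threshold_cov(cov, thresh):
    """Zero out random-effect components whose estimated standard
    deviation falls below thresh, together with the corresponding
    covariances, yielding a sparse estimate that selects random effects."""
    cov = cov.copy()
    keep = np.sqrt(np.diag(cov)) >= thresh
    cov[~keep, :] = 0.0
    cov[:, ~keep] = 0.0
    return cov, keep

# Invented covariance estimate for three candidate random effects;
# the third has negligible variance.
est = np.array([[1.00, 0.30, 0.02],
                [0.30, 0.80, 0.01],
                [0.02, 0.01, 0.0004]])
sparse, selected = hard_threshold_cov(est, thresh=0.1)
print(selected)
```

Because the rule operates only on the estimated covariance matrix, it needs no distributional assumption on the random effects, in the spirit of the moment-based framework described above.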
Application of random effects to the study of resource selection by animals.
Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L
2006-07-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and that resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, has not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, in both statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions.
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Selective advantage for sexual replication with random haploid fusion
NASA Astrophysics Data System (ADS)
Tannenbaum, Emmanuel
2008-03-01
This talk develops a simplified set of models describing asexual and sexual replication in unicellular diploid organisms. The models assume organisms whose genomes consist of two chromosomes, where each chromosome is assumed to be functional if and only if it is equal to some master sequence. The fitness of an organism is determined by the number of functional chromosomes in its genome. For a population replicating asexually, a cell replicates both of its chromosomes, and then divides and splits its genetic material evenly between the two cells. For a population replicating sexually, a given cell first divides into two haploids, which enter a haploid pool. Within the haploid pool, haploids fuse into diploids, which then divide via the normal mitotic process. When the cost for sex is small, as measured by the ratio of the characteristic haploid fusion time to the characteristic growth time, we find that sexual replication with random haploid fusion leads to a greater mean fitness for the population than a purely asexual strategy. The results of this talk are consistent with previous studies suggesting that sex is favored at intermediate mutation rates, for slowly replicating organisms, and at high population densities.
Acceptance sampling using judgmental and randomly selected samples
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
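As a point of reference for the frequentist special case mentioned above, the classical "zero-failures" calculation gives the number of random samples needed so that, if all of them turn out acceptable, a stated fraction of a large population can be declared acceptable at a stated confidence. A minimal sketch (the function name and the large-population approximation are assumptions, not the paper's Bayesian model):

```python
from math import ceil, log

def samples_required(confidence, acceptable_fraction):
    """Smallest n such that observing n acceptable random samples (zero
    failures) supports the claim that at least `acceptable_fraction` of a
    large population is acceptable, at the given confidence level.
    Solves (acceptable_fraction)^n <= 1 - confidence for n."""
    return ceil(log(1.0 - confidence) / log(acceptable_fraction))

# e.g. 95% confidence that at least 99% of items are acceptable:
n = samples_required(0.95, 0.99)  # 299 samples
```

The Bayesian model in the paper generalizes this by letting expert judgment concentrate sampling effort on the high-risk group.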
Drugs in oral fluid in randomly selected drivers.
Drummer, Olaf H; Gerostamoulos, Dimitri; Chu, Mark; Swann, Philip; Boorman, Martin; Cairns, Ian
2007-08-01
There were 13,176 roadside drug tests performed in the first year of the random drug-testing program conducted in the state of Victoria. Drugs targeted in the testing were methamphetamines and Delta(9)-tetrahydrocannabinol (THC). On-site screening was conducted by the police using DrugWipe while the driver was still in the vehicle; if positive, a second test on collected oral fluid, using the Rapiscan, was performed in a specially outfitted "drug bus" located adjacent to the testing area. Oral fluid in presumptive positive cases was sent to the laboratory for confirmation, with limits of quantification of 5, 5, and 2 ng/mL for methamphetamine (MA), methylenedioxy-methamphetamine (MDMA), and THC, respectively. Recovery experiments conducted in the laboratory showed quantitative recovery of analytes from the collector. When oral fluid could not be collected, blood was taken from the driver and sent to the laboratory for confirmation. These roadside tests gave 313 positive cases following GC-MS confirmation, comprising 269, 118, and 87 cases positive for MA, MDMA, and THC, respectively. The median oral fluid concentrations (undiluted) of MA, MDMA, and THC were 1136, 2724, and 81 ng/mL, respectively. The overall drug-positive rate was 2.4% of the screened population. This rate was highest in drivers of cars (2.8%). The average age of drivers detected with a positive drug reading was 28 years. Drivers of large vehicles (trucks over 4.5 t) were older, 38 years on average. Females accounted for 19% of all positives, although none of the positive truck drivers were female. There was one false positive for cannabis when the results of both on-site devices were considered, and four for methamphetamines.
Statistical considerations of the random selection process in a drug testing program
Burtis, C.A.; Owings, J.H.; Leete, R.S. Jr.
1987-01-01
In a prospective drug testing program, individuals whose job classifications have been defined as sensitive are placed in a selection pool. On a periodic basis, individuals are chosen from this pool for drug testing. Random selection is a fair and impartial approach. A random selection process generates a Poisson distribution of probabilities that can be used to predict how many times an individual will be selected during a specific time interval. This information can be used to model the selection part of a drug testing program to determine whether specific conditions of testing are met. For example, the probability of being selected a given number of times during the testing period can be minimized or maximized by varying the frequency of the sampling process. Consequently, the Poisson distribution and the mathematics governing it can be used to structure a drug testing program to meet the needs and dictates of any given situation.
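The Poisson reasoning described above is straightforward to reproduce: with a constant selection rate, the number of times an individual is chosen over a testing period is approximately Poisson with mean equal to the expected number of selections. A minimal sketch (the program parameters below are hypothetical):

```python
from math import exp, factorial

def selection_probabilities(pool_size, draws_per_test, tests_per_year, max_k=5):
    """Poisson probabilities of an individual being selected k times in a
    year (k = 0..max_k), assuming each test draws `draws_per_test` names
    uniformly at random from a pool of `pool_size` individuals."""
    lam = tests_per_year * draws_per_test / pool_size  # expected selections
    return [lam**k * exp(-lam) / factorial(k) for k in range(max_k + 1)]

# Hypothetical program: 500-person pool, 50 names drawn each month.
probs = selection_probabilities(pool_size=500, draws_per_test=50, tests_per_year=12)
# probs[0] is the chance of never being selected all year (e^-1.2, about 30%)
```

Varying `tests_per_year` or `draws_per_test` shifts the whole distribution, which is exactly the lever the abstract describes for structuring a program.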
Random Forest (RF) Wrappers for Waveband Selection and Classification of Hyperspectral Data.
Poona, Nitesh Keshavelal; van Niekerk, Adriaan; Nadel, Ryan Leslie; Ismail, Riyad
2016-02-01
Hyperspectral data collected using a field spectroradiometer were used to model asymptomatic stress in Pinus radiata and Pinus patula seedlings infected with the pathogen Fusarium circinatum. Spectral data were analyzed using the random forest algorithm. To improve the classification accuracy of the model, subsets of wavebands were selected using three feature selection algorithms: (1) Boruta; (2) recursive feature elimination (RFE); and (3) area under the receiver operating characteristic curve of the random forest (AUC-RF). Results highlighted the robustness of the above feature selection methods when used in conjunction with the random forest algorithm for analyzing hyperspectral data. Overall, the Boruta feature selection algorithm provided the best results. When discriminating F. circinatum stress in Pinus radiata seedlings, Boruta-selected wavebands (n = 69) yielded the best overall classification accuracies (training error of 17.00%, independent test error of 17.00%, and an AUC value of 0.91). Classification results were, however, significantly lower for P. patula seedlings, with a training error of 24.00%, independent test error of 38.00%, and an AUC value of 0.65. A hybrid selection method that utilizes combinations of wavebands selected from the three feature selection algorithms was also tested. The hybrid method showed an improvement in classification accuracies for P. patula, and no improvement for P. radiata. The results of this study provide impetus towards implementing a hyperspectral framework for detecting stress within nursery environments.
Kronborg, O; Madsen, P
1975-01-01
The results of highly selective vagotomy without drainage and selective vagotomy with pyloroplasty for duodenal ulcer were compared in a randomized, controlled trial of a series of 100 patients. The frequency of dumping, diarrhoea, and epigastric fullness was significantly lower after highly selective (6, 6, and 8 percent) than after selective vagotomy (30, 20, and 28 percent) one year after the operations. Recurrent and persisting duodenal ulcers appearing from one to four years after the operations were significantly more frequent after highly selective (22 percent) than after selective vagotomy (8 percent). No significant relationships were found between recurrent ulceration and gastric acid secretion measurements after the two operations. The Hollander response was early positive in 28 percent and late positive in 30 percent of the patients subjected to highly selective vagotomy, while the corresponding figures after selective vagotomy were 26 and 32 percent. The overall clinical results of the two operations were not different according to the classification of Visick. Excluding the patients with recurrence resulted in significantly better clinical results after highly selective vagotomy. PMID:1093947
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of the past data.
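For orientation, the three crisp (non-fuzzy) sample statistics that a mean-semivariance-skewness objective trades off can be computed as follows. This is only a sketch of the underlying quantities, under the assumption of a plain historical return series; the paper's actual contribution, the fuzzy random treatment, is not reproduced:

```python
import statistics

def portfolio_moments(returns):
    """Sample mean, below-mean semivariance, and skewness of a return
    series -- the three quantities a mean-semivariance-skewness model
    trades off. Crisp version only (no fuzzy random variables)."""
    n = len(returns)
    mu = statistics.fmean(returns)
    semivar = sum(min(r - mu, 0.0) ** 2 for r in returns) / n   # downside risk
    sd = statistics.pstdev(returns)
    skew = sum((r - mu) ** 3 for r in returns) / (n * sd**3)    # asymmetry
    return mu, semivar, skew
```

The model then maximizes skewness subject to bounds on the first two quantities.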
Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
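The five criteria compared above all take the form "badness of fit plus complexity penalty" and differ only in the penalty term. A quick sketch of the standard formulas (the function name is mine; `log_lik` is the model's maximized log-likelihood, `k` the parameter count, `n` the sample size):

```python
from math import log

def information_criteria(log_lik, k, n):
    """Standard model-selection criteria; smaller values are preferred."""
    aic = 2 * k - 2 * log_lik
    return {
        "AIC": aic,
        "AICC": aic + 2 * k * (k + 1) / (n - k - 1),   # small-sample correction
        "BIC": k * log(n) - 2 * log_lik,
        "CAIC": k * (log(n) + 1) - 2 * log_lik,        # BIC plus k
        "HQIC": 2 * k * log(log(n)) - 2 * log_lik,
    }
```

Because the penalties grow at different rates in n and k, the criteria can and do disagree, which is what makes their identification rates worth comparing.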
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or...
This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides which links to select among its neighbours, according to the received data, with a certain probability. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using the iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and several criteria for consensus are obtained. A numerical example shows the reliability of the proposed methods.
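For readers unfamiliar with consensus protocols, the basic non-adaptive linear update that such protocols build on looks like this: each agent nudges its state toward its neighbours' states until all errors vanish. This sketch is generic; the paper's protocol additionally adapts link selection using the range of the received data, which is not attempted here:

```python
def consensus_step(states, neighbours, eps=0.2):
    """One synchronous step of a generic linear consensus update: each
    agent moves a fraction eps toward the mean of its neighbours.
    `neighbours[i]` lists the indices agent i receives data from."""
    new = []
    for i, x in enumerate(states):
        nbrs = neighbours[i]
        new.append(x + eps * sum(states[j] - x for j in nbrs) / max(len(nbrs), 1))
    return new
```

On a connected undirected graph with small enough eps, iterating this drives all states to a common value.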
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Katagiri, Hideki
2010-10-01
This paper proposes a portfolio selection problem that incorporates an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well defined. Therefore, by introducing the Sharpe ratio, one of the important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
Continuous-Time Mean-Variance Portfolio Selection with Random Horizon
Yu, Zhiyong
2013-12-15
This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
Selection pressures give composite correlated random walks Lévy walk characteristics.
Reynolds, A M
2013-09-01
Composite correlated random walks have been posited as a strong alternative to Lévy walks as models of multi-scale forager movement patterns. Here it is shown that, if plastic, intrinsic composite correlated random walks will, under selection pressures, evolve to resemble optimal Lévy walks when foraging is non-destructive. The fittest composite correlated random walkers are found to be those that come closest to being optimal Lévy walkers. This may explain why such a diverse range of foragers have movement patterns that can be approximated by optimal Lévy walks, and shows that the 'Lévy-flight foraging' hypothesis has a broad hinterland. The new findings are consistent with recent observations of mussels Mytilus edulis and the Australian desert ant Melophorus bagoti, which suggest that animals approximate a Lévy walk by adopting an intrinsic composite movement strategy with different modes.
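To make the contrast concrete: a Lévy walk draws its step lengths from a heavy-tailed power law, p(l) ~ l^(-mu), and mu near 2 is the classical optimum for non-destructive foraging. Step lengths with this distribution can be generated by inverse-transform sampling; a toy sketch (function and parameter names are mine):

```python
import random

def levy_step_lengths(n, mu=2.0, l_min=1.0, rng=random):
    """Draw n step lengths from a power law p(l) ~ l^(-mu) for l >= l_min
    via inverse-transform sampling. mu = 2 is the classical optimal
    Levy-walk exponent for non-destructive foraging."""
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0)) for _ in range(n)]
```

A composite correlated random walk instead mixes a small number of exponential-tailed movement modes; the paper's point is that selection tunes such mixtures until their step distributions approximate this power law.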
Emergence of multilevel selection in the prisoner's dilemma game on coevolving random networks
NASA Astrophysics Data System (ADS)
Szolnoki, Attila; Perc, Matjaž
2009-09-01
We study the evolution of cooperation in the prisoner's dilemma game, whereby a coevolutionary rule is introduced that molds the random topology of the interaction network in two ways. First, existing links are deleted whenever a player adopts a new strategy or its degree exceeds a threshold value; second, new links are added randomly after a given number of game iterations. These coevolutionary processes correspond to the generic formation of new links and deletion of existing links that, especially in human societies, appear frequently as a consequence of ongoing socialization, change of lifestyle or death. Due to the counteraction of deletions and additions of links the initial heterogeneity of the interaction network is qualitatively preserved, and thus cannot be held responsible for the observed promotion of cooperation. Indeed, the coevolutionary rule evokes the spontaneous emergence of a powerful multilevel selection mechanism, which despite the sustained random topology of the evolving network, maintains cooperation across the whole span of defection temptation values.
Skalet, Alison H.; Cevallos, Vicky; Ayele, Berhan; Gebre, Teshome; Zhou, Zhaoxia; Jorgensen, James H.; Zerihun, Mulat; Habte, Dereje; Assefa, Yared; Emerson, Paul M.; Gaynor, Bruce D.; Porco, Travis C.; Lietman, Thomas M.; Keenan, Jeremy D.
2010-01-01
Background It is widely thought that widespread antibiotic use selects for community antibiotic resistance, though this has been difficult to prove in the setting of a community-randomized clinical trial. In this study, we used a randomized clinical trial design to assess whether macrolide resistance was higher in communities treated with mass azithromycin for trachoma, compared to untreated control communities. Methods and Findings In a cluster-randomized trial for trachoma control in Ethiopia, 12 communities were randomized to receive mass azithromycin treatment of children aged 1–10 years at months 0, 3, 6, and 9. Twelve control communities were randomized to receive no antibiotic treatments until the conclusion of the study. Nasopharyngeal swabs were collected from randomly selected children in the treated group at baseline and month 12, and in the control group at month 12. Antibiotic susceptibility testing was performed on Streptococcus pneumoniae isolated from the swabs using Etest strips. In the treated group, the mean prevalence of azithromycin resistance among all monitored children increased from 3.6% (95% confidence interval [CI] 0.8%–8.9%) at baseline, to 46.9% (37.5%–57.5%) at month 12 (p = 0.003). In control communities, azithromycin resistance was 9.2% (95% CI 6.7%–13.3%) at month 12, significantly lower than the treated group (p<0.0001). Penicillin resistance was identified in 0.8% (95% CI 0%–4.2%) of isolates in the control group at 1 year, and in no isolates in the children-treated group at baseline or 1 year. Conclusions This cluster-randomized clinical trial demonstrated that compared to untreated control communities, nasopharyngeal pneumococcal resistance to macrolides was significantly higher in communities randomized to intensive azithromycin treatment. Mass azithromycin distributions were given more frequently than currently recommended by the World Health Organization's trachoma program. Azithromycin use in this setting did
Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2012-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
Marcil-Gratton, N; Duchesne, C; St-Germain-Roy, S; Tulandi, T
1988-01-01
The characteristics of 96 women who requested reversal of tubal ligation at two fertility clinics in Montreal were compared with those of 403 randomly selected sterilized women in Quebec. The two groups were found to have a similar socioeconomic profile. In only two respects were the groups significantly different: the women who requested reversal generally had been sterilized at an earlier age and had more complex marital histories. PMID:3355950
Random Drift versus Selection in Academic Vocabulary: An Evolutionary Analysis of Published Keywords
Bentley, R. Alexander
2008-01-01
The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example. PMID:18728786
Topology-selective jamming of fully-connected, code-division random-access networks
NASA Technical Reports Server (NTRS)
Polydoros, Andreas; Cheng, Unjeng
1990-01-01
The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.
Bentley, R Alexander
2008-08-27
The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example.
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2014 CFR
2014-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2011 CFR
2011-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2010 CFR
2010-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2013 CFR
2013-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2012 CFR
2012-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...
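The sampling idea in these regulatory entries, locating a point on a square surface by generating one random number per axis, can be sketched as follows. The grid resolution and return convention here are illustrative assumptions, not the regulation's exact procedure:

```python
import random

def random_grid_point(side_cm=100, cell_cm=1, rng=random):
    """Select one cell of a square grid by generating two random numbers,
    one per axis, as in two-dimensional random sample selection.
    Returns the lower-left corner of the chosen cell, in cm."""
    cells = side_cm // cell_cm
    x = rng.randrange(cells) * cell_cm   # horizontal coordinate
    y = rng.randrange(cells) * cell_cm   # vertical coordinate
    return x, y

point = random_grid_point(rng=random.Random(0))  # reproducible with a seeded RNG
```

Seeding the generator, as above, makes a selection auditable, which matters in a compliance setting.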
Malaria life cycle intensifies both natural selection and random genetic drift.
Chang, Hsiao-Han; Moss, Eli L; Park, Daniel J; Ndiaye, Daouda; Mboup, Souleymane; Volkman, Sarah K; Sabeti, Pardis C; Wirth, Dyann F; Neafsey, Daniel E; Hartl, Daniel L
2013-12-10
Analysis of genome sequences of 159 isolates of Plasmodium falciparum from Senegal yields an extraordinarily high proportion (26.85%) of protein-coding genes with the ratio of nonsynonymous to synonymous polymorphism greater than one. This proportion is much greater than observed in other organisms. Also unusual is that the site-frequency spectra of synonymous and nonsynonymous polymorphisms are virtually indistinguishable. We hypothesized that the complicated life cycle of malaria parasites might lead to qualitatively different population genetics from that predicted from the classical Wright-Fisher (WF) model, which assumes a single random-mating population with a finite and constant population size in an organism with nonoverlapping generations. This paper summarizes simulation studies of random genetic drift and selection in malaria parasites that take into account their unusual life history. Our results show that random genetic drift in the malaria life cycle is more pronounced than under the WF model. Paradoxically, the efficiency of purifying selection in the malaria life cycle is also greater than under WF, and the relative efficiency of positive selection varies according to conditions. Additionally, the site-frequency spectrum under neutrality is also more skewed toward low-frequency alleles than expected with WF. These results highlight the importance of considering the malaria life cycle when applying existing population genetic tools based on the WF model. The same caveat applies to other species with similarly complex life cycles.
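The classical Wright-Fisher null model referred to above is simple to simulate: each generation's allele count is a binomial draw at the previous generation's allele frequency. A minimal sketch of that baseline (not the authors' malaria life-cycle simulation):

```python
import random

def wright_fisher(n_individuals, p0, generations, rng=random):
    """Classical Wright-Fisher drift for a haploid population of constant
    size: resample n_individuals alleles each generation at the current
    frequency p. Returns the final allele frequency."""
    p = p0
    for _ in range(generations):
        count = sum(1 for _ in range(n_individuals) if rng.random() < p)
        p = count / n_individuals
        if p in (0.0, 1.0):  # fixation or loss ends the walk
            break
    return p
```

The paper's finding is that layering the malaria life cycle on top of this baseline strengthens both drift and purifying selection relative to the plain model.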
Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach
NASA Astrophysics Data System (ADS)
Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar
2010-10-01
To reach an investment goal, one has to select a combination of securities from among different portfolios containing a large number of securities. The past records of each security alone do not guarantee the future return. As there are many uncertain factors which directly or indirectly influence the stock market, and as some newer stock markets do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.
Pan, Xiaoyong; Zhu, Lin; Fan, Yong-Xian; Yan, Junchi
2014-11-13
Protein-RNA interaction plays a crucial role in many biological processes, such as protein synthesis, transcription and post-transcription of gene expression, and the pathogenesis of disease; in particular, RNAs often function through binding to proteins. Identification of the binding interface region is especially useful for cellular pathway analysis and drug design. In this study, we proposed a novel approach for binding site identification in proteins, which not only integrates local features and global features from the protein sequence directly, but also constructs a balanced training dataset using sub-sampling based on submodularity subset selection. First, we extracted local features and global features from the protein sequence, such as evolutionary information and molecular weight. Second, the number of non-interaction sites is much larger than the number of interaction sites, which leads to a sample imbalance problem and hence a machine learning model biased toward non-interaction sites. To better resolve this problem, instead of the usual random sub-sampling of over-represented non-interaction sites, a novel sampling approach based on submodularity subset selection was employed, which can select a more representative data subset. Finally, random forests were trained on the optimally selected training subsets to predict interaction sites. Our results showed that the proposed method is very promising for predicting protein-RNA interaction residues: it achieved an accuracy of 0.863, which is better than other state-of-the-art methods. Furthermore, random forest feature importance analysis indicated that the extracted global features have very strong discriminative ability for identifying interaction residues.
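The baseline that the submodularity-based selection improves on, plain random down-sampling of the majority class, looks like this. The paper's submodular selection itself is more involved and is not reproduced here; names are illustrative:

```python
import random

def balanced_subsample(positives, negatives, rng=random):
    """Random down-sampling: draw equal-sized subsets of the interaction
    (positive) and non-interaction (negative) sites so a classifier sees
    a balanced training set. This is the simple baseline the paper
    replaces with submodularity-based subset selection."""
    k = min(len(positives), len(negatives))
    return rng.sample(positives, k), rng.sample(negatives, k)
```

Random down-sampling discards majority-class examples blindly; submodular selection instead picks a subset that best covers the diversity of the discarded class.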
Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset
Dreschler, Wouter A.
2015-01-01
Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if it were possible to further optimize the available materials by increasing the function’s steepness. The objective is to show if the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, first the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric function (≥9.7%/dB) were selected. Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists. Therefore, the measurement procedure was changed to randomly select the sentences during testing. Random selection may interfere with a representative occurrence of phonemes. However, in our material, the median phonemic occurrence remained close to that of the original test. This finding indicates that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function. PMID:25964195
Variable selection for covariate-adjusted semiparametric inference in randomized clinical trials
Yuan, Shuai; Zhang, Hao Helen; Davidian, Marie
2013-01-01
Extensive baseline covariate information is routinely collected on participants in randomized clinical trials, and it is well-recognized that a proper covariate-adjusted analysis can improve the efficiency of inference on the treatment effect. However, such covariate adjustment has engendered considerable controversy, as post hoc selection of covariates may involve subjectivity and lead to biased inference, while prior specification of the adjustment may exclude important variables from consideration. Accordingly, how to select covariates objectively to gain maximal efficiency is of broad interest. We propose and study the use of modern variable selection methods for this purpose in the context of a semiparametric framework, under which variable selection in modeling the relationship between outcome and covariates is separated from estimation of the treatment effect, circumventing the potential for selection bias associated with standard analysis of covariance methods. We demonstrate that such objective variable selection techniques combined with this framework can identify key variables and lead to unbiased and efficient inference on the treatment effect. A critical issue in finite samples is validity of estimators of uncertainty, such as standard errors and confidence intervals for the treatment effect. We propose an approach to estimation of sampling variation of estimated treatment effect and show its superior performance relative to that of existing methods. PMID:22733628
Setti, Amanda S; Figueira, Rita C S; Braga, Daniela P A F; Iaconelli, Assumpto; Borges, Edson
2012-04-01
The aim of this prospective randomized study was to determine whether the use of intracytoplasmic morphologically selected sperm injection (IMSI) is associated with gender incidence. Couples who underwent IVF-preimplantation genetic screening (PGS) cycles, as a result of advanced maternal age, were randomly allocated into two groups: intracytoplasmic sperm injection (ICSI; n=80) or intracytoplasmic morphologically selected sperm injection (IMSI; n=80). The incidences of genders were compared between ICSI- and IMSI-derived embryos. Considering all the biopsied embryos characterized as normal for the sex chromosomes, IMSI resulted in a significantly higher incidence of female embryos compared with ICSI (65.1% versus 54.0%, respectively, P=0.0277). After analysing only embryos euploid for the eight selected chromosomes, a significantly higher incidence of XX embryos derived from IMSI was also observed compared with ICSI cycles (66.9% versus 52.5%, respectively, P=0.0322). This result was confirmed by logistic regression, which demonstrated a nearly 2-fold increase in euploid XX embryos derived from spermatozoa selected by high magnification (OR 1.83, 95% CI 1.05-3.35, P=0.032). A higher proportion of morphologically normal spermatozoa analysed under high magnification seem to carry the X chromosome.
Campbell, Claudia P.; Raubenheimer, David; Badaloo, Asha V.; Gluckman, Peter D.; Martinez, Claudia; Gosby, Alison; Simpson, Stephen J.; Osmond, Clive; Boyne, Michael S.; Forrester, Terrence E.
2016-01-01
Background and objectives: Birthweight differences between kwashiorkor and marasmus suggest that intrauterine factors influence the development of these syndromes of malnutrition and may modulate risk of obesity through dietary intake. We tested the hypotheses that the target protein intake in adulthood is associated with birthweight, and that protein leveraging to maintain this target protein intake would influence energy intake (EI) and body weight in adult survivors of malnutrition. Methodology: Sixty-three adult survivors of marasmus and kwashiorkor could freely compose a diet from foods containing 10%, 15% and 25% of energy as protein (percentage of energy derived from protein, PEP; Phase 1) for 3 days. Participants were then randomized in Phase 2 (5 days) to diets with PEP fixed at 10%, 15% or 25%. Results: Self-selected PEP was similar in both groups. In the groups combined, selected PEP was 14.7%, which differed significantly (P < 0.0001) from the null expectation (16.7%) of no selection. Self-selected PEP was inversely related to birthweight, the effect disappearing after adjusting for sex and current body weight. In Phase 2, PEP correlated inversely with EI (P = 0.002) and with weight change from Phase 1 to 2 (P = 0.002). Protein intake increased with increasing PEP, but to a lesser extent than energy intake increased with decreasing PEP. Conclusions and implications: Macronutrient intakes were not independently related to birthweight or diagnosis. In a free-choice situation (Phase 1), subjects selected a dietary PEP significantly lower than random. Lower-PEP diets induce increased energy and decreased protein intake, and are associated with weight gain. PMID:26817484
Size-selection initiation model extended to include shape and random factors
Trenholme, J B; Feit, M D; Rubenchik, A M
2005-11-02
The Feit-Rubenchik size-selection damage model has been extended in a number of ways. More realistic thermal deposition profiles have been added. Non-spherical shapes (rods and plates) have been considered, with allowance for their orientation dependence. Random variations have been taken into account. An explicit form for the change of absorptivity with precursor size has been added. A simulation tool called GIDGET has been built to allow adjustment of the many possible parameters in order to fit experimental data of initiation density as a function of fluence and pulse duration. The result is a set of constraints on the possible properties of initiation precursors.
Selection of informative metabolites using random forests based on model population analysis.
Huang, Jian-Hua; Yan, Jun; Wu, Qing-Hua; Duarte Ferro, Miguel; Yi, Lun-Zhao; Lu, Hong-Mei; Xu, Qing-Song; Liang, Yi-Zeng
2013-12-15
One of the main goals of metabolomics studies is to discover informative metabolites or biomarkers, which may be used to diagnose diseases and to elucidate pathology. Sophisticated feature selection approaches are required to extract the information hidden in such complex 'omics' data. In this study, a new and robust selection method is proposed that combines random forests (RF) with model population analysis (MPA) to select informative metabolites from three metabolomic datasets. According to their contribution to classification accuracy, the metabolites were classified into three kinds: informative, non-informative, and interfering metabolites. Based on the proposed method, informative metabolites were selected for the three datasets; further analyses of these metabolites between healthy and diseased groups were then performed, with t-tests showing that the P values for all selected metabolites were below 0.05. Moreover, the informative metabolites identified by the current method were demonstrated to be correlated with the clinical outcome under investigation. The source code of MPA-RF in Matlab can be freely downloaded from http://code.google.com/p/my-research-list/downloads/list. PMID:24209380
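The MPA idea above, judging each variable by the distribution of model performance across many randomly drawn sub-models, can be sketched as follows. A leave-one-out nearest-centroid classifier stands in for the random forests of the paper, and the scoring rule (mean accuracy of models using a variable minus mean accuracy of models not using it) is an illustrative simplification; all data and parameters are hypothetical.

```python
import numpy as np

def mpa_scores(X, y, n_models=200, subset_size=2, seed=0):
    """Model population analysis sketch: fit many models on random
    variable subsets and score each variable as the mean accuracy of
    models that used it minus the mean of models that did not."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    used = np.zeros((n_models, p), dtype=bool)
    acc = np.zeros(n_models)
    for m in range(n_models):
        cols = rng.choice(p, size=subset_size, replace=False)
        used[m, cols] = True
        Xs = X[:, cols]
        correct = 0
        for i in range(n):                    # leave-one-out evaluation
            keep = np.arange(n) != i
            c0 = Xs[keep & (y == 0)].mean(axis=0)
            c1 = Xs[keep & (y == 1)].mean(axis=0)
            pred = int(((Xs[i] - c1) ** 2).sum() < ((Xs[i] - c0) ** 2).sum())
            correct += int(pred == y[i])
        acc[m] = correct / n
    return np.array([acc[used[:, j]].mean() - acc[~used[:, j]].mean()
                     for j in range(p)])
```

Variables with clearly positive scores would be flagged as informative; near-zero scores suggest non-informative, and negative scores interfering, metabolites.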
Polynomial order selection in random regression models via penalizing adaptively the likelihood.
Corrales, J D; Munilla, S; Cantet, R J C
2015-08-01
Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified the true LP order for the additive genetic and permanent environmental effects with probability one, whereas AIC tended to favour over-parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates.
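The AIC/BIC comparison above can be made concrete with ordinary least-squares polynomial fits; the PAL criterion itself is not reproduced here, and the data below are hypothetical.

```python
import numpy as np

def select_poly_order(x, y, max_order):
    """Pick a polynomial order by AIC and BIC, with the Gaussian
    log-likelihood expressed via the residual sum of squares and k
    counting the fitted coefficients."""
    n = len(x)
    aic, bic = {}, {}
    for order in range(max_order + 1):
        coef = np.polyfit(x, y, order)
        rss = float(((np.polyval(coef, x) - y) ** 2).sum())
        k = order + 1
        aic[order] = n * np.log(rss / n) + 2 * k
        bic[order] = n * np.log(rss / n) + k * np.log(n)
    return min(aic, key=aic.get), min(bic, key=bic.get)

# hypothetical data with a true quadratic shape plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(scale=0.5, size=x.size)
aic_order, bic_order = select_poly_order(x, y, max_order=5)
```

With a truly quadratic signal, BIC's heavier log(n) penalty per coefficient makes it less prone than AIC to favouring over-parameterized orders, which is the behaviour the paper reports.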
Balaban, Basak; Yakin, Kayhan; Alatas, Cengiz; Oktem, Ozgur; Isiklar, Aycan; Urman, Bulent
2011-05-01
Recent evidence shows that the selection of spermatozoa based on the analysis of morphology under high magnification (×6000) may have a positive impact on embryo development in cases with severe male factor infertility and/or previous implantation failures. The objective of this prospective randomized study was to compare the clinical outcome of 87 intracytoplasmic morphologically selected sperm injection (IMSI) cycles with 81 conventional intracytoplasmic sperm injection (ICSI) cycles in an unselected infertile population. IMSI did not provide a significant improvement in the clinical outcome compared with ICSI although there were trends for higher implantation (28.9% versus 19.5%), clinical pregnancy (54.0% versus 44.4%) and live birth rates (43.7% versus 38.3%) in the IMSI group. However, severe male factor patients benefited from the IMSI procedure as shown by significantly higher implantation rates compared with their counterparts in the ICSI group (29.6% versus 15.2%, P=0.01). These results suggest that IMSI may improve IVF success rates in a selected group of patients with male factor infertility. New technological developments enable the real time examination of motile spermatozoa with an inverted light microscope equipped with high-power differential interference contrast optics, enhanced by digital imaging. High magnification (over ×6000) provides the identification of spermatozoa with a normal nucleus and nuclear content. Intracytoplasmic injection of spermatozoa selected according to fine nuclear morphology under high magnification may improve the clinical outcome in cases with severe male factor infertility.
Mohr, David C; Spring, Bonnie; Freedland, Kenneth E; Beckner, Victoria; Arean, Patricia; Hollon, Steven D; Ockene, Judith; Kaplan, Robert
2009-01-01
The randomized controlled trial (RCT) provides critical support for evidence-based practice using psychological interventions. The control condition is the principal method of removing the influence of unwanted variables in RCTs. There is little agreement or consistency in the design and construction of control conditions. Because control conditions have variable effects, the results of RCTs can depend as much on control condition selection as on the experimental intervention. The aim of this paper is to present a framework for the selection and design of control conditions for these trials. Threats to internal validity arising from modern RCT methodology are reviewed and reconsidered. The strengths and weaknesses of several categories of control conditions are examined, including the ones that are under experimental control, the ones that are under the control of clinical service providers, and no-treatment controls. Considerations in the selection of control conditions are discussed and several recommendations are proposed. The aim of this paper is to begin to define principles by which control conditions can be selected or developed in a manner that can assist both investigators and grant reviewers.
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
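The linear rate-based theory referred to above can be sketched numerically: for a stable random recurrent weight matrix W and weakly tuned feed-forward input h(theta), the steady-state rates satisfy r = (I - W)^-1 h, from which a per-neuron orientation selectivity index follows. Network size, coupling strength and tuning depth below are hypothetical, and rectification is ignored (a purely linear model).

```python
import numpy as np

rng = np.random.default_rng(1)
n, g = 200, 0.5
# random recurrent weights; spectral radius ~ g < 1, so (I - W) is invertible
W = g * rng.standard_normal((n, n)) / np.sqrt(n)
prefs = rng.uniform(0, np.pi, n)                 # input preferred orientations
thetas = np.linspace(0, np.pi, 8, endpoint=False)
rates = np.empty((len(thetas), n))
for t, th in enumerate(thetas):
    h = 1.0 + 0.2 * np.cos(2.0 * (th - prefs))   # weakly tuned input
    rates[t] = np.linalg.solve(np.eye(n) - W, h) # r = (I - W)^-1 h
# vector-average orientation selectivity index per neuron
z = (rates * np.exp(2j * thetas)[:, None]).sum(axis=0) / rates.sum(axis=0)
osi = np.abs(z)
```

The spread of `osi` across neurons illustrates how a random recurrent network broadens the distribution of selectivity grades even when every neuron receives input of identical tuning depth.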
Stingo, Francesco C.; Vannucci, Marina
2011-01-01
Motivation: Discriminant analysis is an effective tool for the classification of experimental units into groups. Here, we consider the typical problem of classifying subjects according to phenotypes via gene expression data and propose a method that incorporates variable selection into the inferential procedure, for the identification of the important biomarkers. To achieve this goal, we build upon a conjugate normal discriminant model, both linear and quadratic, and include a stochastic search variable selection procedure via an MCMC algorithm. Furthermore, we incorporate into the model prior information on the relationships among the genes as described by a gene–gene network. We use a Markov random field (MRF) prior to map the network connections among genes. Our prior model assumes that neighboring genes in the network are more likely to have a joint effect on the relevant biological processes. Results: We use simulated data to assess performances of our method. In particular, we compare the MRF prior to a situation where independent Bernoulli priors are chosen for the individual predictors. We also illustrate the method on benchmark datasets for gene expression. Our simulation studies show that employing the MRF prior improves on selection accuracy. In real data applications, in addition to identifying markers and improving prediction accuracy, we show how the integration of existing biological knowledge into the prior model results in an increased ability to identify genes with strong discriminatory power and also aids the interpretation of the results. Contact: marina@rice.edu PMID:21159623
Pornography in Usenet: a study of 9,800 randomly selected images.
Mehta, M D
2001-12-01
This paper builds on an earlier study by Mehta and Plaza, from 1997, by analyzing 9,800 randomly selected images taken from 32 Usenet newsgroups between July 1995 and July 1996. The study concludes that an increasing percentage of pornographic images in Usenet come from commercially oriented sources and that commercial sources are more likely to post explicit images. Pornographic images containing themes that fall under most obscenity statutes are more likely to be posted by noncommercial sources. By examining the themes most commonly found in the sample, it is concluded that the vast majority of images contain legally permissible content. Only a small fraction of images contain pedophilic, bestiality, co-prophilic/urophilic, amputation and mutilation, and necrophilic themes. PMID:11800177
Multilabel learning via random label selection for protein subcellular multilocations prediction.
Wang, Xiao; Li, Guo-Zheng
2013-01-01
Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, transforming the multilocation proteins into multiple proteins with a single location, which does not take correlations among different subcellular locations into account. In this paper, a novel method named random label selection (RALS) (multilabel learning via RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public use. PMID:23929867
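The RALS construction, appending randomly selected labels to the input features of each binary-relevance classifier, can be sketched as a data transformation (the base classifiers themselves are omitted; at prediction time the appended label values are unknown and must first be estimated, e.g. by plain BR models). Shapes and names below are hypothetical.

```python
import numpy as np

def rals_augment(X, Y, n_extra, seed=0):
    """RALS-style training-set construction: for each label j, append a
    random subset of the *other* labels to the feature matrix used by
    label j's binary classifier."""
    rng = np.random.default_rng(seed)
    n, q = Y.shape
    datasets = []
    for j in range(q):
        others = np.array([k for k in range(q) if k != j])
        picked = rng.choice(others, size=min(n_extra, others.size),
                            replace=False)
        Xj = np.hstack([X, Y[:, picked]])   # labels become extra features
        datasets.append((Xj, Y[:, j], picked))
    return datasets
```

Each returned triple (augmented features, target label, indices of the appended labels) would train one binary classifier; the random label injection is what lets the ensemble absorb label correlations implicitly.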
Chandonia, John-Marc; Brenner, Steven E.
2004-07-14
The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 of eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage which would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact in structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small families.
A novel, efficient, randomized selection trial comparing combinations of drug therapy for ALS.
Gordon, Paul H; Cheung, Ying-Kuen; Levin, Bruce; Andrews, Howard; Doorish, Carolyn; Macarthur, Robert B; Montes, Jacqueline; Bednarz, Kate; Florence, Julaine; Rowin, Julie; Boylan, Kevin; Mozaffar, Tahseen; Tandan, Rup; Mitsumoto, Hiroshi; Kelvin, Elizabeth A; Chapin, John; Bedlack, Richard; Rivner, Michael; McCluskey, Leo F; Pestronk, Alan; Graves, Michael; Sorenson, Eric J; Barohn, Richard J; Belsh, Jerry M; Lou, Jau-Shin; Levine, Todd; Saperstein, David; Miller, Robert G; Scelsa, Stephen N
2008-08-01
Combining agents with different mechanisms of action may be necessary for meaningful results in treating ALS. The combinations of minocycline-creatine and celecoxib-creatine have additive effects in the murine model. New trial designs are needed to efficiently screen the growing number of potential neuroprotective agents. Our objective was to assess two drug combinations in ALS using a novel phase II trial design. We conducted a randomized, double-blind selection trial in sequential pools of 60 patients. Participants received minocycline (100 mg)-creatine (10 g) twice daily or celecoxib (400 mg)-creatine (10 g) twice daily for six months. The primary objective was treatment selection based on which combination best slowed deterioration in the ALS Functional Rating Scale-Revised (ALSFRS-R); the trial could be stopped after one pool if the difference between the two arms was adequately large. At trial conclusion, each arm was compared to a historical control group in a futility analysis. Safety measures were also examined. After the first patient pool, the mean six-month decline in ALSFRS-R was 5.27 (SD=5.54) in the celecoxib-creatine group and 6.47 (SD=9.14) in the minocycline-creatine group. The corresponding decline was 5.82 (SD=6.77) in the historical controls. The difference between the two sample means exceeded the stopping criterion. The null hypothesis of superiority was not rejected in the futility analysis. Skin rash occurred more frequently in the celecoxib-creatine group. In conclusion, the celecoxib-creatine combination was selected as preferable to the minocycline-creatine combination for further evaluation. This phase II design was efficient, leading to treatment selection after just 60 patients, and can be used in other phase II trials to assess different agents. PMID:18608093
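The selection rule of such a design, prefer whichever arm shows the smaller mean decline and stop early if the gap is large enough, can be sketched in a few lines. `stop_margin` is a hypothetical threshold; the trial's actual stopping criterion is not reproduced here.

```python
def select_treatment(declines_a, declines_b, stop_margin):
    """Selection-design sketch: prefer the arm with the smaller mean
    six-month ALSFRS-R decline, and report whether the observed gap
    exceeds a pre-set early-stopping margin."""
    mean_a = sum(declines_a) / len(declines_a)
    mean_b = sum(declines_b) / len(declines_b)
    winner = "A" if mean_a <= mean_b else "B"
    return winner, abs(mean_a - mean_b) >= stop_margin
```

With the reported arm means (5.27 vs. 6.47 ALSFRS-R points), the observed gap is 1.20 points; whether that triggers early stopping depends on the pre-specified margin.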
Rosenblum, Michael
2014-01-01
It is a challenge to design randomized trials when it is suspected that a treatment may benefit only certain subsets of the target population. In such situations, trial designs have been proposed that modify the population enrolled based on an interim analysis, in a preplanned manner. For example, if there is early evidence during the trial that the treatment only benefits a certain subset of the population, enrollment may then be restricted to this subset. At the end of such a trial, it is desirable to draw inferences about the selected population. We focus on constructing confidence intervals for the average treatment effect in the selected population. Confidence interval methods that fail to account for the adaptive nature of the design may fail to have the desired coverage probability. We provide a new procedure for constructing confidence intervals having at least 95% coverage probability, uniformly over a large class Q of possible data generating distributions. Our method involves computing the minimum factor c by which a standard confidence interval must be expanded in order to have, asymptotically, at least 95% coverage probability, uniformly over Q. Computing the expansion factor c is not trivial, since it is not a priori clear, for a given decision rule, which data generating distribution leads to the worst-case coverage probability. We give an algorithm that computes c, and prove an optimality property for the resulting confidence interval procedure. PMID:23553577
Task-Dependent Band-Selection of Hyperspectral Images by Projection-Based Random Forests
NASA Astrophysics Data System (ADS)
Hänsch, R.; Hellwich, O.
2016-06-01
The automatic classification of land cover types from hyperspectral images is a challenging problem due to (among others) the large number of spectral bands and their high spatial and spectral correlation. The extraction of meaningful features, which enables a subsequent classifier to distinguish between different land cover classes, is often limited to a subset of all available data dimensions found by band selection techniques or other methods of dimensionality reduction. This work applies Projection-Based Random Forests to hyperspectral images, which not only overcome the need for explicit feature extraction, but also provide mechanisms to automatically select spectral bands that contain original (i.e. non-redundant) as well as highly meaningful information for the given classification task. The proposed method is applied to four challenging hyperspectral datasets and it is shown that the effective number of spectral bands can be considerably limited without losing too much classification performance, e.g. a loss of 1% accuracy if roughly 13% of all available bands are used.
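A band-selection scheme of this general kind can be sketched with a Fisher-style class-separability score plus a redundancy filter. This is a hedged stand-in, not the Projection-Based Random Forest mechanism of the paper; the correlation threshold and data are hypothetical.

```python
import numpy as np

def select_bands(X, y, k, redundancy_thresh=0.95):
    """Greedy band selection: rank bands by a Fisher-style class
    separability score and skip bands highly correlated with an
    already-selected band (to favour non-redundant information)."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    score = between / within
    corr = np.abs(np.corrcoef(X.T))
    selected = []
    for b in np.argsort(score)[::-1]:
        if all(corr[b, s] < redundancy_thresh for s in selected):
            selected.append(int(b))
        if len(selected) == k:
            break
    return selected
```

The redundancy filter is what keeps the selected subset from collapsing onto a block of neighbouring, highly correlated bands, mirroring the "original, non-redundant" criterion described above.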
Application of random coherence order selection in gradient-enhanced multidimensional NMR
NASA Astrophysics Data System (ADS)
Bostock, Mark J.; Nietlispach, Daniel
2016-03-01
Development of multidimensional NMR is essential to many applications, for example in high-resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2^(N-1) to the number of free-induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al., 2011, PNAS 108, 16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude-modulated data, the same principle is not easily extended to phase-modulated (P-/N-type) experiments where data is acquired in the form exp(iωt) or exp(-iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase-modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD). With this method, the power of RQD can be extended
Selective episiotomy vs. implementation of a non episiotomy protocol: a randomized clinical trial
2014-01-01
Background World Health Organization (WHO) recommends that the episiotomy rate should be around 10%, which is already a reality in many European countries. Currently the use of episiotomy should be restricted, and physicians are encouraged to use their clinical judgment to decide when the procedure is necessary. There is no clinical evidence corroborating any indication of episiotomy, so it is not yet known whether episiotomy is indeed necessary in any context of obstetric practice. Objectives To compare maternal and perinatal outcomes in women undergoing a protocol of not performing episiotomy versus selective episiotomy. Methods/Design An open-label randomized clinical trial will be conducted including laboring women with term pregnancy, maximum dilation of 8 cm, and a live fetus in cephalic vertex presentation. Women with bleeding disorders of pregnancy, an indication for caesarean section, and those without capacity to consent and without legal guardians will be excluded. Primary outcomes will be frequency of episiotomy, delivery duration, frequency of spontaneous lacerations and perineal trauma, frequency of instrumental delivery, postpartum blood loss, need for perineal suturing, number of sutures, Apgar scores at one and five minutes, need for neonatal resuscitation, and pH in cord blood. As secondary outcomes, frequency of complications of perineal suturing, postpartum perineal pain, maternal satisfaction, neonatal morbidity, and newborn admission to the NICU will be assessed. Women will be invited to participate, and those who agree will sign the consent form and will then be assigned to a protocol of not conducting episiotomy (experimental group) or to a group in which episiotomy is performed selectively according to the judgment of the provider of care at delivery (control group). The present study was approved by IMIP's Research Ethics Committee. Trial Registration The trial was registered in the Clinical Trials Register and in ClinicalTrials.gov.
Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul
2015-01-01
Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis. PMID:25898019
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
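The stratified selection the report describes can be sketched as follows. The data layout (a list of `(site_id, category)` pairs) and the function name are hypothetical, not taken from the original FORTRAN/GIS software:

```python
import random

def select_stratified_sites(sites, n_per_category, seed=1):
    """Randomly draw sampling sites from each category (stratum).

    sites: iterable of (site_id, category) pairs, e.g. potential wells
    tagged with a land-use or hydrogeologic setting.
    n_per_category: mapping category -> number of sites to select.
    """
    rng = random.Random(seed)
    strata = {}
    for site_id, category in sites:
        strata.setdefault(category, []).append(site_id)
    # Sites are selected from one category at a time, as in the report.
    return {cat: rng.sample(sorted(strata[cat]), n)
            for cat, n in n_per_category.items()}

# Hypothetical population of 60 wells tagged by land use:
wells = [(f"W{i:03d}", "urban" if i % 3 == 0 else "rural") for i in range(60)]
picked = select_stratified_sites(wells, {"urban": 5, "rural": 10})
```

Note that different numbers of sites can be drawn from each category, mirroring the software's ability to oversample rare strata.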
Müller, Andreas; Heiden, Barbara; Herbig, Britta; Poppe, Franziska; Angerer, Peter
2016-04-01
This study aimed to develop, implement, and evaluate an occupational health intervention that is based on the theoretical model of selection, optimization, and compensation (SOC). We conducted a stratified randomized controlled intervention with 70 nurses of a community hospital in Germany (94% women; mean age 43.7 years). Altogether, the training consisted of 6 sessions (16.5 hours) over a period of 9 months. The training took place in groups of 6-8 employees. Participants were familiarized with the SOC model and developed and implemented a personal project based on SOC to cope effectively with 1 important job demand or to activate a job resource. Consistent with our hypotheses, we observed a meaningful trend that the proposed SOC training enhanced mental well-being, particularly in employees with a strong commitment to the intervention. While highly committed training participants reported higher levels of job control at follow-up, the effects were not statistically significant. Additional analyses of moderation effects showed that the training is particularly effective at enhancing mental well-being when job control is low. Contrary to our assumptions, perceived work ability was not improved by the training. Our study provides initial indications that SOC training might be a promising approach to occupational health and stress prevention. Moreover, it identifies critical success factors of occupational interventions based on SOC. However, additional studies are needed to corroborate the effectiveness of SOC training in occupational contexts. PMID:26322438
Schueller, Stephen M; Leykin, Yan; Pérez-Stable, Eliseo J; Muñoz, Ricardo F
2013-01-30
To address health problems that have a major impact on global health requires research designs that go beyond randomized controlled trials. One such design, the participant preference trial, provides additional information in an ecologically valid manner, once intervention efficacy has been demonstrated. The current study presents illustrative data from a participant preference trial of an internet-based smoking cessation intervention. Participants (N=7763) from 124 countries accessed the intervention and were allowed to choose from nine different site components to aid their quit attempt. Of consenting participants, 36.7% completed at least one follow-up assessment. Individuals with depression were more likely to choose a mood management module and participants who smoked a higher number of cigarettes were more likely to choose a cigarette counter and a nicotine replacement therapy guide. Furthermore, depressed participants selecting the mood management component were more likely to report at least one successful 7 day quit (37.2% vs. 22.2%) in the 12 months following the intervention. Thus, participants with depressive symptoms appear to make choices on the basis of their needs and to benefit from these decisions. This suggests that providing the ability to customize previously validated resources may be a successful way to widely disseminate interventions.
Prediction of Protein Cleavage Site with Feature Selection by Random Forest
Li, Bi-Qing; Cai, Yu-Dong; Feng, Kai-Yan; Zhao, Gui-Jun
2012-01-01
Proteinases play critical roles in both intra- and extracellular processes by binding and cleaving their protein substrates. The cleavage can either be non-specific, as part of degradation during protein catabolism, or highly specific, as part of proteolytic cascades and signal transduction events. Identification of these targets is extremely challenging. Current computational approaches for predicting cleavage sites are very limited, since they mainly represent the amino acid sequences as patterns or frequency matrices. In this work, we developed a novel predictor based on the Random Forest algorithm (RF) using the maximum relevance minimum redundancy (mRMR) method followed by incremental feature selection (IFS). The features of physicochemical/biochemical properties, sequence conservation, residual disorder, amino acid occurrence frequency, secondary structure and solvent accessibility were utilized to represent the peptides concerned. We compared our method with existing tools available for predicting possible cleavage sites in candidate substrates. It is shown that our method makes much more reliable predictions in terms of the overall prediction accuracy. In addition, this predictor allows the use of a wide range of proteinases. PMID:23029276
The Kilkenny Health Project: food and nutrient intakes in randomly selected healthy adults.
Gibney, M J; Moloney, M; Shelley, E
1989-03-01
1. Sixty healthy subjects aged 35-44 years (thirty men and thirty women) were randomly selected from electoral registers to participate in a dietary survey using the 7 d weighed-intake method during June-August 1985. 2. Energy intake (MJ/d) was 12.5 for men and 8.4 for women. Fat contributed 36.0 and 39.1% of the total energy intake of men and women respectively. When this was adjusted to exclude energy derived from alcoholic beverages, the corresponding values were 38.8 and 39.7% respectively. The major sources of dietary fat (%) were spreadable fats (28), meat (23), milk (12) and biscuits and cakes (11). 3. The subjects were divided into low- and high-fat groups both on the relative intake of fat (less than 35% or greater than 40% dietary energy from fat) and on the absolute intake of fat (greater than or less than 120 g fat/d). By either criterion, high-fat consumers had lower than average intakes of low-fat, high-carbohydrate foods such as potatoes, bread, fruit and table sugar, and higher intakes of milk, butter and confectionery products. Meat intake was higher among high-fat eaters only when a high-fat diet was defined as a percentage of energy. PMID:2706219
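The two definitions of a high-fat diet used above (percentage of energy vs. grams per day) can be related through the standard energy conversion factor for fat (about 37 kJ/g). This arithmetic sketch is illustrative only and is not part of the survey:

```python
FAT_KJ_PER_G = 37.0  # standard energy conversion factor for fat

def pct_energy_from_fat(fat_g_per_day, energy_mj_per_day):
    """Percentage of total daily energy intake contributed by fat."""
    return 100.0 * fat_g_per_day * FAT_KJ_PER_G / (energy_mj_per_day * 1000.0)

# At the men's mean intake of 12.5 MJ/d, the 120 g/d absolute cutoff
# corresponds roughly to the 35%-of-energy relative cutoff:
print(round(pct_energy_from_fat(120, 12.5), 1))  # 35.5
```

This shows why the two cutoffs classify similar groups at the men's mean intake but can diverge for individuals with lower or higher total energy intakes.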
A compilation of partial sequences of randomly selected cDNA clones from the rat incisor.
Matsuki, Y; Nakashima, M; Amizuka, N; Warshawsky, H; Goltzman, D; Yamada, K M; Yamada, Y
1995-01-01
The formation of tooth organs is regulated by a series of developmental programs. We have initiated a genome project with the ultimate goal of identifying novel genes important for tooth development. As an initial approach, we constructed a unidirectional cDNA library from the non-calcified portion of incisors of 3- to 4-week-old rats, sequenced cDNA clones, and classified their sequences by homology search through the GenBank data base and the PIR protein data base. Here, we report partial DNA sequences obtained by automated DNA sequencing on 400 cDNA clones randomly selected from the library. Of the sequences determined, 51% represented sequences of new genes that were not related to any previously reported gene. Twenty-six percent of the clones strongly matched genes and proteins in the data bases, including amelogenin, alpha 1(I) and alpha 2(I) collagen chains, osteonectin, and decorin. Nine percent of clones revealed partial sequence homology to known genes such as transcription factors and cell surface receptors. A significant number of the previously identified genes were expressed redundantly and were found to encode extracellular matrix proteins. Identification and cataloging of cDNA clones in these tissues are the first step toward identification of markers expressed in a tissue- or stage-specific manner, as well as the genetic linkage study of tooth anomalies. Further characterization of the clones described in this paper should lead to the discovery of novel genes important for tooth development. PMID:7876422
Lloyd, Mark C; Cunningham, Jessica J; Bui, Marilyn M; Gillies, Robert J; Brown, Joel S; Gatenby, Robert A
2016-06-01
that at least some of the molecular heterogeneity in cancer cells in tumors is governed by predictable regional variations in environmental selection forces, arguing against the assumption that cancer cells can evolve toward a local fitness maximum by random accumulation of mutations. Cancer Res; 76(11); 3136-44. ©2016 AACR. PMID:27009166
ERIC Educational Resources Information Center
Norris, Susan L.; Moher, David; Reeves, Barnaby C.; Shea, Beverley; Loke, Yoon; Garner, Sarah; Anderson, Laurie; Tugwell, Peter; Wells, George
2013-01-01
Background: Selective outcome and analysis reporting (SOR and SAR) occur when only a subset of outcomes measured and analyzed in a study is fully reported, and are an important source of potential bias. Key methodological issues: We describe what is known about the prevalence and effects of SOR and SAR in both randomized controlled trials (RCTs)…
ERIC Educational Resources Information Center
Cullen, Julie Berry; Jacob, Brian A.
2007-01-01
In this paper, we examine whether expanded access to sought-after schools can improve academic achievement. The setting we study is the "open enrollment" system in the Chicago Public Schools (CPS). We use lottery data to avoid the critical issue of non-random selection of students into schools. Our analysis sample includes nearly 450 lotteries for…
ERIC Educational Resources Information Center
Thomas, Henry B.; Kaplan, E. Joseph
A national survey was conducted of randomly selected chief student personnel officers as listed in the 1979 "Education Directory of Colleges and Universities." The survey addressed specific institutional demographics, policy-making authority, reporting structure, and areas of responsibility of the administrators. Over 93 percent of the respondents…
Chiorini, J A; Yang, L; Safer, B; Kotin, R M
1995-01-01
To further define the canonical binding site for the P5-promoted Rep proteins of the adeno-associated virus, a modified random oligonucleotide selection procedure was performed, using purified recombinant Rep protein. These results may explain the effects of Rep on cellular gene expression. PMID:7474165
Addison, Nigel; Burgess, Gary
2002-03-01
Manual handling activities have long been recognized as major contributors to occupational injury and ill health. Following a series of consultative documents, the Manual Handling Operations Regulations (MHORs) and their associated Guidance came into force in the UK in January 1993. More than 5 yr on, an investigation was performed to evaluate response to this legislation amongst a random selection of small businesses within a business district of Shropshire, England. A postal questionnaire was sent to 100 companies employing 5-50 workers. Responses were obtained from 80 companies, ranging from retailing to metals/engineering. Although all of the companies are likely to perform activities requiring manual handling assessments under the MHORs, many claimed never to have heard of the legislation (38%) and almost half (46%) had not performed an assessment. Compliance varied significantly by business type, with the nine companies engaged in metals/engineering reporting significantly better compliance (P < 0.05). Of the 43 companies who claimed to have undertaken assessments, 73% (32 companies) indicated that manual handling activities had been changed, with all claiming to have reduced lifting activities, a significant proportion (75%) making improvements to the working environment and over half (59%) reducing the weight of loads. Many of the companies who indicated full compliance with the legislation (21 companies) stated that the benefits to their businesses outweighed the cost of compliance. Additional factors analysed by the study include source of legislative awareness, personnel performing assessments, employee training and method used, and reasons for non-compliance with the MHORs. PMID:12074024
NASA Astrophysics Data System (ADS)
Kovaleva, I.; Kovalev, O.; Smurov, I.
A discrete grid model of heat transfer in a granular porous medium is developed to describe the processes of selective laser melting of powders. Thermal conduction in this medium takes place through the contact surfaces between the particles. A calculation method for the morphology of a randomly packed powder layer that accounts for adhesive interaction between the particles is proposed. The internal structure of the resulting loose powder layer is a granular medium in which spherical particles of different sizes are arranged in contact with each other at random. Analytical models of the powder balling process and of the formation of the remelted track are proposed.
Sakkinen, P A; Severson, R K; Ross, J A; Robison, L L
1995-01-01
The purpose of this analysis was to evaluate the degree of matching in 95 individually matched pairs from a case-control study of childhood leukemia that used random-digit dialing to select control subjects. Both geographic proximity (of each case subject to his or her matched control subject) and differences in socioeconomic status were evaluated. The median distance between matched pairs was 3.2 km. There were no significant differences in distance between matched pairs by urban/rural status and geographic location. For studies of childhood cancer drawn from pediatric referral centers, random-digit dialing appears to provide a suitable control group. PMID:7702122
NASA Astrophysics Data System (ADS)
Goury, Olivier; Amsallem, David; Bordas, Stéphane Pierre Alain; Liu, Wing Kam; Kerfriden, Pierre
2016-08-01
In this paper, we present new reliable model order reduction strategies for computational micromechanics. The difficulties lie mainly in the high dimensionality of the parameter space represented by any load path applied onto the representative volume element. We take special care of the challenge of selecting an exhaustive snapshot set. This is treated by first using a random sampling of energy-dissipating load paths and then, in a more advanced way, using Bayesian optimization associated with an interlocked division of the parameter space. Results show that we can ensure the selection of an exhaustive snapshot set from which a reliable reduced-order model can be built.
Wang, Yongping; Speier, Jacqueline S; Engram-Pearl, Jessica; Wilson, Robert B
2014-01-01
RNA interference (RNAi) is a mechanism for interfering with gene expression through the action of small, non-coding RNAs. We previously constructed a short-hairpin-loop RNA (shRNA) encoding library that is random at the nucleotide level [1]. In this library, the stems of the hairpin are completely complementary. To improve the potency of initial hits, and therefore signal-to-noise ratios in library screening, as well as to simplify hit-sequence retrieval by PCR, we constructed a second-generation library in which we introduced random mismatches between the two halves of the stem of each hairpin, on a random template background. In a screen for shRNAs that protect an interleukin-3 (IL3) dependent cell line from IL3 withdrawal, our second-generation library yielded hit sequences with significantly higher potencies than those from the first-generation library in the same screen. Our method of random mutagenesis was effective for a random template and is likely suitable, therefore, for any DNA template of interest. The improved potency of our second-generation library expands the range of possible unbiased screens for small-RNA therapeutics and biologic tools.
The basic science and mathematics of random mutation and natural selection.
Kleinman, Alan
2014-12-20
The mutation and natural selection phenomenon can and often does cause the failure of antimicrobial, herbicide, pesticide, and cancer treatments as selection pressures. This phenomenon operates in a mathematically predictable manner, which, when understood, leads to approaches for reducing and preventing the failure of these selection pressures. The mathematical behavior of mutation and selection is derived using the principles given by probability theory. The derivation of the equations describing the mutation and selection phenomenon is carried out in the context of an empirical example.
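One elementary piece of that probabilistic behavior is the chance that at least one copy of a particular mutation appears among n independent replications, given a per-replication mutation rate μ. The function name below is ours, and the independence assumption is ours as well; this is a sketch of the standard calculation, not the paper's derivation:

```python
def p_at_least_one_mutation(mu, n):
    """P(at least one occurrence of a specific mutation in n replications),
    assuming independent replications with per-replication rate mu."""
    return 1.0 - (1.0 - mu) ** n

# With mu = 1e-9, roughly a billion replications are needed before the
# mutation becomes more likely than not (the limit is 1 - 1/e ~ 0.632):
p = p_at_least_one_mutation(1e-9, 1_000_000_000)
```

Calculations of this kind underpin strategies such as combination therapy: requiring two independent mutations multiplies the per-replication rates, pushing the required population size far beyond what a treated infection or tumor can supply.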
Leandri, R D; Gachet, A; Pfeffer, J; Celebi, C; Rives, N; Carre-Pigeon, F; Kulski, O; Mitchell, V; Parinaud, J
2013-09-01
Intracytoplasmic morphologically selected sperm injection (IMSI), by selecting spermatozoa at high magnification, improves the outcome of intracytoplasmic sperm injection (ICSI), mainly after several failures. However, only a few monocentric randomized studies are available, and they do not analyse results as a function of sperm characteristics. In 255 couples attempting their first assisted reproductive technology (ART) attempt for male infertility (motile sperm count <1×10⁶ after sperm selection, but at least 3×10⁶ spermatozoa per ejaculate to allow a detailed analysis of sperm characteristics), a prospective randomized trial was performed to compare the clinical outcomes of IMSI and ICSI and to evaluate the influence of sperm characteristics on these outcomes. IMSI did not provide any significant improvement in clinical outcomes compared with ICSI, whether for implantation (24% vs. 23%), clinical pregnancy (31% vs. 33%) or live birth rates (27% vs. 30%). Moreover, the results of IMSI were similar to those of ICSI whatever the degree of sperm DNA fragmentation, nuclear immaturity and sperm morphology. These results show that IMSI instead of ICSI has no advantage in first ART attempts. However, this does not rule out IMSI completely, and more randomized trials must be performed, especially regarding patients with severe teratozoospermia, high sperm DNA fragmentation levels or previous ICSI failures.
Selecting a Sample for Your Experiment: A Non-Random Stratified Sampling Approach
ERIC Educational Resources Information Center
Tipton, Elizabeth
2012-01-01
The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…
Prevalence and Severity of College Student Bereavement Examined in a Randomly Selected Sample
ERIC Educational Resources Information Center
Balk, David E.; Walker, Andrea C.; Baker, Ardith
2010-01-01
The authors used stratified random sampling to assess the prevalence and severity of bereavement in college undergraduates, providing an advance over findings that emerge from convenience sampling methods or from anecdotal observations. Prior research using convenience sampling indicated that 22% to 30% of college students are within 12 months of…
ERIC Educational Resources Information Center
Carter, Ashley J. R.
2002-01-01
Presents a hands-on activity on the phenomenon of genetic drift in populations that reinforces the random nature of drift and demonstrates the effect of the population size on the mean frequency of an allele over a few generations. Includes materials for the demonstration, procedures, and discussion topics. (KHR)
Espinosa, Avelina; Bai, Chunyan Y.
2016-01-01
Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke “design creationism” to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane's hydrophobic/philic nature; a selective “pore” for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the “jackprot,” which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the “jackprot,” or highest-fitness complete-peptide sequence, required cumulative smaller “wins” (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons (“jackdons” that led to “jackacids” that led to the “jackprot”). The “jackprot” is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide
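The core idea of the "jackprot", cumulative retention of partial wins versus a single all-or-nothing draw, can be illustrated with a toy simulation. This is not the authors' JAVA applet; the function name, target sequence, and mutation scheme are our illustrative assumptions:

```python
import random

def generations_to_target(target, alphabet="ACGT", seed=3):
    """Generations needed to reach `target` when selection retains every
    matching position and only mismatched positions keep mutating."""
    rng = random.Random(seed)
    seq = [rng.choice(alphabet) for _ in target]  # random starting sequence
    generations = 0
    while seq != list(target):
        # Selection "locks in" matches; mismatches are redrawn at random.
        seq = [s if s == t else rng.choice(alphabet)
               for s, t in zip(seq, target)]
        generations += 1
    return generations

# Cumulative selection reaches a 12-base target in a handful of
# generations, while a pure random draw of whole sequences would need
# about 4**12 (~16.8 million) attempts on average.
gens = generations_to_target("GGTACCTTAACG")
```

The contrast between `gens` and 4**12 is the didactic point of the "jackprot": mutation plus selection is vastly faster than mutation alone.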
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam
2005-01-01
An investigation of the effect of basis selection on geometric nonlinear response prediction using a reduced-order nonlinear modal simulation is presented. The accuracy is dictated by the selection of the basis used to determine the nonlinear modal stiffness. This study considers a suite of available bases including bending modes only, bending and membrane modes, coupled bending and companion modes, and uncoupled bending and companion modes. The nonlinear modal simulation presented is broadly applicable and is demonstrated for nonlinear quasi-static and random acoustic response of flat beam and plate structures with isotropic material properties. Reduced-order analysis predictions are compared with those made using a numerical simulation in physical degrees-of-freedom to quantify the error associated with the selected modal bases. Bending and membrane responses are separately presented to help differentiate the bases.
NASA Astrophysics Data System (ADS)
Ma, Jun; Xu, Ying; Wang, Chunni; Jin, Wuyin
2016-11-01
Regular spatial patterns can be observed in spatiotemporal systems far from equilibrium. Artificial networks with different topologies are often designed to reproduce the collective behaviors of nodes (or neurons), in which the local kinetics of each node are described by various oscillator models. It is believed that the self-organization of a network depends largely on the bifurcation parameters and the topology of connections. Indeed, the boundary effect is very important for pattern formation in a network. In this paper, a regular network of Hindmarsh-Rose neurons is designed in a two-dimensional square array with nearest-neighbor connections. The neurons on the boundary are excited with random stimuli. It is found that spiral waves, even a pair of spiral waves, can develop in the network under appropriate coupling intensity. Otherwise, the spatial distribution of the network shows irregular states. A statistical variable is defined, using mean-field theory, to detect the collective behavior. It is confirmed that a regular pattern can develop when the synchronization degree is low. The potential mechanism could be that random perturbation on the boundary induces coherence resonance-like behavior, and thus a spiral wave can develop in the network.
Instrument selection for randomized controlled trials: why this and not that?
Records, Kathie; Keller, Colleen; Ainsworth, Barbara; Permana, Paska
2012-01-01
A fundamental linchpin for obtaining rigorous findings in quantitative research involves the selection of survey instruments. Psychometric recommendations are available for the processes for scale development and testing and guidance for selection of established scales. These processes are necessary to address the validity link between the phenomena under investigation, the empirical measures and, ultimately, the theoretical ties between these and the world views of the participants. Detailed information is most often provided about study design and protocols, but far less frequently is a detailed theoretical explanation provided for why specific instruments are chosen. Guidance to inform choices is often difficult to find when scales are needed for specific cultural, ethnic, or racial groups. This paper details the rationale underlying instrument selection for measurement of the major processes (intervention, mediator and moderator variables, outcome variables) in an ongoing study of postpartum Latinas, Madres para la Salud [Mothers for Health]. The rationale underpinning our choices includes a discussion of alternatives, when appropriate. These exemplars may provide direction for other intervention researchers who are working with specific cultural, racial, or ethnic groups or for other investigators who are seeking to select the 'best' instrument. Thoughtful consideration of measurement and articulation of the rationale underlying our choices facilitates the maintenance of rigor within the study design and improves our ability to assess study outcomes.
Huang, J; Blackwell, T K; Kedes, L; Weintraub, H
1996-01-01
A method has been developed for selecting functional enhancer/promoter sites from random DNA sequences in higher eukaryotic cells. Of sequences that were thus selected for transcriptional activation by the muscle-specific basic helix-loop-helix protein MyoD, only a subset are similar to the preferred in vitro binding consensus, and in the same promoter context an optimal in vitro binding site was inactive. Other sequences with full transcriptional activity instead exhibit sequence preferences that, remarkably, are generally either identical or very similar to those found in naturally occurring muscle-specific promoters. This first systematic examination of the relation between DNA binding and transcriptional activation by basic helix-loop-helix proteins indicates that binding per se is necessary but not sufficient for transcriptional activation by MyoD and implies a requirement for other DNA sequence-dependent interactions or conformations at its binding site. PMID:8668207
Code to generate random identifiers and select QA/QC samples
Mehnert, Edward
1992-01-01
SAMPLID is a PC-based, FORTRAN-77 code which generates unique numbers for identification of samples, selection of QA/QC samples, and generation of labels. These procedures are tedious, but using a computer code such as SAMPLID can increase efficiency and reduce or eliminate errors and bias. The algorithm, used in SAMPLID, for generation of pseudorandom numbers is free of statistical flaws present in commonly available algorithms.
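The two sampling steps that SAMPLID automates can be sketched in Python (an illustrative sketch, not the original FORTRAN-77 code; all function names and parameters here are assumptions, and Python's Mersenne Twister generator avoids the statistical flaws of naive linear congruential generators):

```python
import random

def generate_sample_ids(n_samples, id_range=10**6, seed=None):
    """Draw n_samples unique pseudorandom identifiers without replacement."""
    rng = random.Random(seed)
    # random.sample guarantees uniqueness, so no duplicate labels can occur
    return rng.sample(range(1, id_range + 1), n_samples)

def select_qaqc(sample_ids, fraction=0.1, seed=None):
    """Randomly flag a fraction of samples for QA/QC, without bias."""
    rng = random.Random(seed)
    k = max(1, round(len(sample_ids) * fraction))
    return sorted(rng.sample(sample_ids, k))

ids = generate_sample_ids(50, seed=42)
qaqc = select_qaqc(ids, fraction=0.1, seed=42)
```

Because both steps draw without replacement from a seeded generator, the label set is reproducible and free of duplicates, which is the error-prevention property the abstract emphasizes.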
Engen, Steinar; Sæther, Bernt-Erik
2016-06-01
Here we analyze how dispersal, genetic drift, and adaptation to the local environment affect the geographical differentiation of a quantitative character through natural selection, using a spatial dynamic model for the evolution of the distribution of mean breeding values in space and time. The variation in optimal phenotype is described by local Ornstein-Uhlenbeck processes with a given spatial autocorrelation. Selection and drift are assumed to be governed by phenotypic variation within areas with a given mean breeding value and constant additive genetic variance. Between such neighboring areas there will be white-noise variation in mean breeding values, while the variation at larger distances has a spatial structure and a spatial scale that we investigate. The model is analyzed by solving balance equations for the stationary distribution of mean breeding values. We also present scaling results for the spatial autocovariance function for mean breeding values, as well as for the covariance between mean breeding value and the optimal phenotype expressing local adaptation. Our results show in particular how these spatial scales depend on population density. For large densities, the spatial scale of fluctuations in mean breeding values has similarities with corresponding results in population dynamics, where the effect of migration on spatial scales may be large if the local strength of density regulation is small. In our evolutionary model, the strength of density regulation corresponds to the strength of local selection, so that weak local selection may produce large spatial scales of autocovariances. Genetic drift and stochastic migration are shown to act through the population size within a characteristic area with much smaller variation in optimal phenotypes than in the whole population.
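The mean-reverting optimum dynamics named above can be illustrated with a minimal discretized Ornstein-Uhlenbeck simulation (a generic sketch, not the authors' spatial model; the function name and all parameter values are illustrative):

```python
import math
import random

def simulate_ou(x0, mu, a, sigma, dt, steps, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process:
    dx = -a*(x - mu)*dt + sigma*dW, mean-reverting toward the optimum mu."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        # deterministic pull toward mu plus Gaussian environmental noise
        x += -a * (x - mu) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Started far from the optimum, the process reverts toward mu = 0
# and then fluctuates around it with stationary s.d. sigma/sqrt(2a).
path = simulate_ou(x0=10.0, mu=0.0, a=1.0, sigma=0.01, dt=0.01, steps=2000)
```

Spatially autocorrelated versions, as in the abstract, would couple many such processes across neighboring areas.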
Dolan, C V; Boomsma, D I
1998-05-01
Percentages of extremely concordant and extremely discordant sib pairs are calculated that maximize the power to detect a quantitative trait locus (QTL) under a variety of circumstances using the EDAC test. We assume a large fixed number of randomly sampled sib pairs, such as one would hope to find in the large twin registries, and limited resources to genotype a certain number of selected sib pairs. Our aim is to investigate whether optimal selection can be achieved when prior knowledge concerning the QTL gene action, QTL allele frequency, QTL effect size, and background (residual) sib correlation is limited or absent. To this end we calculate the best selection percentages for a large number of models, which differ in QTL gene action, allele frequency, background correlation, and QTL effect size. By averaging these percentages over gene action and over allele frequencies, we arrive at general recommendations concerning selection percentages. The soundness of these recommendations is subsequently examined in a number of test cases. PMID:9670595
Kim, Min-Soo; Lee, Jong Seok; Nam, Sang Beom; Kang, Hyo Jong; Kim, Ji Eun
2015-08-01
Size selection of the laryngeal mask airway (LMA) Classic based on actual body weight remains a common practice. However, ideal body weight might allow for a better size selection in obese patients. The purpose of our study was to compare the utility of ideal body weight and actual body weight when choosing the appropriate size of the LMA Classic in a randomized clinical trial. One hundred patients aged 20 to 70 yr, with body mass index ≥25 kg/m² and a difference between the LMA sizes indicated by actual and ideal body weight, were allocated to have the LMA Classic inserted using either actual body weight or ideal body weight in a weight-based formula for size selection. After insertion of the device, several variables including insertion parameters, sealing function, fiberoptic imaging, and complications were investigated. The insertion success rate at the first attempt was lower in the actual weight group (82%) than in the ideal weight group (96%), although the difference was not statistically significant. The ideal weight group had significantly shorter insertion time and easier placement. However, fiberoptic views were significantly better in the actual weight group. Intraoperative complications, sore throat in the recovery room, and dysphonia at postoperative 24 hr occurred significantly less often in the ideal weight group than in the actual weight group. These results suggest that ideal body weight may be beneficial for size selection of the LMA Classic in overweight patients (Clinical Trial Registry, NCT 01843270).
Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks
Huang, Rimao; Qiu, Xuesong; Rui, Lanlan
2011-01-01
Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, assigning the fault detection task only to the manager node puts the whole network out of balance and quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
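The two ideas in this abstract, equal-probability probe-station selection and dynamic probing frequency, can be sketched as follows (an illustrative sketch only; the halving/doubling rule, bounds, and all names are assumptions, not the paper's exact algorithm):

```python
import random

def choose_probe_stations(cluster_heads, k, seed=None):
    """Simple random sampling: every candidate node has an equal
    chance of being selected as a probe station."""
    rng = random.Random(seed)
    return rng.sample(cluster_heads, k)

def adjust_interval(interval, faults_found, min_s=5, max_s=300):
    """Illustrative dynamic rule: probe less often when the network is
    quiet, more often when faults are being detected."""
    if faults_found:
        return max(min_s, interval // 2)   # faults seen: probe more often
    return min(max_s, interval * 2)        # quiet network: back off

heads = [f"node{i}" for i in range(20)]
probes = choose_probe_stations(heads, 4, seed=1)
```

Rotating the sampled probe set each round spreads the detection workload across nodes, which is the load-balancing benefit the paper claims over a static manager-node scheme.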
Benefits of Selected Physical Exercise Programs in Detention: A Randomized Controlled Study
Battaglia, Claudia; di Cagno, Alessandra; Fiorilli, Giovanni; Giombini, Arrigo; Fagnani, Federica; Borrione, Paolo; Marchetti, Marco; Pigozzi, Fabio
2013-01-01
The aim of the study was to determine which kind of physical activity could be useful to inmate populations to improve their health status and fitness levels. A repeated measure design was used to evaluate the effects of two different training protocols on subjects in a state of detention, tested pre- and post-experimental protocol. Seventy-five male subjects were enrolled in the study and randomly allocated to three groups: the cardiovascular plus resistance training protocol group (CRT) (n = 25; mean age 30.9 ± 8.9 years), the high-intensity strength training protocol group (HIST) (n = 25; mean age 33.9 ± 6.8 years), and a control group (C) (n = 25; mean age 32.9 ± 8.9 years) receiving no treatment. All subjects underwent a clinical assessment and fitness tests. MANOVA revealed significant multivariate effects of group (p < 0.01) and group-training interaction (p < 0.05). The CRT protocol proved the most effective at achieving the best outcome in fitness tests. Both the CRT and HIST protocols produced significant gains in the functional capacity (cardio-respiratory capacity and cardiovascular disease risk decrease) of incarcerated males. The significant gains obtained in functional capacity reflect the great potential of supervised exercise interventions for improving the health status of incarcerated people. PMID:24185842
Berger, Vance W
2015-08-01
Recently a great deal of attention has been paid to conflicts of interest in medical research, and the Institute of Medicine has called for more research into this important area. One research question that has not received sufficient attention concerns the mechanisms of action by which conflicts of interest can result in biased and/or flawed research. What discretion do conflicted researchers have to sway the results one way or the other? We address this issue from the perspective of selective inertia, or an unnatural selection of research methods based on which are most likely to establish the preferred conclusions, rather than on which are most valid. In many cases it is abundantly clear that a method that is not being used in practice is superior to the one that is being used in practice, at least from the perspective of validity, and that it is only inertia, as opposed to any serious suggestion that the incumbent method is superior (or even comparable), that keeps the inferior procedure in use, to the exclusion of the superior one. By focusing on these flawed research methods we can go beyond statements of potential harm from real conflicts of interest, and can more directly assess actual (not potential) harm.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Przekop, Adam
2004-01-01
The goal of this investigation is to further develop nonlinear modal numerical simulation methods for prediction of geometrically nonlinear response due to combined thermal-acoustic loadings. As with any such method, the accuracy of the solution is dictated by the selection of the modal basis, through which the nonlinear modal stiffness is determined. In this study, a suite of available bases is considered, including (i) bending modes only; (ii) coupled bending and companion modes; (iii) uncoupled bending and companion modes; and (iv) bending and membrane modes. Comparison of these solutions with numerical simulation in physical degrees-of-freedom indicates that inclusion of any membrane mode variants (ii - iv) in the basis affects the bending displacement and stress response predictions. The most significant effect is on the membrane displacement, where it is shown that only the type (iv) basis accurately predicts its behavior. Results are presented for beam and plate structures in the thermally pre-buckled regime.
2013-01-01
will use the ‘multiple comparisons with the best’ approach for data analyses, as this strategy allows practical considerations of ease of use and tolerability to guide selection of the preparation for future studies. Discussion Data from this protocol will be used to develop a randomized, controlled trial of nitrates to prevent osteoporotic fractures. Trial registration ClinicalTrials.gov Identifier: NCT01387672. Controlled-Trials.com: ISRCTN08860742. PMID:24010992
Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi
2016-01-01
The loss, fragmentation and degradation of habitat everywhere on Earth prompt increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP), which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movement (when θ approaches infinity, RSP is equivalent to LCP) and those assuming random walks (when θ → 0, RSP → current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP
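The θ trade-off can be illustrated with a single-step Boltzmann weighting over edge costs (a deliberately simplified sketch of the exploration/exploitation dial, not the full randomized shortest path algorithm, which operates on entire paths; names and costs are illustrative):

```python
import math

def step_probabilities(edge_costs, theta):
    """Boltzmann weighting over outgoing edges: theta = 0 gives a uniform
    random walk, large theta concentrates probability on the cheapest edge."""
    weights = {e: math.exp(-theta * c) for e, c in edge_costs.items()}
    total = sum(weights.values())
    return {e: w / total for e, w in weights.items()}

costs = {"cheap": 1.0, "dear": 10.0}
uniform = step_probabilities(costs, 0.0)   # random-walk limit
greedy = step_probabilities(costs, 5.0)    # near least-cost limit
```

Sweeping theta between these limits and comparing against observed movements is, in spirit, how an intermediate value of θ is identified as the best fit.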
2013-01-01
Background One of the main topics in the development of quantitative structure-property relationship (QSPR) predictive models is the identification of the subset of variables that represent the structure of a molecule and are predictors for a given property. There are several automated feature selection methods, ranging from backward, forward or stepwise procedures to more elaborate methodologies such as evolutionary programming. The problem lies in selecting the minimum subset of descriptors that can predict a certain property with good performance, in a computationally efficient and more robust way, since the presence of irrelevant or redundant features can cause poor generalization capacity. In this paper an alternative selection method, based on Random Forests to determine variable importance, is proposed in the context of QSPR regression problems, with an application to a manually curated dataset for predicting standard enthalpy of formation. The subsequent predictive models are trained with support vector machines, introducing the variables sequentially from a ranked list based on the variable importance. Results The model generalizes well even with a high-dimensional dataset and in the presence of highly correlated variables. The feature selection step was shown to yield lower prediction errors, with RMSE values 23% lower than without feature selection, albeit using only 6% of the total number of variables (89 from the original 1485). The proposed approach further compared favourably with other feature selection methods and with dimension reduction of the feature space. The predictive model was selected using a 10-fold cross validation procedure and, after selection, it was validated with an independent set to assess its performance when applied to new data; the results were similar to the ones obtained for the training set, supporting the robustness of the proposed approach. Conclusions The proposed methodology seemingly improves the prediction
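The ranked-list step of this workflow can be sketched as follows (a toy stand-in only: absolute Pearson correlation replaces Random Forest variable importance, the SVM training stage is omitted, and all names and data are illustrative):

```python
import statistics

def correlation_importance(X, y):
    """Rank descriptors by a stand-in importance score: absolute Pearson
    correlation with the target (the paper uses RF importance instead)."""
    my = statistics.mean(y)
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    scores = {}
    for name, col in X.items():
        mx = statistics.mean(col)
        sx = sum((a - mx) ** 2 for a in col) ** 0.5
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        scores[name] = abs(cov / (sx * sy)) if sx and sy else 0.0
    # descriptors ordered from most to least important
    return sorted(scores, key=scores.get, reverse=True)

# Toy descriptors: x1 drives the property, x2 is weakly related, x3 is not.
x1 = list(range(20))
X = {"x1": x1, "x2": [v % 5 for v in x1], "x3": [(-1) ** v for v in x1]}
y = [2 * v + (v % 3) for v in x1]
ranking = correlation_importance(X, y)
```

In the paper's scheme, variables would then be added to the model one at a time in this ranked order, keeping the smallest subset whose cross-validated error stops improving.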
Barcella, William; Iorio, Maria De; Baio, Gianluca; Malone-Lee, James
2016-04-15
Lower urinary tract symptoms can indicate the presence of urinary tract infection (UTI), a condition that, if it becomes chronic, requires expensive and time-consuming care and leads to reduced quality of life. Detecting the presence and gravity of an infection from the earliest symptoms is therefore highly valuable. Typically, the white blood cell (WBC) count measured in a sample of urine is used to assess UTI. We consider clinical data from 1341 patients at their first visit in which UTI (i.e. WBC ≥ 1) was diagnosed. In addition, for each patient, a clinical profile of 34 symptoms was recorded. In this paper, we propose a Bayesian nonparametric regression model based on the Dirichlet process prior aimed at providing the clinicians with a meaningful clustering of the patients based on both the WBC (response variable) and possible patterns within the symptom profiles (covariates). This is achieved by assuming a probability model for the symptoms as well as for the response variable. To identify the symptoms most associated with UTI, we specify a spike and slab base measure for the regression coefficients: this induces dependence of symptom selection on cluster assignment. Posterior inference is performed through Markov Chain Monte Carlo methods. PMID:26536840
Yamada, Ryosuke; Higo, Tatsutoshi; Yoshikawa, Chisa; China, Hideyasu; Yasuda, Masahiro; Ogino, Hiroyasu
2015-01-01
Haloperoxidases are useful oxygenases involved in halogenation of a range of water-insoluble organic compounds and can be used without additional high-cost cofactors. In particular, organic solvent-stable haloperoxidases are desirable for enzymatic halogenations in the presence of organic solvents. In this study, we adopted a directed evolution approach by error-prone polymerase chain reaction to improve the organic solvent-stability of the homodimeric BPO-A1 haloperoxidase from Streptomyces aureofaciens. Among 1,000 mutant BPO-A1 haloperoxidases, an organic solvent-stable mutant OST48 with P123L and P241A mutations and a highly active mutant OST959 with H53Y and G162R mutations were selected. The residual activity of mutant OST48 after incubation in 40% (v/v) 1-propanol for 1 h was 1.8-fold higher than that of wild-type BPO-A1. In addition, the OST48 mutant showed higher stability in methanol, ethanol, dimethyl sulfoxide, and N,N-dimethylformamide than wild-type BPO-A1 haloperoxidase. Moreover, after incubation at 80°C for 1 h, the residual activity of mutant OST959 was 4.6-fold higher than that of wild-type BPO-A1. Based on the evaluation of single amino acid-substituted mutant models, stabilization of the hydrophobic core derived from the P123L mutation and the increased number of hydrogen bonds derived from the G162R mutation led to higher organic solvent-stability and thermostability, respectively.
2014-01-01
Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
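The P-value computation described above can be sketched as follows (an illustrative sketch: the normality assumption and interpolation between tabulated compositions follow the abstract, but the function names, parameters, and linear interpolation scheme are assumptions):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def mfe_p_value(mfe, mu, sigma):
    """One-sided P-value: probability that a randomized sequence of the
    same nucleotide composition has an MFE at or below the observed value,
    assuming the pre-computed MFE distribution is normal (mu, sigma)."""
    return normal_cdf((mfe - mu) / sigma)

def interpolate_params(low, high, t):
    """Linear interpolation of tabulated (mu, sigma) between two nearby
    compositions, with 0 <= t <= 1 (illustrative scheme)."""
    (m0, s0), (m1, s1) = low, high
    return (m0 + t * (m1 - m0), s0 + t * (s1 - s0))

# A candidate with MFE two standard deviations below the random mean
# gets a small P-value, flagging it for experimental follow-up.
p = mfe_p_value(-40.0, mu=-30.0, sigma=5.0)
```

Because only the tabulated (mu, sigma) pairs and a CDF evaluation are needed per candidate, the per-sequence cost is trivial, which is the speedup that makes genome-wide screening practical.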
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
NASA Astrophysics Data System (ADS)
Hariharan, Siddharth; Tirodkar, Siddhesh; Bhattacharya, Avik
2016-02-01
Urban area classification is important for monitoring the ever increasing urbanization and studying its environmental impact. Two NASA JPL's UAVSAR datasets of L-band (wavelength: 23 cm) were used in this study for urban area classification. The two datasets used in this study are different in terms of urban area structures, building patterns, their geometric shapes and sizes. In these datasets, some urban areas appear oriented about the radar line of sight (LOS) while some areas appear non-oriented. In this study, roll invariant polarimetric SAR decomposition parameters were used to classify these urban areas. Random Forest (RF), which is an ensemble decision tree learning technique, was used in this study. RF performs parameter subset selection as a part of its classification procedure. In this study, parameter subsets were obtained and analyzed to infer scattering mechanisms useful for urban area classification. The Cloude-Pottier α, the Touzi dominant scattering amplitude αs1 and the anisotropy A were among the top six important parameters selected for both the datasets. However, it was observed that these parameters were ranked differently for the two datasets. The urban area classification using RF was compared with the Support Vector Machine (SVM) and the Maximum Likelihood Classifier (MLC) for both the datasets. RF outperforms SVM by 4% and MLC by 12% in Dataset 1. It also outperforms SVM and MLC by 3.5% and 11% respectively in Dataset 2.
2015-01-01
Background Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) to identify genetic variants that have relatively large effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS with selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNP group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account, so the feature subspaces used to split a node of a tree always contain highly informative SNPs. Results This approach enables one to generate more accurate trees with a lower prediction error, meanwhile possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases, and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction
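The two-stage partition and subspace sampling described above can be sketched as follows (an illustrative sketch; the cutoff values, function names, and sub-group sizes are assumptions, not the authors' implementation):

```python
import random

def partition_snps(p_values, cutoff, strong_cutoff):
    """Stage 1: discard irrelevant SNPs (p >= cutoff). Stage 2: split the
    informative group into highly and weakly informative sub-groups."""
    strong = [s for s, p in p_values.items() if p < strong_cutoff]
    weak = [s for s, p in p_values.items() if strong_cutoff <= p < cutoff]
    return strong, weak

def sample_subspace(strong, weak, k_strong, k_weak, seed=None):
    """Draw the feature subspace for one tree from both informative
    sub-groups, so every node split sees some highly informative SNPs."""
    rng = random.Random(seed)
    return rng.sample(strong, k_strong) + rng.sample(weak, k_weak)

# Toy p-values: "a" is highly informative, "b" weakly, "c"/"d" irrelevant.
p_values = {"a": 0.001, "b": 0.02, "c": 0.2, "d": 0.6}
strong, weak = partition_snps(p_values, cutoff=0.05, strong_cutoff=0.01)
subspace = sample_subspace(strong, weak, k_strong=1, k_weak=1, seed=0)
```

Guaranteeing at least some highly informative SNPs in each tree's subspace is what lets the forest stay accurate while the irrelevant bulk of the genome is never sampled.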
Elhence, Priti; Chaudhary, Rajendra K.; Nityanand, Soniya
2014-01-01
Background Cross-match-compatible platelets are used for the management of thrombocytopenic patients who are refractory to transfusions of randomly selected platelets. Data supporting the effectiveness of platelets that are compatible according to cross-matching with a modified antigen capture enzyme-linked immunosorbent assay (MAC-ELISA or MACE) are limited. This study aimed to determine the effectiveness of cross-match-compatible platelets in an unselected group of refractory patients. Materials and methods One hundred ABO compatible single donor platelet transfusions given to 31 refractory patients were studied. Patients were defined to be refractory if their 24-hour corrected count increment (CCI) was <5×109/L following two consecutive platelet transfusions. Platelets were cross-matched by MACE and the CCI was determined to monitor the effectiveness of platelet transfusions. Results The clinical sensitivity, specificity, positive predictive value and negative predictive value of the MACE-cross-matched platelets for post-transfusion CCI were 88%, 54.6%, 39.3% and 93.2%, respectively. The difference between adequate and inadequate post-transfusion 24-hour CCI for MACE cross-matched-compatible vs incompatible single donor platelet transfusions was statistically significant (p=0.000). The 24-hour CCI (mean±SD) was significantly higher for cross-match-compatible platelets (9,250±026.6) than for incompatible ones (6,757.94±2,656.5) (p<0.0001). Most of the incompatible cross-matches (73.2%) were due to anti-HLA antibodies, alone (55.3% of cases) or together with anti-platelet glycoprotein antibodies (17.9%). Discussion The clinical sensitivity and negative predictive value of platelet cross-matching by MACE were high in this study and such tests may, therefore, be used to select compatible platelets for refractory patients. A high negative predictive value demonstrates the greater chance of an adequate response with cross-matched-compatible platelets. PMID
Mar, Philip L; Raj, Vidya; Black, Bonnie K; Biaggioni, Italo; Shibao, Cyndya A; Paranjape, Sachin Y; Dupont, William D; Robertson, David; Raj, Satish R
2014-01-01
Background Selective serotonin reuptake inhibitors (SSRIs) are often prescribed to patients with postural tachycardia syndrome (POTS), and act at synaptic terminals to increase monoamine neurotransmitters. We hypothesized that they act to increase blood pressure (BP) and attenuate reflex tachycardia, thereby improving symptoms. Acute hemodynamic profiles after SSRI administration in POTS patients have not previously been reported. Methods Patients with POTS (n=39; 37 female; age 39±9 years) underwent a randomized crossover trial with sertraline 50 mg and placebo. Heart rate (HR) and systolic, diastolic, and mean BP were measured with the patient seated and standing for 10 minutes prior to drug or placebo administration, and then hourly for 4 hours. The primary endpoint was standing HR at 4 hours. Results At 4 hours, standing HR and systolic BP were not significantly different between sertraline and placebo. Seated systolic (106±12 mmHg vs. 101±8 mmHg; P=0.041), diastolic (72±8 mmHg vs. 69±8 mmHg; P=0.022), and mean BP (86±9 mmHg vs. 81±9 mmHg; P=0.007) were significantly higher after sertraline administration than after placebo. At 4 hours, symptoms were worse with sertraline than with placebo. Conclusions Sertraline had a modest pressor effect in POTS patients, but this did not translate into a reduced HR or improved symptoms. PMID:24227635
Biglar, Mahmood; Soltani, Khadijeh; Nabati, Farzaneh; Bazl, Roya; Mojab, Faraz; Amanlou, Massoud
2012-01-01
Helicobacter pylori (H. pylori) infection leads to different clinical and pathological outcomes in humans, including chronic gastritis, peptic ulcer disease, gastric neoplasia and even gastric cancer, and its eradication depends upon multi-drug therapy. The most effective therapy is still unknown, which has prompted great efforts to find better and more modern natural or synthetic anti-H. pylori agents. In this report, 21 randomly selected herbal methanolic extracts were evaluated for their inhibition of jack bean urease using the indophenol method as described by Weatherburn. The inhibition potency was measured by UV spectroscopy at 630 nm, which corresponds to the released ammonium. Among these extracts, five showed potent inhibitory activities, with IC50 values ranging from 18 to 35 μg/mL: Matricaria disciforme (IC50: 35 μg/mL), Nasturtium officinale (IC50: 18 μg/mL), Punica granatum (IC50: 30 μg/mL), Camellia sinensis (IC50: 35 μg/mL) and Citrus aurantifolia (IC50: 28 μg/mL). PMID:24250509
ERIC Educational Resources Information Center
Newton, Nicola C.; Conrod, Patricia J.; Slade, Tim; Carragher, Natacha; Champion, Katrina E.; Barrett, Emma L.; Kelly, Erin V.; Nair, Natasha K.; Stapinski, Lexine; Teesson, Maree
2016-01-01
Background: This study investigated the long-term effectiveness of Preventure, a selective personality-targeted prevention program, in reducing the uptake of alcohol, harmful use of alcohol, and alcohol-related harms over a 3-year period. Methods: A cluster randomized controlled trial was conducted to assess the effectiveness of Preventure.…
Pan, Xiao-Yong; Shen, Hong-Bin
2009-01-01
The B-factor measures the uncertainty in the position of an atom within a crystal structure and is highly correlated with protein internal motion. Although the rapid progress of structural biology in recent years has made more accurate protein structures available than ever, the avalanche of new protein sequences emerging in the post-genomic era has widened the gap between known protein sequences and known protein structures. Automated methods that predict the B-factor profile directly from the amino acid sequence are therefore urgently needed for basic research. In this article, we propose a novel approach, called PredBF, to predict the real value of the B-factor. We first extract both global and local features from the protein sequences as well as their evolutionary information; random forest feature selection is then applied to rank the features by importance, and the most important features are input to a two-stage support vector regression (SVR), in which the initial predicted outputs from the first SVR are fed into the second-stage SVR for final refinement. Our results reveal that a systematic analysis of the importance of different features yields deep insight into their different contributions and is necessary for developing effective B-factor prediction tools. The two-layer SVR prediction model designed in this study further enhanced the robustness of predicting the B-factor profile. As a web server, PredBF is freely available at http://www.csbio.sjtu.edu.cn/bioinf/PredBF for academic use.
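The two-stage SVR refinement described here can be sketched as follows. This is a schematic illustration with synthetic stand-in data (random features and targets, not the paper's sequence-derived features), assuming scikit-learn:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                      # stand-in feature matrix
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)   # stand-in B-factor values

# Stage 1: predict the target from the raw features
svr1 = SVR(kernel="rbf").fit(X, y)
stage1_pred = svr1.predict(X)

# Stage 2: refine by feeding the stage-1 prediction back in as an extra feature
X2 = np.column_stack([X, stage1_pred])
svr2 = SVR(kernel="rbf").fit(X2, y)
final_pred = svr2.predict(X2)
```

The design choice is stacking: the second regressor can correct systematic residual errors of the first because it sees both the original features and the first-stage estimate.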
Pischedda, Alison; Friberg, Urban; Stewart, Andrew D.; Miller, Paige M.; Rice, William R.
2015-01-01
The effective population size (Ne) is a fundamental parameter in population genetics that influences the rate of loss of genetic diversity. Sexual selection has the potential to reduce Ne by causing the sex-specific distributions of individuals that successfully reproduce to diverge. To empirically estimate the effect of sexual selection on Ne, we obtained fitness distributions for males and females from an outbred, laboratory-adapted population of Drosophila melanogaster. We observed strong sexual selection in this population (the variance in male reproductive success was ∼14 times higher than that for females), but found that sexual selection had only a modest effect on Ne, which was 75% of the census size. This occurs because the substantial random offspring mortality in this population diminishes the effects of sexual selection on Ne, a result that necessarily applies to other high fecundity species. The inclusion of this random offspring mortality creates a scaling effect that reduces the variance/mean ratios for male and female reproductive success and causes them to converge. Our results demonstrate that measuring reproductive success without considering offspring mortality can underestimate Ne and overestimate the genetic consequences of sexual selection. Similarly, comparing genetic diversity among different genomic components may fail to detect strong sexual selection. PMID:26374275
Menze, Bjoern H; Kelm, B Michael; Masuch, Ralf; Himmelreich, Uwe; Bachert, Peter; Petrich, Wolfgang; Hamprecht, Fred A
2009-01-01
Background Regularized regression methods such as principal component or partial least squares regression perform well in learning tasks on high dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for an explicit feature elimination, but may not be optimally adapted to spectral data due to the topology of its constituent classification trees which are based on orthogonal splits in feature space. Results We propose to combine the best of both approaches, and evaluate the joint use of a feature selection based on a recursive feature elimination using the Gini importance of random forests together with regularized classification methods on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, a feature selection using the Gini feature importance with a regularized classification by discriminant partial least squares regression performed as well as or better than a filtering according to different univariate statistical tests, or using regression coefficients in a backward feature elimination. It outperformed the direct application of the random forest classifier, or the direct application of the regularized classifiers on the full set of features. Conclusion The Gini importance of the random forest provided superior means for measuring feature relevance on spectral data, but – on an optimal subset of features – the regularized classifiers might be preferable over the random forest classifier, in spite of their limitation to model linear dependencies only. A feature selection based on Gini importance, however, may precede a regularized linear classification to identify this optimal subset of features, and to earn a double benefit of both dimensionality reduction and the elimination of noise from the classification task. PMID:19591666
Gearhart, Tricia L; Montelaro, Ronald C; Schurdak, Mark E; Pilcher, Chris D; Rinaldo, Charles R; Kodadek, Thomas; Park, Yongseok; Islam, Kazi; Yurko, Raymond; Marques, Ernesto T A; Burke, Donald S
2016-08-01
Non-biological synthetic oligomers can serve as ligands for antibodies. We hypothesized that a random combinatorial library of synthetic poly-N-substituted glycine oligomers, or peptoids, could represent a random "shape library" in antigen space, and that some of these peptoids would be recognized by the antigen-binding pocket of disease-specific antibodies. We synthesized and screened a one-bead one-compound combinatorial library of peptoids, in which each bead displayed an 8-mer peptoid with ten possible different amines at each position (10^8 theoretical variants). By screening one million peptoid beads we found 112 (approximately 1 in 10,000) that preferentially bound immunoglobulins from human sera known to be positive for anti-HIV antibodies. Reactive peptoids were then re-synthesized and rigorously evaluated in plate-based ELISAs. Four peptoids showed very good, and one showed excellent, properties for establishing a sero-diagnosis of HIV. These results demonstrate the feasibility of constructing sero-diagnostic assays for infectious diseases from libraries of random molecular shapes. In this study we sought proof-of-principle that we could identify a potential diagnostic antibody ligand biomarker for an infectious disease in a random combinatorial library of 100 million peptoids. We believe that this is the first evidence that it is possible to develop sero-diagnostic assays - for any infectious disease - based on screening random libraries of non-biological molecular shapes. PMID:27182050
Gardner, M D; Scott, R
1980-01-01
The results of analysis of blood specimens from randomly selected adults aged 19-88 years in the new town of Cumbernauld were used to establish age- and sex-related reference ranges by the centile method (central 95%) for plasma calcium, phosphate, total protein, albumin, globulins, urea, creatinine, and urate. The possible existence of a subpopulation with a higher reference range for urea is mooted. PMID:7400337
Ziehe, Martin; Gregorius, Hans-Rolf
1981-01-01
Population genetic models, such as differential viability selection between the sexes and differential multiplicative fecundity contributions of the sexes, are considered for a single multiallelic locus. These selection models usually produce deviations of the zygotic genotype frequencies from Hardy-Weinberg proportions. The deviations are investigated (with special emphasis put on equilibrium states) to quantify the effect of selective asymmetry in the two sexes. For many selection regimes, the present results demonstrate a strong affinity of zygotic genotype frequencies for Hardy-Weinberg proportions after two generations, at the latest. It is shown that the deviations of genotypic equilibria from the corresponding Hardy-Weinberg proportions can be expressed and estimated by means of selection components of only that sex with the lower selection intensity. This corresponds to the well-known fact that viability selection acting in only one sex yields Hardy-Weinberg equilibria. PMID:17249085
Surdina, A V; Rassokhin, T I; Golovin, A V; Spiridonova, V A; Kraal, B; Kopylov, A M
2008-06-01
In E. coli cells, biogenesis of the small ribosomal subunit is regulated by RNA-protein interactions involving protein S7. S7 initiates subunit assembly by interacting with 16S rRNA. When the level of rRNA synthesis shifts down, free S7 inhibits its own translation by interacting with a specific 96-nucleotide region of the streptomycin (str) mRNA between the S12 and S7 cistrons (the intercistron). Many bacteria lack this extended intercistron, which makes it challenging to develop specific approaches for finding putative mRNA regulatory regions able to interact with proteins. The paper describes the application of the SERF approach (Selection of Random RNA Fragments) to reveal regulatory regions of the str mRNA. A set of random DNA fragments was generated from the str operon by random hydrolysis and then transcribed into RNA; fragments able to bind protein S7 (serfamers) were selected over iterative rounds. S7 binds to a single 109-nucleotide serfamer (RNA109) derived from the intercistron. After multiple rounds of copying and selection, an intercistronic mutant (RNA109) with enhanced affinity for S7 was isolated. RNA109 binds the protein better than the authentic intercistronic str mRNA; the apparent dissociation constants are 26 +/- 5 and 60 +/- 8 nM, respectively. The location of the S7 binding site on the mRNA, as well as a putative mode of regulation of the coupled translation of the S12 and S7 cistrons, is hypothesized.
Tazzyman, Samuel J; Iwasa, Yoh
2010-06-01
The variation in color pattern between populations of the poison-dart frog Oophaga pumilio across the Bocas del Toro archipelago in Panama is suggested to be due to sexual selection, as two other nonsexually selecting Dendrobatid species found in the same habitat and range do not exhibit this variation. We theoretically test this assertion using a quantitative genetic sexual selection model incorporating aposematic coloration and random drift. We find that sexual selection could cause the observed variation via a novel process we call "coupled drift." Within our model, for certain parameter values, sexual selection forces frog color to closely follow the evolution of female preference. Any between-population variation in preference due to genetic drift is passed on to color. If female preference in O. pumilio is strongly affected by drift, whereas color in the nonsexually selecting Dendrobatid species is not, coupled drift will cause increased between-population phenotypic variation. However, with different parameter values, coupled drift will result in between-population variation in color being suppressed compared to its neutral value, or in little or no effect. We suggest that coupled drift is a novel theoretical process that could have a role linking sexual selection with speciation both in O. pumilio, and perhaps more generally. PMID:20015236
Miceli, R M; DeGraaf, M E; Fischer, H D
1994-01-01
Libraries of random peptides can be screened to identify species which interact with antibodies or receptors. Similarly, maps of native molecular interactions can frequently be deduced by screening a limited set of peptide fragments derived from sequences within a native antigen or ligand. However, the existence of cross-reactive sequences that mimic original epitopes and the limited replaceability of amino acid residues suggest that the sequence space accessible by a receptor can be much broader. Definition of this space is of particular importance where structural information is required for peptidomimetic or drug design. We have used a two-stage selection scheme to expand the sequence space accessible by a phage display library and to define peptide epitopes of the anti-FLAG octapeptide monoclonal M2 antibody. Affinity selection of a primary library of 2×10^6 random decapeptides identified a non-contiguous core of three residues in the binding motif Tyr-Lys-Xaa-Xaa-Asp. A second stage library with 2×10^7 individual clones bearing the core motif but with the remaining flanking and internal residues re-randomized permitted access to a broader sequence space represented in a library equivalent to several orders of magnitude larger. Data here demonstrate that extended access to binding sequence space permitted by multi-stage screening of phage display libraries can reveal not only essential residues required for ligand binding, but also the ligand structural range permitted within the receptor binding pocket.
Worrilow, K.C.; Eid, S.; Woodhouse, D.; Perloe, M.; Smith, S.; Witmyer, J.; Ivani, K.; Khoury, C.; Ball, G.D.; Elliot, T.; Lieberman, J.
2013-01-01
STUDY QUESTION Does the selection of sperm for ICSI based on their ability to bind to hyaluronan improve the clinical pregnancy rates (CPR) (primary end-point), implantation (IR) and pregnancy loss rates (PLR)? SUMMARY ANSWER In couples where ≤65% of sperm bound hyaluronan, the selection of hyaluronan-bound (HB) sperm for ICSI led to a statistically significant reduction in PLR. WHAT IS KNOWN AND WHAT THIS PAPER ADDS HB sperm demonstrate enhanced developmental parameters which have been associated with successful fertilization and embryogenesis. Sperm selected for ICSI using a liquid source of hyaluronan achieved an improvement in IR. A pilot study by the primary author demonstrated that the use of HB sperm in ICSI was associated with improved CPR. The current study represents the single largest prospective, multicenter, double-blinded and randomized controlled trial to evaluate the use of hyaluronan in the selection of sperm for ICSI. DESIGN Using the hyaluronan binding assay, an HB score was determined for the fresh or initial (I-HB) and processed or final semen specimen (F-HB). Patients were classified as >65% or ≤65% I-HB and stratified accordingly. Patients with I-HB scores ≤65% were randomized into control and HB selection (HYAL) groups whereas patients with I-HB >65% were randomized to non-participatory (NP), control or HYAL groups, in a ratio of 2:1:1. The NP group was included in the >65% study arm to balance the higher prevalence of patients with I-HB scores >65%. In the control group, oocytes received sperm selected via the conventional assessment of motility and morphology. In the HYAL group, HB sperm meeting the same visual criteria were selected for injection. Patient participants and clinical care providers were blinded to group assignment. PARTICIPANTS AND SETTING Eight hundred two couples treated with ICSI in 10 private and hospital-based IVF programs were enrolled in this study. Of the 484 patients stratified to the I-HB > 65% arm, 115
NASA Astrophysics Data System (ADS)
Goudarzi, Nasser
2016-04-01
In this work, two new and powerful chemometrics methods are applied to the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. Radial basis function-partial least squares (RBF-PLS) and random forest (RF) models are constructed to predict the 19F chemical shifts. No separate variable selection method was used, because the RF method itself can serve as both a variable selection and a modeling technique. The effects of the important parameters governing the RF prediction power, namely the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training and prediction sets of the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. The correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. These results reveal that the RF model can be used as a powerful chemometrics tool for quantitative structure-property relationship (QSPR) studies.
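The two RF tuning parameters named above map directly onto standard random forest implementations; a minimal sketch with scikit-learn on synthetic stand-in data (the descriptors, targets, and parameter values are illustrative, not the study's):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 20))                              # stand-in descriptors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=150)  # stand-in 19F shifts

X_train, X_test = X[:100], X[100:]
y_train, y_test = y[:100], y[100:]

# nt -> n_estimators (number of trees);
# m  -> max_features (variables randomly tried at each split)
rf = RandomForestRegressor(n_estimators=500, max_features=5, random_state=0)
rf.fit(X_train, y_train)

# Root-mean-square error of prediction on the held-out set
rmsep = mean_squared_error(y_test, rf.predict(X_test)) ** 0.5
```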
NASA Astrophysics Data System (ADS)
Choo, Yen; Klug, Aaron
1994-11-01
We have used two selection techniques to study sequence-specific DNA recognition by the zinc finger, a small, modular DNA-binding minidomain. We have chosen zinc fingers because they bind as independent modules and so can be linked together in a peptide designed to bind a predetermined DNA site. In this paper, we describe how a library of zinc fingers displayed on the surface of bacteriophage enables selection of fingers capable of binding to given DNA triplets. The amino acid sequences of selected fingers which bind the same triplet are compared to examine how sequence-specific DNA recognition occurs. Our results can be rationalized in terms of coded interactions between zinc fingers and DNA, involving base contacts from a few α-helical positions. In the paper following this one, we describe a complementary technique which confirms the identity of amino acids capable of DNA sequence discrimination from these positions.
Hatanaka, Takaaki; Ohzono, Shinji; Park, Mirae; Sakamoto, Kotaro; Tsukamoto, Shogo; Sugita, Ryohei; Ishitobi, Hiroyuki; Mori, Toshiyuki; Ito, Osamu; Sorajo, Koichi; Sugimura, Kazuhisa; Ham, Sihyun; Ito, Yuji
2012-12-14
The phage display system is a powerful tool to design specific ligands for target molecules. Here, we used disulfide-constrained random peptide libraries constructed with the T7 phage display system to isolate peptides specific to human IgA. The binding clones (A1-A4) isolated by biopanning exhibited clear specificity for human IgA, but the synthetic peptide derived from the A2 clone exhibited a low specificity/affinity (Kd = 1.3 μM). Therefore, we tried to improve the peptide using a partially randomized phage display library and mutational studies on the synthetic peptides. The designed Opt-1 peptide exhibited a 39-fold higher affinity (Kd = 33 nM) than the A2 peptide. An Opt-1 peptide-conjugated column was used to purify IgA from human plasma. However, the recovered IgA fraction was contaminated with other proteins, indicating nonspecific binding. To design a peptide with increased binding specificity, we examined the structural features of Opt-1 and the Opt-1-IgA complex using all-atom molecular dynamics simulations with explicit water. The simulation results revealed that the Opt-1 peptide displayed partial helicity in the N-terminal region and possessed a hydrophobic cluster that played a significant role in tight binding with IgA-Fc. However, these hydrophobic residues of Opt-1 may contribute to nonspecific binding with other proteins. To increase binding specificity, we introduced several mutations in the hydrophobic residues of Opt-1. The resultant Opt-3 peptide exhibited high specificity and high binding affinity for IgA, leading to successful isolation of IgA without contamination.
2016-01-01
Purpose: The goal of this study was to characterize the difficulty index of the items in the skills test components of the class I and II Korean emergency medical technician licensing examination (KEMTLE), which requires examinees to select items randomly. Methods: The results of 1,309 class I KEMTLE examinations and 1,801 class II KEMTLE examinations in 2013 were subjected to analysis. Items from the basic and advanced skills test sections of the KEMTLE were compared to determine whether some were significantly more difficult than others. Results: In the class I KEMTLE, all 4 of the items on the basic skills test showed significant variation in difficulty index (P<0.01), as did 4 of the 5 items on the advanced skills test (P<0.05). In the class II KEMTLE, 4 of the 5 items on the basic skills test showed significantly different difficulty indices (P<0.01), as did all 3 of the advanced skills test items (P<0.01). Conclusion: In the skills test components of the class I and II KEMTLE, the procedure in which examinees randomly select questions should be revised to require examinees to respond to a set of fixed items in order to improve the reliability of the national licensing examination. PMID:26883810
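In classical test theory, the difficulty index compared above is simply the proportion of examinees who pass an item; a minimal sketch with illustrative counts (not the study's actual data):

```python
def difficulty_index(n_passed, n_attempted):
    """Classical item difficulty index: proportion of examinees passing the item.

    Higher values mean an easier item; large spread across randomly
    assigned items implies unequal difficulty between examinees.
    """
    return n_passed / n_attempted

# Hypothetical item: 1,047 of 1,309 examinees pass
p = difficulty_index(1047, 1309)
# p is about 0.80, a relatively easy item
```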
Rebulla, Paolo; Morelati, Fernanda; Revelli, Nicoletta; Villa, Maria Antonietta; Paccapelo, Cinzia; Nocco, Angela; Greppi, Noemi; Marconi, Maurizio; Cortelezzi, Agostino; Fracchiolla, Nicola; Martinelli, Giovanni; Deliliers, Giorgio Lambertenghi
2004-04-01
In 1999, we implemented an automated platelet cross-matching (XM) programme to select compatible platelets from the local inventory for patients refractory to random donor platelets. In this study, we evaluated platelet count increments in 40 consecutive refractory patients (8.3% of 480 consecutive platelet recipients) given 569 cross-match-negative platelets between April 1999 and December 2001. XM was performed automatically with a commercially available immunoadherence assay. Pre-, 1- and 24-h post-transfusion platelet counts (mean±SD) for the 569 XM-negative platelet transfusions containing 302±71×10^9 platelets were 7.7±5.5, 32.0±21.0 and 16.8±15.5×10^9/l respectively. Increments were significantly higher (P < 0.05, t-test) than those observed in the same patients given 303 random platelet pools (dose = 318±52×10^9 platelets) during the month before refractoriness was detected, when pre-, 1- and 24-h post-transfusion counts were 7.0±8.6, 15.9±16.1 and 9.6±12.8×10^9/l respectively. The cost of the platelet XM disposable kit per transfusion to produce 1-h post-transfusion platelet count increments >10×10^9/l was €447. This programme enabled the rapid selection of effective platelets for refractory patients, from the local inventory. PMID:15015974
Kandaswamy, Krishna Kumar; Pugalenthi, Ganesan; Kalies, Kai-Uwe; Hartmann, Enno; Martinetz, Thomas
2013-01-21
The extracellular matrix (ECM) is a major component of tissues of multicellular organisms. It consists of secreted macromolecules, mainly polysaccharides and glycoproteins. Malfunctions of ECM proteins lead to severe disorders such as Marfan syndrome, osteogenesis imperfecta, numerous chondrodysplasias, and skin diseases. In this work, we report a random forest approach, EcmPred, for the prediction of ECM proteins from protein sequences. EcmPred was trained on a dataset containing 300 ECM and 300 non-ECM proteins and tested on a dataset containing 145 ECM and 4187 non-ECM proteins. EcmPred achieved 83% accuracy on the training and 77% on the test dataset. EcmPred predicted 15 out of 20 experimentally verified ECM proteins. By scanning the entire human proteome, we predicted novel ECM proteins validated with gene ontology and InterPro. The dataset and standalone version of the EcmPred software is available at http://www.inb.uni-luebeck.de/tools-demos/Extracellular_matrix_proteins/EcmPred.
Yamaguchi, Takeshi; Mukai, Hirofumi
2012-12-01
Changes in Ki-67 may be a useful predictor of efficacy for preoperative therapy in breast cancer. This randomized Phase II trial will compare standard preoperative chemotherapy comprising paclitaxel and trastuzumab with Ki-67 index guided preoperative chemotherapy in patients with human epidermal growth factor receptor 2-positive operable breast cancer. In the Ki-67 index guided therapy, paclitaxel and trastuzumab were administered initially and the Ki-67 index is evaluated from biopsied specimens after 2 weeks of preoperative chemotherapy. The subsequent chemotherapy regimen is modified according to changes in the Ki-67 index from the start of therapy. If the Ki-67 index is reduced as expected, paclitaxel and trastuzumab are continued. If the Ki-67 index is not reduced as expected, the chemotherapy regimen is changed to epirubicin, cyclophosphamide and trastuzumab. The primary endpoint is the rate of pathological complete response. The secondary endpoints are the objective response rate, disease-free survival and overall survival. Two hundred patients were planned to be accrued.
Sugden, Nicole A.; Moulson, Margaret C.
2015-01-01
Psychological and developmental research has been critiqued for the lack of diversity of research samples. Because differences in culture, race, and ethnicity can influence participant behavior, limited diversity limits the generalizability of the findings. These differences may also impact how participants behave in response to recruitment attempts, which suggests that recruitment itself may be leveraged to increase sample diversity. The goal of the current study was to determine what factors, within a recruitment interaction, could be leveraged to increase success and diversity when recruiting families with children for developmental research. Study 1 found three factors influenced success: (1) recruitment was more successful when other potential participants were also interested (i.e., recruiters were busy), (2) recruiters of particular races were more successful than recruiters of other races, and (3) differences in success were related to what the recruiter said to engage the potential participant (i.e., the script). The latter two factors interacted, suggesting some recruiters were using less optimal scripts. To improve success rates, study 2 randomly assigned scripts to recruiters and encouraged them to recruit more vigorously during busy periods. Study 2 found that two factors influenced success: (1) some scripts were more successful than others and (2) we were more successful at recruiting non-White potential participants than White participants. These two interacted, with some scripts being more successful with White and other scripts being more successful with non-White families. This intervention significantly increased recruitment success rate by 8.1% and the overall number of families recruited by 15.3%. These findings reveal that empirically evaluating and tailoring recruitment efforts based on the most successful strategies is effective in boosting diversity through increased participation of children from non-White families. PMID:25972829
Davitkov, Perica; Chandar, Apoorva Krishna; Hirsch, Amy; Compan, Anita; Silveira, Marina G.; Anthony, Donald D.; Smith, Suzanne; Gideon, Clare; Bonomo, Robert A.; Falck-Ytter, Yngve
2016-01-01
, pragmatic randomized controlled trials are necessary for guidance beyond just acquisition costs and to make evidence-based formulary selections when multiple effective treatments are available. (ClinicalTrials.gov registration: NCT02113631). PMID:27741230
Baron, Jean-Pierre; Tully, Thomas; Le Galliard, Jean-François
2010-10-01
When environmental conditions exert sex-specific selection on offspring, mothers should benefit from biasing their sex allocation towards the sex with the highest fitness in a given environment. Yet, studies show mixed support for such adaptive strategies in vertebrates, which may be due to mechanistic constraints and/or weak selection on facultative sex allocation. In an attempt to disentangle these alternatives, we quantified sex-specific fitness returns and sex allocation (sex ratio and sex-specific mass at birth) according to maternal factors (body size, age, birth date, and litter size), habitat, and year in a viviparous snake with genotypic sex determination. We used data on 106 litters from 19 years of field survey in two nearby habitats occupied by the meadow viper Vipera ursinii ursinii in south-eastern France. Maternal reproductive investment and habitat quality had no differential effects on the growth and survival of sons and daughters. Sex ratio at birth was balanced despite a slight female-biased mortality before birth. No sexual mass dimorphism between offspring was evident. Sex allocation was almost random apart from a trend towards more male-biased litters as females grew older, which could be explained by an inbreeding avoidance strategy. Thus, a weak selection for facultative sex allocation seems sufficient to explain the almost equal sex allocation in the meadow viper.
Iwamoto, Takahiro; Watanabe, Yoshiki; Sakamoto, Youichi; Suzuki, Toshiyasu; Yamago, Shigeru
2011-06-01
[n]Cycloparaphenylenes (n = 8-13, CPPs) were synthesized, and their physical properties were systematically investigated. [8] and [12]CPPs were selectively prepared from the reaction of 4,4'-bis(trimethylstannyl)biphenyl and 4,4''-bis(trimethylstannyl)terphenyl, respectively, with Pt(cod)Cl(2) (cod = 1,5-cyclooctadiene) through square-shaped tetranuclear platinum intermediates. A mixture of [8]-[13]CPPs was prepared in good combined yields by mixing biphenyl and terphenyl precursors with platinum sources. Products were easily separated and purified by using gel permeation chromatography. In the (1)H NMR spectra, the protons of the CPPs shifted to lower field as n increased due to an anisotropic effect from the nearby PP moieties. Although the UV-vis spectra were rather insensitive to the size of the CPPs, the fluorescence spectra changed significantly in relation to their size. A larger Stokes shift was observed for the smaller CPPs. Redox properties of the CPPs were measured for the first time by using cyclic voltammetry, and the smaller CPPs had lower oxidation potentials. The results are consistent with the HOMO energies of the CPPs, with the smaller CPPs having higher energies. PMID:21542589
Leucht, Stefan
2004-03-01
The pharmacological profiles of the atypical antipsychotics, clozapine, olanzapine, quetiapine and risperidone, all show a combined serotonin (5-HT2) and dopamine type-2 (D2) receptor antagonism. Amisulpride, a highly selective dopamine D2/D3 receptor antagonist that binds preferentially to receptors in the mesolimbic system, is also an 'atypical' antipsychotic despite having a different receptor-affinity profile. A meta-analysis of 18 clinical trials was undertaken to compare the efficacy and safety of amisulpride with conventional antipsychotics. The improvement in mental state was assessed using the Brief Psychiatric Rating Scale (BPRS) or the Scale for the Assessment of Negative Symptoms (SANS). In a pooled analysis of 10 studies of acutely ill patients, amisulpride was significantly more effective than conventional neuroleptics with regard to improvement of global symptoms. Amisulpride is, to date, the only atypical antipsychotic for which several studies on patients suffering predominantly from negative symptoms have been published. In four such studies, amisulpride was significantly superior to placebo. Three small studies with conventional neuroleptics as a comparator showed only a trend in favour of amisulpride in this regard. Amisulpride was associated with fewer extrapyramidal side-effects and fewer drop-outs due to adverse events than conventional neuroleptics. These results clearly show that amisulpride is an 'atypical' antipsychotic, and they cast some doubt on the notion that combined 5-HT2-D2 antagonism is the only reason for the high efficacy against negative symptoms and fewer extrapyramidal side-effects.
NASA Technical Reports Server (NTRS)
White, D. H.
1980-01-01
A general theory is presented for the origin of a self-replicating chemical system, termed an autogen, which is capable of both crude replication and translation (protein synthesis). The theory requires the availability of free energy and monomers to the system, a significant background low-yield synthesis of kinetically stable oligopeptides and oligonucleotides, the localization of the oligomers, crude oligonucleotide selectivity of amino acids during oligopeptide synthesis, crude oligonucleotide replication, and two short peptide families which catalyze replication and translation, to produce a localized group of at least one copy each of two protogenes and two protoenzymes. The model posits a process of random oligomerization, followed by the random nucleation of functional components and the rapid autocatalytic growth of the functioning autogen to macroscopic amounts, to account for the origin of the first self-replicating system. Such a process contains steps of such high probability and short time periods that it is suggested that the emergence of an autogen in a laboratory experiment of reasonable time scale may be possible.
Stelzl, U; Spahn, C M; Nierhaus, K H
2000-04-25
Two-thirds of the 54 proteins of the Escherichia coli ribosome interact directly with the rRNAs, but the rRNA binding sites of only a very few proteins are known. We present a method (selection of random RNA fragments; SERF) that can identify the minimal binding region for proteins within ribonucleoprotein complexes such as the ribosome. The power of the method is exemplified with the ribosomal proteins L4 and L6. Binding sequences are identified for both proteins and characterized by phosphorothioate footprinting. Surprisingly, the binding region of L4, a 53-nt rRNA fragment of domain I of 23S rRNA, can simultaneously and independently bind L24, one of the two assembly initiator proteins of the large subunit.
Nasr Esfahani, Mohammad Hossein; Deemeh, Mohammad Reza; Tavalaee, Marziyeh; Sekhavati, Mohammad Hadi; Gourabi, Hamid
2016-01-01
Background Selection of sperm for intra-cytoplasmic sperm injection (ICSI) is usually considered the ultimate technique to alleviate male-factor infertility. In routine ICSI, selection is based on morphology and viability, which does not necessarily preclude the chance injection of DNA-damaged or apoptotic sperm into the oocyte. Sperm with high negative surface electrical charge, named "Zeta potential", are mature and more likely to have intact chromatin. In addition, X-bearing spermatozoa carry more negative charge. Therefore, we aimed to compare the clinical outcomes of the Zeta procedure with routine sperm selection in infertile men who were candidates for ICSI. Materials and Methods From a total of 203 ICSI cycles studied, 101 cycles were allocated to the density gradient centrifugation (DGC)/Zeta group and the remaining 102 were included in the DGC group in this prospective study. Clinical outcomes were compared between the two groups. The ratios of X- and Y-bearing sperm were assessed by fluorescence in situ hybridization (FISH) and quantitative polymerase chain reaction (qPCR) methods in 17 independent semen samples. Results In the present double-blind randomized clinical trial, a significant increase in top quality embryos and pregnancy rate was observed in the DGC/Zeta group compared to the DGC group. Moreover, the sex ratio (XY/XX) at birth was significantly lower in the DGC/Zeta group compared to the DGC group despite a similar ratio of X- and Y-bearing spermatozoa following Zeta selection. Conclusion The Zeta method not only improves the percentage of top quality embryos and pregnancy outcome but also alters the sex ratio compared to the conventional DGC method, despite no significant change in the ratio of the X- and Y-bearing sperm population (Registration number: IRCT201108047223N1). PMID:27441060
NASA Astrophysics Data System (ADS)
Yu, Muxi; Fang, Yichen; Wang, Zongwei; Pan, Yue; Li, Ming; Cai, Yimao; Huang, Ru
2016-05-01
In this paper, we propose a TaOx resistive switching random access memory (RRAM) device with an operation-polarity-dependent self-selection effect by introducing a highly doped silicon (Si) electrode, which is promising for large-scale integration. It is observed that with highly doped Si as the bottom electrode (BE), the RRAM devices show a non-linear (>10^3) I-V characteristic during negative Forming/Set operation and linear behavior during positive Forming/Set operation. The underlying mechanisms for the linear and non-linear behaviors at low resistance states of the proposed device are extensively investigated by varying operation modes, different metal electrodes, and Si doping type. Experimental data and theoretical analysis demonstrate that the operation-polarity-dependent self-selection effect in our devices originates from the Schottky barrier between the TaOx layer and the interfacial SiOx formed by reaction between the highly doped Si BE and migrated oxygen ions in the conductive filament area.
Shukla, Girja S.; Krag, David N.
2010-01-01
Novel phage-displayed random linear dodecapeptide (X12) and cysteine-constrained decapeptide (CX10C) libraries constructed in fusion to the amino-terminus of P99 β-lactamase molecules were used for identifying β-lactamase-linked cancer cell-specific ligands. The size and quality of both libraries were comparable to the standards of other reported phage display systems. Using the single-round panning method based on phage DNA recovery, we identified several β-lactamase fusion peptides that specifically bind to live human breast cancer MDA-MB-361 cells. The β-lactamase fusion to the peptides helped in conducting the enzyme activity-based clone normalization and cell-binding screening in a very time- and cost-efficient manner. The methods were suitable for 96-well readout as well as microscopic imaging. The success of the biopanning was indicated by the presence of ~40% cancer cell-specific clones among recovered phages. One of the binding clones appeared multiple times. The cancer cell-binding fusion peptides also shared several significant motifs. This opens a new way of preparing and selecting phage display libraries. The cancer cell-specific β-lactamase-linked affinity reagents selected from these libraries can be used for any application that requires a reporter for tracking the ligand molecules. Furthermore, these affinity reagents also have potential for direct use in the targeted enzyme prodrug therapy of cancer. PMID:19751096
Wiener, J; Itokazu, G; Nathan, C; Kabins, S A; Weinstein, R A
1995-04-01
A randomized, double-blind, placebo-controlled trial of selective decontamination of the oropharynx and gastrointestinal tract was conducted on 61 intubated patients in a medical-surgical intensive care unit (ICU) to determine the impact on nosocomial pneumonia, other infections, and emergence of colonization or infection with antibiotic-resistant bacteria. Over 8 months, 30 patients received an oral paste and solution containing polymyxin, gentamicin, and nystatin; 31 patients received a placebo paste and solution. At study entry, patients in both groups were seriously ill (mean acute physiologic score, 27.2), frequently had pulmonary infiltrates (73.8%), and were likely to be receiving systemic antibiotics (86.9%). There were no differences between study patients and control patients in these characteristics or in frequency of any nosocomial infection (50% vs. 55%), nosocomial pneumonia (27% vs. 26%), febrile days (2.3 vs. 2.0), duration of antibiotic therapy (14.0 vs. 13.4), or mortality rates (37% vs. 48%). There was no difference in infections caused by antibiotic-resistant gram-negative bacilli, although a trend towards more frequent infection with gentamicin-resistant enterococci was found for study patients. Selective decontamination did not appear to be effective in our very ill medical-surgical ICU patients, although the number of patients in our trial was sufficient to detect only a 50% or greater reduction in pneumonia rates.
Jimenez, Roland; Hauser, Robert A.; Factor, Stewart A.; Burke, Joshua; Mandri, Daniel; Castro‐Gayol, Julio C.
2015-01-01
ABSTRACT Background Tardive dyskinesia is a persistent movement disorder induced by chronic neuroleptic exposure. NBI‐98854 is a novel, highly selective, vesicular monoamine transporter 2 inhibitor. We present results of a randomized, 6‐week, double‐blind, placebo‐controlled, dose‐titration study evaluating the safety, tolerability, and efficacy of NBI‐98854 for the treatment of tardive dyskinesia. Methods Male and female adult subjects with moderate or severe tardive dyskinesia were included. NBI‐98854 or placebo was given once per day starting at 25 mg and then escalated by 25 mg to a maximum of 75 mg based on dyskinesia and tolerability assessment. The primary efficacy endpoint was the change in Abnormal Involuntary Movement Scale from baseline at week 6 scored by blinded, central video raters. The secondary endpoint was the Clinical Global Impression of Change—Tardive Dyskinesia score assessed by the blinded investigator. Results Two hundred five potential subjects were screened, and 102 were randomized; 76% of NBI‐98854 subjects and 80% of placebo subjects reached the maximum allowed dose. Abnormal Involuntary Movement Scale scores for NBI‐98854 compared with placebo were significantly reduced (p = 0.0005). Active drug was also superior on the Clinical Global Impression of Change—Tardive Dyskinesia (p < 0.0001). Treatment‐emergent adverse event rates were 49% in the NBI‐98854 and 33% in the placebo subjects. The most common adverse events (active vs. placebo) were fatigue and headache (9.8% vs. 4.1%) and constipation and urinary tract infection (3.9% vs. 6.1%). No clinically relevant changes in safety assessments were noted. Conclusion NBI‐98854 significantly improved tardive dyskinesia and was well tolerated in patients. These results support the phase 3 clinical trials of NBI‐98854 now underway. © 2015 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
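The two concepts this activity teaches are easy to confuse; a minimal Python sketch may make the distinction concrete. The roster, sample sizes, and group sizes below are illustrative assumptions, not details from the article:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible
population = [f"student_{i}" for i in range(100)]

# Random selection: draw a sample FROM the population.
# This supports generalizing results back to the population.
sample = random.sample(population, 10)

# Random assignment: split the sample INTO treatment and control groups.
# This supports causal inference by balancing confounders on average.
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:5], shuffled[5:]
```

Note that the two randomizations are independent: a study can have one without the other, which is exactly the pitfall the classroom activity is designed to expose.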
2013-01-01
Background Artemisinin-based combination therapy is currently recommended by the World Health Organization as first-line treatment of uncomplicated malaria. Recommendations were adapted in 2010 regarding rescue treatment in case of treatment failure. Instead of quinine monotherapy, it should be combined with an antibiotic with antimalarial properties; alternatively, another artemisinin-based combination therapy may be used. However, for informing these policy changes, no clear evidence is yet available. The need to provide the policy makers with hard data on the appropriate rescue therapy is obvious. We hypothesize that the efficacy of the same artemisinin-based combination therapy used as rescue treatment is as efficacious as quinine + clindamycin or an alternative artemisinin-based combination therapy, without the risk of selecting drug resistant strains. Design We embed a randomized, open label, three-arm clinical trial in a longitudinal cohort design following up children with uncomplicated malaria until they are malaria parasite free for 4 weeks. The study is conducted in both the Democratic Republic of Congo and Uganda and performed in three steps. In the first step, the pre-randomized controlled trial (RCT) phase, children aged 12 to 59 months with uncomplicated malaria are treated with the recommended first-line drug and constitute a cohort that is passively followed up for 42 days. If the patients experience an uncomplicated malaria episode between days 14 and 42 of follow-up, they are randomized either to quinine + clindamycin, or an alternative artemisinin-based combination therapy, or the same first-line artemisinin-based combination therapy to be followed up for 28 additional days. If between days 14 and 28 the patients experience a recurrent parasitemia, they are retreated with the recommended first-line regimen and actively followed up for another 28 additional days (step three; post-RCT phase). The same methodology is followed for each subsequent
Monnet, Céline; Jorieux, Sylvie; Urbain, Rémi; Fournier, Nathalie; Bouayadi, Khalil; De Romeuf, Christophe; Behrens, Christian K.; Fontayne, Alexandre; Mondon, Philippe
2015-01-01
Despite the reasonably long half-life of immunoglobulin G (IgG), market pressure for higher patient convenience while conserving efficacy continues to drive IgG half-life improvement. IgG half-life is dependent on the neonatal Fc receptor (FcRn), which among other functions, protects IgG from catabolism. FcRn binds the Fc domain of IgG at an acidic pH ensuring that endocytosed IgG will not be degraded in lysosomal compartments and will then be released into the bloodstream. Consistent with this mechanism of action, several Fc-engineered IgG with increased FcRn affinity and conserved pH dependency were designed and resulted in longer half-life in vivo in human FcRn-transgenic mice (hFcRn), cynomolgus monkeys, and recently in healthy humans. These IgG variants were usually obtained by in silico approaches or directed mutagenesis in the FcRn-binding site. Using random mutagenesis, combined with a pH-dependent phage display selection process, we isolated IgG variants with improved FcRn-binding, which exhibited longer in vivo half-life in hFcRn mice. Interestingly, many mutations enhancing Fc/FcRn interaction were located at a distance from the FcRn-binding site validating our random molecular approach. Directed mutagenesis was then applied to generate new variants to further characterize our IgG variants and the effect of the mutations selected. Since these mutations are distributed over the whole Fc sequence, binding to other Fc effectors, such as complement C1q and FcγRs, was dramatically modified, even by mutations distant from these effectors’ binding sites. Hence, we obtained numerous IgG variants with increased FcRn-binding and different binding patterns to other Fc effectors, including variants without any effector function, providing distinct “fit-for-purpose” Fc molecules. We therefore provide evidence that half-life and effector functions should be optimized simultaneously as mutations can have unexpected effects on all Fc receptors that are critical
ERIC Educational Resources Information Center
Ben-Ari, Morechai
2004-01-01
The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…
2010-01-01
Background Depression is a frequently observed and disabling condition in primary care, mainly treated by Primary Care Physicians with antidepressant drugs. Psychological interventions are recommended as first-line treatment by the most authoritative international guidelines but little evidence is available on their efficacy and effectiveness for mild depression. Methods/Design This multi-center randomized controlled trial was conducted in 9 Italian centres with the aim to compare the efficacy of Inter-Personal Counseling, a brief structured psychological intervention, to that of Selective Serotonin Reuptake Inhibitors. Patients with depressive symptoms referred by Primary Care Physicians to psychiatric consultation-liaison services were eligible for the study if they met the DSM-IV criteria for major depression, had a score ≥13 on the 21-item Hamilton Depression Rating Scale, and were at their first or second depressive episode. The primary outcome was remission of depressive symptoms at 2-months, defined as a HDRS score ≤ 7. Secondary outcome measures were improvement in global functioning and recurrence of depressive symptoms at 12-months. Patients who did not respond to Inter-Personal Counseling or Selective Serotonin Reuptake Inhibitors at 2-months received augmentation with the other treatment. Discussion This trial addresses some of the shortcomings of existing trials targeting major depression in primary care by evaluating the comparative efficacy of a brief psychological intervention that could be easily disseminated, by including a sample of patients with mild/moderate depression and by using different outcome measures. Trial registration Australian New Zealand Clinical Trials Registry ACTRN12608000479303 PMID:21108824
NASA Astrophysics Data System (ADS)
Gregovich, A.; Feinberg, J. M.; Steffen, A.; Sternberg, R. S.
2014-12-01
Stone tools are one of the most enduring forms of ancient human behavior available to anthropologists. The geologic materials that comprise stone tools are a reflection of the rocks that were available locally or through trade, as are the intended use of the tools and the knapping technology needed to produce them. Investigation of the rock magnetic and geochemical characteristics of the artifacts and the geological source materials provides a baseline to explore these past behaviors. This study uses rock magnetic properties to explore the raw material selection criteria involved in the production of obsidian tools in the region around Valles Caldera in northern New Mexico. Obsidian is locally abundant and was traded by tribes across the central United States. Here we compare the rock magnetic properties of a sample of obsidian projectile points (N = 25) that have been geochemically sourced to the Cerro Toledo obsidian flow with geological samples collected from four sites within the same flow (N = 135). This collection of archaeological artifacts, albeit small, contains representatives of at least 8 different point styles that were used over 6000 years from the Archaic into the Late Prehistoric. Bulk rock hysteresis parameters (Mr, Ms, Bc, and Bcr) and low-field susceptibility (χ) measurements show that the projectile points generally contain a lower concentration of magnetic minerals than the geologic samples. For example, the artifacts' median Ms value is 2.9 × 10^-3 Am^2 kg^-1, while that of the geological samples is 6.5 × 10^-3 Am^2 kg^-1. The concentration of magnetic minerals in obsidian is a proxy for the concentration of microlites in general, and this relationship suggests that although obsidian was locally abundant, toolmakers employed non-random selection criteria resulting in generally lower concentrations of microlites in their obsidian tools.
Schoff, P.K.; Johnson, C.M.; Schotthoefer, A.M.; Murphy, J.E.; Lieske, C.; Cole, R.A.; Johnson, L.B.; Beasley, V.R.
2003-01-01
Skeletal malformation rates for several frog species were determined in a set of randomly selected wetlands in the north-central USA over three consecutive years. In 1998, 62 sites yielded 389 metamorphic frogs, nine (2.3%) of which had skeletal or eye malformations. A subset of the original sites was surveyed in the following 2 yr. In 1999, 1,085 metamorphic frogs were collected from 36 sites and 17 (1.6%) had skeletal or eye malformations, while in 2000, examination of 1,131 metamorphs yielded 16 (1.4%) with skeletal or eye malformations. Hindlimb malformations predominated in all three years, but other abnormalities, involving forelimb, eye, and pelvis were also found. Northern leopard frogs (Rana pipiens) constituted the majority of collected metamorphs as well as most of the malformed specimens. However, malformations were also noted in mink frogs (R. septentrionalis), wood frogs (R. sylvatica), and gray tree frogs (Hyla spp.). The malformed specimens were found in clustered sites in all three years but the cluster locations were not the same in any year. The malformation rates reported here are higher than the 0.3% rate determined for metamorphic frogs collected from similar sites in Minnesota in the 1960s, and thus, appear to represent an elevation of an earlier baseline malformation rate.
Dudeck, Oliver Bulla, Karsten; Wieners, Gero; Ruehl, Ricarda; Ulrich, Gerd; Amthauer, Holger; Ricke, Jens; Pech, Maciej
2011-02-15
The purpose of this study was to compare embolization of the gastroduodenal artery (GDA) using standard pushable coils with the Interlock detachable coil (IDC), a novel fibered mechanically detachable long microcoil, in patients scheduled for selective internal radiotherapy (SIRT). Fifty patients (31 male and 19 female; median age 66.6 ± 8.1 years) were prospectively randomized for embolization using either standard coils or IDCs. Procedure time, radiation dose, number of embolization devices, complications, and durability of vessel occlusion at follow-up angiography were recorded. The procedures differed significantly in time (14:32 ± 5:56 min for standard coils vs. 2:13 ± 1:04 min for IDCs; p < 0.001); radiation dose for coil deployment (2479 ± 1237 cGy·cm^2 for standard coils vs. 275 ± 268 cGy·cm^2 for IDCs; p < 0.001); and vessel occlusion (17:18 ± 6:39 min for standard coils vs. 11:19 ± 7:54 min for IDCs; p = 0.002). A mean of 6.2 ± 1.8 coils (n = 27) was used in the standard coil group, and 1.3 ± 0.9 coils (p < 0.0001) in the IDC group (n = 23), because additional pushable coils were required to achieve GDA occlusion in 4 patients. In 2 patients, the IDC could not be deployed through a Soft-VU catheter. One standard coil dislodged in the hepatic artery and was retrieved. Vessel reperfusion was noted in only 1 patient in the standard coil group. Controlled embolization of the GDA with fibered IDCs was achieved more rapidly than with pushable coils. However, vessel occlusion may not be obtained using a single device only, and the use of sharply angled guiding catheters hampered coil pushability.
Schenke, Frederike; Federlin, Marianne; Hiller, Karl-Anton; Moder, Daniel; Schmalz, Gottfried
2012-04-01
Among the materials used for luting indirect restorations, growing interest has been directed towards the use of self-adhesive resin cements. The aim of this prospective randomized controlled clinical trial was to evaluate the clinical performance of the self-adhesive resin cement RelyX Unicem (RXU) for luting partial ceramic crowns (PCCs). In addition, the influence of selective enamel etching prior to luting (RXU+E) was assessed. Two-year results are reported. Thirty-four patients (68 PCCs) had originally received the intended treatment at baseline (BL). Twenty-nine patients (14 male, 15 female) with a total of 58 PCCs participated in the 2-year recall. In each patient, one PCC had been placed with RXU, one PCC with RXU+E. Restorations were evaluated at BL and 24 months after placement using modified United States Public Health Service criteria for postoperative hypersensitivity, anatomic form, marginal adaptation, marginal discoloration, surface texture and recurrent caries. Additionally, the "percentage failure" within the 2-year recall period for all restorations (n = 68) was calculated according to ADA Program Guidelines. Target value for acceptability of each procedure was <5% failure within 24 m. For statistical analysis of the data, the chi-square test was applied (α = 0.05). The median patient age was 41 years (24-59 years). Median PBI was 8% (5-10%). Twenty-two RXU PCCs were placed in molars, seven in premolars. Twenty-one RXU+E PCCs were placed in molars, eight in premolars. Statistically significant changes were observed for marginal adaptation (MA) and marginal discoloration (MD) between BL and 2 years but not between the two groups (RXU, RXU+E). Percentage of alfa values at BL for MA (RXU, 97% and RXU+E, 100%) and for MD (RXU, 97% and RXU+E, 97%) decreased to RXU, 14% and RXU+E, 28% for MA and to RXU, 50% and RXU+E, 59% for MD after 24 months. Within the observation period, three failures were recorded with RXU (5.1% failure), one
Mullen, Lewis; Stamp, Robin C; Fox, Peter; Jones, Eric; Ngo, Chau; Sutcliffe, Christopher J
2010-01-01
In this study, the unit cell approach, which has previously been demonstrated as a method of manufacturing porous components suitable for use as orthopedic implants, has been further developed to include randomized structures. These random structures may aid the bone in-growth process because of their similarity in appearance to trabecular bone and are shown to carry legacy properties that can be related back to the original unit cell on which they are ultimately based. In addition to this, it has been shown that randomization improves the mechanical properties of regular unit cell structures, resulting in anticipated improvements to both implant functionality and longevity. The study also evaluates the effect that a post process sinter cycle has on the components, outlines the improved mechanical properties that are attainable, and also the changes in both the macro and microstructure that occur.
Generating random density matrices
NASA Astrophysics Data System (ADS)
Życzkowski, Karol; Penson, Karol A.; Nechita, Ion; Collins, Benoît
2011-06-01
We study various methods to generate ensembles of random density matrices of a fixed size N, obtained by partial trace of pure states on composite systems. Structured ensembles of random pure states, invariant with respect to local unitary transformations, are introduced. To analyze statistical properties of quantum entanglement in bipartite systems we analyze the distribution of Schmidt coefficients of random pure states. Such a distribution is derived in the case of a superposition of k random maximally entangled states. For another ensemble, obtained by performing selective measurements in a maximally entangled basis on a multipartite system, we show that this distribution is given by the Fuss-Catalan law, and we find the average entanglement entropy. A more general class of structured ensembles is proposed, containing also the case of Bures; it forms an extension of the standard ensemble of structureless random pure states, described asymptotically, as N → ∞, by the Marchenko-Pastur distribution.
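A common concrete instance of this construction (a sketch of the simplest, structureless case, not of the structured ensembles introduced in the paper) draws a Ginibre matrix encoding a random pure state on an N·k composite system and traces out the k-dimensional environment; for k = N the resulting spectra asymptotically follow the Marchenko-Pastur distribution. Function names here are illustrative:

```python
import numpy as np

def induced_random_density_matrix(n, k, seed=None):
    """Random density matrix of size n obtained by partial trace:
    a complex Gaussian (Ginibre) matrix G of shape (n, k) encodes a
    random pure state on C^n (x) C^k, and G G^dagger is its reduced
    state on the first subsystem, up to normalization."""
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
    rho = g @ g.conj().T          # partial trace over the environment
    return rho / np.trace(rho)    # normalize to unit trace

rho = induced_random_density_matrix(4, 4, seed=0)
# The result is a valid quantum state: Hermitian, unit trace, positive.
assert np.allclose(rho, rho.conj().T)
assert np.isclose(np.trace(rho).real, 1.0)
assert np.linalg.eigvalsh(rho).min() >= -1e-12
```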
Randomization and sampling issues
Geissler, P.H.
1996-01-01
The need for randomly selected routes and other sampling issues have been debated by the Amphibian electronic discussion group. Many excellent comments have been made, pro and con, but we have not reached consensus yet. This paper brings those comments together and attempts a synthesis. I hope that the resulting discussion will bring us closer to a consensus.
Ouyang, Fubing; Chen, Yicong; Zhao, Yuhui; Dang, Ge; Liang, Jiahui; Zeng, Jinsheng
2016-01-01
Background and Purpose Recent randomized controlled trials have demonstrated consistent effectiveness of endovascular treatment (EVT) for acute ischemic stroke, leading to update on stroke management guidelines. We conducted this meta-analysis to assess the efficacy and safety of EVT overall and in subgroups stratified by age, baseline stroke severity, brain imaging feature, and anesthetic type. Methods Published randomized controlled trials comparing EVT and standard medical care alone were evaluated. The measured outcomes were 90-day functional independence (modified Rankin Scale ≤2), all-cause mortality, and symptomatic intracranial hemorrhage. Results Nine trials enrolling 2476 patients were included (1338 EVT, 1138 standard medical care alone). For patients with large vessel occlusions confirmed by noninvasive vessel imaging, EVT yielded improved functional outcome (pooled odds ratio [OR], 2.02; 95% confidence interval [CI], 1.64–2.50), lower mortality (OR, 0.75; 95% CI, 0.58–0.97), and similar symptomatic intracranial hemorrhage rate (OR, 1.12; 95% CI, 0.72–1.76) compared with standard medical care. A higher proportion of functional independence was seen in patients with terminus intracranial artery occlusion (±M1) (OR, 3.16; 95% CI, 1.64–6.06), baseline Alberta Stroke Program Early CT score of 8–10 (OR, 2.11; 95% CI, 1.25–3.57) and age ≤70 years (OR, 3.01; 95% CI, 1.73–5.24). EVT performed under conscious sedation had better functional outcomes (OR, 2.08; 95% CI, 1.47–2.96) without increased risk of symptomatic intracranial hemorrhage or short-term mortality compared with general anesthesia. Conclusions Vessel-imaging proven large vessel occlusion, a favorable scan, and younger age are useful predictors to identify anterior circulation stroke patients who may benefit from EVT. Conscious sedation is feasible and safe in EVT based on available data. However, firm conclusion on the choice of anesthetic types should be drawn from more
Downey, Lois; Engelberg, Ruth A; Standish, Leanna J; Kozak, Leila; Lafferty, William E
2009-01-01
Improving end-of-life care is a priority in the United States, but assigning priorities for standard care services requires evaluations using appropriate study design and appropriate outcome indicators. A recent randomized controlled trial with terminally ill patients produced no evidence of benefit from massage or guided meditation, when evaluated with measures of global quality of life or pain distress over the course of patient participation. However, reanalysis using a more targeted outcome, surrogates' assessment of patients' benefit from the study intervention, suggested significant gains from massage-the treatment patients gave their highest preassignment preference ratings. The authors conclude that adding a menu of complementary therapies as part of standard end-of-life care may yield significant benefit, that patient preference is an important predictor of outcome, and that modifications in trial design may be appropriate for end-of-life studies.
May, Damon H.; Navarro, Sandi L.; Ruczinski, Ingo; Hogan, Jason; Ogata, Yuko; Schwarz, Yvonne; Levy, Lisa; Holzman, Ted; McIntosh, Martin W.; Lampe, Johanna W.
2013-01-01
Metabolomic profiles were used to characterize the effects of consuming a high-phytochemical diet compared to a diet devoid of fruits and vegetables in a randomized trial and cross-sectional study. In the trial, 8 h fasting urine from healthy men (n=5) and women (n=5) was collected after a 2-week randomized, controlled trial of 2 diet periods: a diet rich in cruciferous vegetables, citrus and soy (F&V), and a fruit- and vegetable-free (basal) diet. Among the ions found to differentiate the diets, 176 were putatively annotated with compound identifications, with 46 supported by MS/MS fragment evidence. Metabolites more abundant in the F&V diet included markers of dietary intervention (e.g., crucifers, citrus and soy), fatty acids and niacin metabolites. Ions more abundant in the basal diet included riboflavin, several acylcarnitines, and amino acid metabolites. In the cross-sectional study, we compared participants based on tertiles of crucifers, citrus and soy from 3 d food records (3DFR; n=36) and food frequency questionnaires (FFQ; n=57); intake was separately divided into tertiles of total fruit and vegetable intake for FFQ. As a group, ions individually differential between the experimental diets differentiated the observational study participants. However, only 4 ions were significant individually, differentiating the third vs. first tertile of crucifer, citrus and soy intake based on 3FDR. One of these was putatively annotated: proline betaine, a marker of citrus consumption. There were no ions significantly distinguishing tertiles by FFQ. Metabolomics assessment of controlled dietary interventions provides a more accurate and stronger characterization of diet than observational data. PMID:23657156
Pitton, Michael B.; Kloeckner, Roman; Ruckes, Christian; Wirth, Gesine M.; Eichhorn, Waltraud; Wörns, Marcus A.; Weinmann, Arndt; Schreckenberger, Mathias; Galle, Peter R.; Otto, Gerd; Dueber, Christoph
2015-04-15
Purpose: To prospectively compare SIRT and DEB-TACE for treating hepatocellular carcinoma (HCC). Methods: From 04/2010–07/2012, 24 patients with histologically proven unresectable N0, M0 HCCs were randomized 1:1 to receive SIRT or DEB-TACE. SIRT could be repeated once in case of recurrence, while TACE was repeated every 6 weeks until no viable tumor tissue was detected by MRI or contraindications prohibited further treatment. Patients were followed up by MRI every 3 months; the final evaluation was 05/2013. Results: Both groups were comparable in demographics (SIRT: 8 males/4 females, mean age 72 ± 7 years; TACE: 10 males/2 females, mean age 71 ± 9 years), initial tumor load (1 patient ≥25% in each group), and BCLC (Barcelona Clinic Liver Cancer) stage (SIRT: 12×B; TACE: 1×A, 11×B). Median progression-free survival (PFS) was 180 days for SIRT versus 216 days for TACE patients (p = 0.6193), with a median TTP of 371 days versus 336 days, respectively (p = 0.5764). Median OS was 592 days for SIRT versus 788 days for TACE patients (p = 0.9271). Seven patients died in each group. Causes of death were liver failure (n = 4, SIRT group), tumor progression (n = 4, TACE group), cardiovascular events, and inconclusive (n = 1 in each group). Conclusions: No significant differences were found in median PFS, OS, and TTP. The lower rate of tumor progression in the SIRT group was nullified by a greater incidence of liver failure. This pilot study is the first prospective randomized trial comparing SIRT and TACE for treating HCC, and its results can be used for sample size calculations of future studies.
Satoh, Toyomi; Tsuda, Hitoshi; Kanato, Keisuke; Nakamura, Kenichi; Shibata, Taro; Takano, Masashi; Baba, Tsukasa; Ishikawa, Mitsuya; Ushijima, Kimio; Yaegashi, Nobuo; Yoshikawa, Hiroyuki
2015-06-01
Fertility-sparing treatment has been accepted as a standard treatment for epithelial ovarian cancer in stage IA non-clear cell histology grade 1/grade 2. In order to expand an indication of fertility-sparing treatment, we have started a non-randomized confirmatory trial for stage IA clear cell histology and stage IC unilateral non-clear cell histology grade 1/grade 2. The protocol-defined fertility-sparing surgery is optimal staging laparotomy including unilateral salpingo-oophorectomy, omentectomy, peritoneal cytology and pelvic and para-aortic lymph node dissection or biopsy. After fertility-sparing surgery, four to six cycles of adjuvant chemotherapy with paclitaxel and carboplatin are administered. We plan to enroll 250 patients with an indication of fertility-sparing surgery, and then the primary analysis is to be conducted for 63 operated patients with pathologically confirmed stage IA clear cell histology and stage IC unilateral non-clear cell histology grade 1/grade 2. The primary endpoint is 5-year overall survival. Secondary endpoints are other survival endpoints and factors related to reproduction. This trial has been registered at the UMIN Clinical Trials Registry as UMIN000013380. PMID:26059697
NASA Technical Reports Server (NTRS)
Messaro, Semma; Harrison, Phillip
2010-01-01
Ares I zonal random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent, and reentry. Random vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelope all the applicable environments where each component was located. Applicable Ares I vehicle drawings and design information needed to be assessed to determine the location(s) for each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel spreadsheet software and documenting them in a report using Microsoft Word word-processing software. Conclusion: Random vibration liftoff, ascent, and green run design and test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelope zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor, it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.
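The enveloping step itself is simple to sketch: at each frequency, take the maximum level across every zonal environment that applies to a component. The frequency grid and PSD values below are hypothetical placeholders, not actual Ares I environments:

```python
import numpy as np

# Hypothetical acceleration PSDs (g^2/Hz) for three flight phases,
# sampled on a common frequency grid (Hz).
freq = np.array([20.0, 50.0, 100.0, 500.0, 2000.0])
zone_psds = np.array([
    [0.01, 0.04, 0.08, 0.08, 0.02],   # liftoff
    [0.02, 0.03, 0.10, 0.06, 0.01],   # ascent
    [0.01, 0.05, 0.06, 0.09, 0.03],   # reentry
])

# Envelope: the pointwise maximum over all applicable environments,
# so a single test criterion bounds every phase.
envelope = zone_psds.max(axis=0)
assert np.allclose(envelope, [0.02, 0.05, 0.10, 0.09, 0.03])
```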
Pech, Maciej; Kraetsch, Annett; Wieners, Gero; Redlich, Ulf; Gaffke, Gunnar; Ricke, Jens; Dudeck, Oliver
2009-05-15
The Amplatzer Vascular Plug II (AVP II) is a novel device for transcatheter vessel occlusion, for which only limited comparative data exist. Embolotherapy of the gastroduodenal artery (GDA) is essential before internal radiotherapy (SIRT) in order to prevent radiation-induced peptic ulcerations due to migration of yttrium-90 microspheres. The purpose of this study was to compare the vascular anatomical limitations, procedure time, effectiveness, and safety of embolization of the GDA with coils versus the AVP II. Fifty patients stratified for SIRT were prospectively randomized for embolization of the GDA with either coils or the AVP II. The angle between the aorta and the celiac trunk, diameter of the GDA, fluoroscopy time and total time for embolization, number of embolization devices, complications, and durability of vessel occlusion at follow-up angiography for SIRT were recorded. A t-test was used for statistical analysis. Embolizations with either coils or the AVP II were technically feasible in all but two patients scheduled for embolization of the GDA with the AVP II. In both cases the plug could not be positioned due to the small celiac trunk outlet angles of 17° and 21°. The mean diameter of the GDA was 3.7 mm (range, 2.2-4.8 mm) for both groups. The procedures differed significantly in fluoroscopy time (7.8 min for coils vs. 2.6 min for the AVP II; P < 0.001) and embolization time (23.1 min for coils vs. 8.8 min for the AVP II; P < 0.001). A mean of 6.0 ± 3.2 coils were used for GDA embolization, while no more than one AVP II was needed for successful vessel occlusion (P < 0.001). One coil migration occurred during coil embolization, whereas no procedural complication was encountered with the use of the AVP II. Vessel reperfusion was noted in only one patient, in whom coil embolization was performed. In conclusion, embolization of the GDA with the AVP II is safe, easy, rapid, and highly effective; only an extremely sharp-angled celiac trunk
NASA Astrophysics Data System (ADS)
Tapiero, Charles S.; Vallois, Pierre
2016-11-01
The premise of this paper is that a fractional probability distribution is based on fractional operators and on the fractional (Hurst) index used, which alters the classical setting of random variables. For example, a random variable defined by its density function might not have a fractional density function defined in the conventional sense. Practically, this implies that a distribution's granularity defined by a fractional kernel may have properties that differ according to the fractional index used and the fractional calculus applied to define it. The purpose of this paper is to consider an application of fractional calculus to define the fractional density function of a random variable. In addition, we provide and prove a number of results defining the functional forms of these distributions as well as their existence. In particular, we define fractional probability distributions for increasing and decreasing functions that are right continuous. Examples are used to motivate the usefulness of a statistical approach to fractional calculus and its application to economic and financial problems. In conclusion, this paper is a preliminary attempt to construct statistical fractional models; given the breadth and extent of such problems, it may be considered an initial step toward doing so.
Navarro, Sandi L.; Chen, Yu; Li, Lin; Li, Shuying S.; Chang, Jyh-Lurn; Schwarz, Yvonne; King, Irena B.; Potter, John D.; Bigler, Jeannette
2011-01-01
Acetaminophen (APAP) glucuronidation is thought to occur mainly by UDP-glucuronosyltransferases (UGT) in the UGT1A family. Interindividual variation in APAP glucuronidation is attributed in part to polymorphisms in UGT1As. However, evidence suggests that UGT2B15 may also be important. We evaluated, in a controlled feeding trial, whether APAP conjugation differed by UGT1A6 and UGT2B15 genotypes and whether supplementation of known dietary inducers of UGT (crucifers, soy, and citrus) modulated APAP glucuronidation compared with a diet devoid of fruits and vegetables (F&V). Healthy adults (n = 66) received 1000 mg of APAP orally on days 7 and 14 of each 2-week feeding period and collected saliva and urine over 12 h. Urinary recovery of the percentage of the APAP dose as free APAP was higher (P = 0.02), and the percentage as APAP glucuronide (APAPG) was lower (P = 0.004) in women. The percentage of APAP was higher among UGT1A6*1/*1 genotypes, relative to *1/*2 and *2/*2 genotypes (P = 0.045). For UGT2B15, the percentage of APAPG decreased (P < 0.0001) and that of APAP sulfate increased (P = 0.002) in an allelic dose-dependent manner across genotypes from *1/*1 to *2/*2. There was a significant diet × UGT2B15 genotype interaction for the APAPG ratio (APAPG/total metabolites × 100) (P = 0.03), with *1/*1 genotypes having an approximately 2-fold higher F&V to basal diet difference in response compared with *1/*2 and *2/*2 genotypes. Salivary APAP maximum concentration (Cmax) was significantly higher in women (P = 0.0003), with F&V (P = 0.003), and among UGT1A6*2/*2 and UGT2B15*1/*2 genotypes (P = 0.02 and 0.002, respectively). APAP half-life was longer in UGT2B15*2/*2 genotypes with F&V (P = 0.009). APAP glucuronidation was significantly influenced by the UGT2B15*2 polymorphism, supporting a role in vivo for UGT2B15 in APAP glucuronidation, whereas the contribution of UGT1A6*2 was modest. Selected F&V known to affect UGT activity led to greater glucuronidation and less
Is random access memory random?
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
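A small illustration of the locality principle (a sketch; in pure Python the interpreter overhead masks the timing gap that a compiled language would show): two traversals of the same row-major array compute the same sum while touching memory in very different patterns.

```python
import numpy as np

n = 200
a = np.arange(n * n, dtype=np.float64).reshape(n, n)  # row-major (C order)

# Row-order traversal: consecutive addresses, served almost entirely
# from the cache once a block has been fetched.
row_order = sum(a[i, j] for i in range(n) for j in range(n))

# Column-order traversal: a stride of n elements between accesses;
# in a compiled language on a large array this pattern misses the
# cache repeatedly.
col_order = sum(a[i, j] for j in range(n) for i in range(n))

assert row_order == col_order  # same result, different memory traffic
```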
NASA Astrophysics Data System (ADS)
Leonetti, Marco; López, Cefe
2012-06-01
A random laser is formed by a haphazard assembly of nondescript optical scatterers with optical gain. Multiple light scattering replaces the optical cavity of traditional lasers, and the interplay between gain, scattering and size determines its unique properties. Random lasers studied until recently consisted of irregularly shaped or polydisperse scatterers, with some average scattering strength constant across the gain frequency band. Photonic glasses can sustain scattering resonances that can be placed in the gain window, since they are formed by monodisperse spheres [1]. The unique resonant scattering of this novel material allows controlling the lasing color via the diameter of the particles and their refractive index. Thus a random laser with an a priori set lasing peak can be designed [2]. A special pumping scheme that enables selecting the number of activated modes in a random laser permits preparing RLs in two distinct regimes by controlling directionality through the shape of the pump [3]. When pumping is essentially unidirectional, few (barely interacting) modes are turned on, which show as sharp, uncorrelated peaks in the spectrum. By increasing the angular span of the pump beams, many resonances intervene, generating a smooth emission spectrum with a high degree of correlation and a shorter lifetime. These are signs of a phase-locking transition, in which phases are clamped together so that modes oscillate synchronously.
The moral importance of selecting people randomly.
Peterson, Martin
2008-07-01
This article discusses some ethical principles for distributing pandemic influenza vaccine and other indivisible goods. I argue that a number of principles for distributing pandemic influenza vaccine recently adopted by several national governments are morally unacceptable because they put too much emphasis on utilitarian considerations, such as the ability of the individual to contribute to society. Instead, it would be better to distribute vaccine by setting up a lottery. The argument for this view is based on a purely consequentialist account of morality; i.e. an action is right if and only if its outcome is optimal. However, unlike utilitarians I do not believe that alternatives should be ranked strictly according to the amount of happiness or preference satisfaction they bring about. Even a mere chance to get some vaccine matters morally, even if it is never realized. PMID:18445094
Can randomization be informative?
NASA Astrophysics Data System (ADS)
Pereira, Carlos A. B.; Campos, Thiago F.; Silva, Gustavo M.; Wechsler, Sergio
2012-10-01
In this paper the Pair of Siblings Paradox introduced by Pereira [1] is extended by considering more than two children and more than one child observed for gender. We follow the same lines as Wechsler et al. [2], who generalize the three prisoners' dilemma introduced by Gardner [3]. This paper's conjecture is that the Pair of Siblings and the Three Prisoners dilemma are dual paradoxes. Looking at the possible likelihoods, the sure (randomized) selection is noninformative (informative) for the former, while the opposite holds for the latter. This situation is maintained for the generalizations. A noninformative likelihood here means that prior and posterior are equal.
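The two-children base case can be checked by enumeration. A sketch, assuming the standard formulation with a uniform prior over the four gender sequences: "sure" selection reports a boy whenever the family has one, while "randomized" selection observes one child picked at random.

```python
from fractions import Fraction

families = ["BB", "BG", "GB", "GG"]
prior = {f: Fraction(1, 4) for f in families}

def posterior(likelihood):
    """Bayes update of the uniform prior given P(report 'boy' | family)."""
    joint = {f: prior[f] * likelihood(f) for f in families}
    total = sum(joint.values())
    return {f: p / total for f, p in joint.items()}

# Sure selection: a boy is reported whenever the family has at least one.
# The likelihood is flat over the surviving states (noninformative).
sure = posterior(lambda f: Fraction(1) if "B" in f else Fraction(0))

# Randomized selection: one child is picked uniformly at random and turns
# out to be a boy; the likelihood is the fraction of boys, which shifts
# posterior mass toward BB (informative).
randomized = posterior(lambda f: Fraction(f.count("B"), 2))

assert sure["BB"] == Fraction(1, 3)        # prior ratios preserved on {BB, BG, GB}
assert randomized["BB"] == Fraction(1, 2)  # posterior differs from restricted prior
```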
Nakigozi, Gertrude; Makumbi, Fredrick E.; Bwanika, John Baptist; Atuyambe, Lynn; Reynolds, Steven J.; Kigozi, Godfrey; Nalugoda, Fred; Chang, Larry W.; Kiggundu, Valerian; Serwadda, David; Wawer, Maria J.; Gray, Ronald H.; Kamya, Moses R.
2015-01-01
Background Data are limited on effects of household or community support persons (“care buddies”) on enrolment into and adherence to pre-antiretroviral HIV care. We assessed the impact of care buddies on adherence to HIV clinic appointments, HIV progression and conduct of daily life among pre-ART HIV-infected individuals in Rakai, Uganda. Methods 1209 HIV infected pre-ART patients aged ≥15 years were randomized to standard of care (SOC) (n = 604) or patient-selected care buddy (PSCB) (n= 605) and followed at 6 and 12 months. Outcomes were adherence to clinic visits; HIV disease progression and self-reported conduct of daily life. Incidence and prevalence rate ratios and 95% confidence intervals (95%CI) were used to assess outcomes in the intent-to-treat and as-treated analyses. Results Baseline characteristics were comparable. In the ITT analysis both arms were comparable with respect to adherence to CD4 monitoring visits (adjPRR 0.98, 95%CI 0.93-1.04, p=0.529) and HIV progression (adjPRR=1.00, 95%CI 0.77-1.31, p=0.946). Good conduct of daily life was significantly higher in the PSCB than the SOC arm (adjPRR 1.08, 95%CI 1.03-1.13, p=0.001). More men (61%) compared to women (30%) selected spouses/partners as buddies (p<0.0001.) 22% of PSCB arm participants discontinued use of buddies. Conclusion In pre-ART persons, having care buddies improved the conduct of daily life of the HIV infected patients but had no effect on HIV disease progression and only limited effect on clinic appointment adherence. PMID:26039929
Marofi, Maryam; Sirousfard, Motahareh; Moeini, Mahin; Ghanadi, Alireza
2015-01-01
Background: Pain is the common complication after a surgery. The aim of this study was to evaluate the effect of aromatherapy with Rosa damascena Mill. on the postoperative pain in children. Materials and Methods: In a double-blind, placebo-controlled clinical trial, we selected 64 children of 3–6 years of age through convenient sampling and divided them randomly into two groups. Patients in group A were given inhalation aromatherapy with R. damascena Mill., and in group B, the patients were given almond oil as a placebo. Inhalation aromatherapy was used at the first time of subjects’ arrival to the ward and then at 3, 6, 9, and 12 h afterward. Common palliative treatments to relieve pain were used in both groups. Thirty minutes after aromatherapy, the postoperative pain in children was evaluated with the Toddler Preschooler Postoperative Pain Scale (TPPPS). Data were statistically analyzed using Chi-square test, one-way analysis of variance (ANOVA), and repeated measures ANOVA. Results: There was no significant difference in pain scores at the first time of subjects’ arrival to the ward (before receiving any aromatherapy or palliative care) between the two groups. After each time of aromatherapy and at the end of treatment, the pain score was significantly reduced in the aromatherapy group with R. damascena Mill. compared to the placebo group. Conclusions: According to our results, aromatherapy with R. damascena Mill. can be used in postoperative pain in children, together with other common treatments without any significant side effects. PMID:25878704
Ellis, P; Meldrum, R
2002-02-01
Two hundred thirty-six randomly selected food and milk samples were examined to obtain aerobic colony counts by two dry sheet media methods and a standard Public Health Laboratory Service spiral plate method. Results for 40 samples were outside the limits of detection for one or more of the tested methods and were not considered. (The limits of detection were 200 to 1 × 10^8 CFU/ml for the spiral plate method and 100 to 3 × 10^6 CFU/ml for the dry sheet media methods.) The remaining 196 sets of results were analyzed further. When the results from the three methods were compared, correlation coefficients were all >0.80, and slopes and intercepts were close to 1.0 and 0.0, respectively. Mean log values and standard deviations were very similar for all three methods. The results were evaluated according to published UK guidelines for ready-to-eat foods sampled at the point of sale, which include a quality acceptability assessment that is based on aerobic colony counts. Eighty-six percent of the comparable results gave the same verdict with regard to acceptability according to the aerobic colony count guidelines. Both dry sheet media methods were comparable to the spiral plate method and can be recommended for the examination of food.
Moebus, Susanne; Hanisch, Jens Ulrich; Neuhäuser, Markus; Aidelsburger, Pamela; Wasem, Jürgen; Jöckel, Karl-Heinz
2006-01-01
Objective: Metabolic Syndrome (MetSyn) describes a cluster of metabolic disorders and is considered a risk factor for development of cardiovascular disease. Although a high prevalence is commonly assumed in Germany data about the degree of its occurrence in the population and in subgroups are still missing. The aim of this study was to assess the prevalence of the MetSyn according to the NCEP ATP-III (National Cholesterol Education Program Adult Treatment Panel III) criteria in persons aged ≥18 years attending a general practitioner in Germany. Here we describe in detail the methods used and the feasibility of determining the MetSyn in a primary health care setting. Research design and methods: The German-wide cross-sectional study was performed during two weeks in October 2005. Blood samples were analyzed in a central laboratory. Waist circumference and blood pressure were assessed, data on smoking, life style, fasting status, socio-demographic characteristics and core information from non-participants collected. Quality control procedures included telephone-monitoring and random on-site visits. In order to achieve a maximal number of fasting blood samples with a minimal need for follow-up appointments a stepwise approach was developed. Basic descriptive statistics were calculated, the Taylor expansion method used to estimate standard errors needed for calculation of confidence intervals for clustered observations. Results: In total, 1511 randomly selected general practices from 397 out of 438 German cities and administrative districts enrolled 35,869 patients (age range: 18-99, women 61.1%). More than 50,000 blood samples were taken. Fasting blood samples were available for 49% of the participants. Of the participating patients 99.3% returned questionnaires to the GP, only 12% were not filled out completely. The overall prevalence of the MetSyn (NCEP/ATP III 2001) was found to be 19.8%, with men showing higher prevalence rates than women (22.7% respective 18
Pseudo-Random Number Generators
NASA Technical Reports Server (NTRS)
Howell, L. W.; Rheinfurth, M. H.
1984-01-01
Package features a comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever the systems studied are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having certain specified distribution characteristics are an integral part of such simulations. Package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.
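A typical generator in such a package turns uniform pseudorandom numbers into draws from a specified distribution. A minimal sketch using the inverse-transform method for an exponential distribution (the function name and parameters are illustrative, not taken from the package itself):

```python
import math
import random

def exponential(rate, uniform=random.random):
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / rate follows an Exponential(rate) distribution."""
    return -math.log(1.0 - uniform()) / rate

# Monte Carlo check: the sample mean approaches the theoretical
# mean 1/rate as the number of draws grows.
random.seed(42)
sample = [exponential(2.0) for _ in range(100_000)]
mean = sum(sample) / len(sample)
assert abs(mean - 0.5) < 0.01
```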
How to Do Random Allocation (Randomization)
Shin, Wonshik
2014-01-01
Purpose To explain the concept and procedure of random allocation as used in a randomized controlled study. Methods We explain the general concept of random allocation and demonstrate how to perform the procedure easily and how to report it in a paper. PMID:24605197
Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore
2011-12-01
Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation sequence, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology and presents the main randomization schemes applicable to clinical trials in orthodontics.
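As a sketch of one popular restricted scheme, permuted-block randomization keeps the arms balanced throughout enrolment while the within-block order stays unpredictable (the block size and seed here are arbitrary choices for illustration):

```python
import random

def block_randomization(n_participants, arms=("A", "B"), block_size=4, seed=2024):
    """Permuted-block randomization list: every block contains each
    arm equally often, so group sizes never differ by more than half
    a block at any point during enrolment."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    per_block = block_size // len(arms)
    allocation = []
    while len(allocation) < n_participants:
        block = [arm for arm in arms for _ in range(per_block)]
        rng.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

alloc = block_randomization(20)
assert alloc.count("A") == alloc.count("B") == 10              # balanced overall
assert all(alloc[i:i + 4].count("A") == 2 for i in range(0, 20, 4))  # per block
```

In practice the list would be prepared by someone independent of enrolment and concealed from investigators, which is the allocation-concealment step the article describes.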
Random broadcast on random geometric graphs
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
2009-01-01
In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs in two regimes that hold with high probability: (i) the RGG is connected, and (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
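The push model defined above is easy to simulate on any adjacency list. A minimal sketch (run here on a small cycle graph for concreteness, not on an RGG):

```python
import random

def push_broadcast_rounds(adjacency, start=0, seed=1):
    """Simulate the classic push protocol: in each round, every informed
    node picks a uniformly random neighbor and informs it. Returns the
    number of rounds until all nodes are informed (assumes connectivity)."""
    rng = random.Random(seed)
    informed = {start}
    rounds = 0
    while len(informed) < len(adjacency):
        newly = set()
        for node in informed:
            newly.add(rng.choice(adjacency[node]))  # one push per informed node
        informed |= newly
        rounds += 1
    return rounds

# A cycle on 8 nodes: node i is adjacent to i-1 and i+1 (mod 8).
cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
rounds = push_broadcast_rounds(cycle)
```

On a cycle the informed set grows by at most two nodes per round, so at least ⌈(n-1)/2⌉ rounds are needed, consistent with the Θ(diam) behavior above.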
Quantumness, Randomness and Computability
NASA Astrophysics Data System (ADS)
Solis, Aldo; Hirsch, Jorge G.
2015-06-01
Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.
Nonvolatile random access memory
NASA Technical Reports Server (NTRS)
Wu, Jiin-Chuan (Inventor); Stadler, Henry L. (Inventor); Katti, Romney R. (Inventor)
1994-01-01
A nonvolatile magnetic random access memory can be achieved by an array of magnet-Hall effect (M-H) elements. The storage function is realized with a rectangular thin-film ferromagnetic material having an in-plane, uniaxial anisotropy and in-plane bipolar remanent magnetization states. The thin-film magnetic element is magnetized by a local applied field, whose direction is used to form either a 0 or 1 state. The element remains in the 0 or 1 state until a switching field is applied to change its state. The stored information is detected by a Hall-effect sensor which senses the fringing field from the magnetic storage element. The circuit design for addressing each cell includes transistor switches for providing a current of selected polarity to store a binary digit through a separate conductor overlying the magnetic element of the cell. To read out a stored binary digit, transistor switches are employed to provide a current through a row of Hall-effect sensors connected in series and enabling a differential voltage amplifier connected to all Hall-effect sensors of a column in series. To avoid read-out voltage errors due to shunt currents through resistive loads of the Hall-effect sensors of other cells in the same column, at least one transistor switch is provided between every pair of adjacent cells in every row; these switches are not turned on except in the row of the selected cell.
Directed random walk with random restarts: The Sisyphus random walk
NASA Astrophysics Data System (ADS)
Montero, Miquel; Villarroel, Javier
2016-09-01
In this paper we consider a particular version of the random walk with restarts: random reset events which suddenly bring the system to the starting value. We analyze its relevant statistical properties, like the transition probability, and show how an equilibrium state appears. Formulas for the first-passage time, high-water marks, and other extreme statistics are also derived; we consider counting problems naturally associated with the system. Finally we indicate feasible generalizations useful for interpreting different physical effects.
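A minimal simulation of such a walk, assuming unit forward steps and a constant reset probability per step (an illustrative choice, not the paper's exact parametrization):

```python
import random

def sisyphus_walk(steps, reset_prob=0.1, seed=3):
    """Directed random walk with random restarts: at each step the walker
    either resets to the origin (with probability reset_prob) or moves one
    unit forward. Returns the full trajectory."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        if rng.random() < reset_prob:
            x = 0          # reset event: back to the starting value
        else:
            x += 1         # deterministic forward motion between resets
        path.append(x)
    return path

path = sisyphus_walk(1000)
# In equilibrium the position is geometrically distributed; for
# reset_prob = 0.1 the stationary mean is (1 - 0.1)/0.1 = 9.
```

The equilibrium state mentioned in the abstract appears because resets prevent the walker from escaping to infinity.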
Image segmentation using random features
NASA Astrophysics Data System (ADS)
Bull, Geoff; Gao, Junbin; Antolovich, Michael
2014-01-01
This paper presents a novel algorithm for selecting random features via compressed sensing to improve the performance of Normalized Cuts in image segmentation. Normalized Cuts is a clustering algorithm that has been widely applied to segmenting images, using features such as brightness, intervening contours and Gabor filter responses. Some drawbacks of Normalized Cuts are that computation times and memory usage can be excessive, and the obtained segmentations are often poor. This paper addresses the need to improve the processing time of Normalized Cuts while improving the segmentations. A significant proportion of the time in calculating Normalized Cuts is spent computing an affinity matrix. A new algorithm has been developed that selects random features using compressed sensing techniques to reduce the computation needed for the affinity matrix. The new algorithm, when compared to the standard implementation of Normalized Cuts for segmenting images from the BSDS500, produces better segmentations in significantly less time.
NASA Technical Reports Server (NTRS)
Erdmann, Michael
1992-01-01
This paper investigates the role of randomization in the solution of robot manipulation tasks. One example of randomization is shown by the strategy of shaking a bin holding a part in order to orient the part in a desired stable state with some high probability. Randomization can be useful for mobile robot navigation and as a means of guiding the design process.
ERIC Educational Resources Information Center
De Boeck, Paul
2008-01-01
It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…
Quantum random number generation
Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing
2016-06-28
Here, quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness of devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness of the device and the random number generation speed.
Random bits, true and unbiased, from atmospheric turbulence.
Marangon, Davide G; Vallone, Giuseppe; Villoresi, Paolo
2014-06-30
Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and, more generally, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extraction algorithm can be easily generalized to random images generated by different physical processes.
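The kind of statistical scrutiny mentioned above can be illustrated with the simplest member of the standard battery, the monobit frequency test (a generic sketch, not the authors' test suite):

```python
import math
import random

def monobit_pvalue(bits):
    """NIST-style monobit frequency test: under the null hypothesis of
    unbiased independent bits, s = (#ones - #zeros)/sqrt(n) is
    approximately standard normal, giving two-sided p = erfc(|s|/sqrt(2))."""
    n = len(bits)
    s = (2 * sum(bits) - n) / math.sqrt(n)
    return math.erfc(abs(s) / math.sqrt(2))

rng = random.Random(123)
bits = [rng.randrange(2) for _ in range(10_000)]
p = monobit_pvalue(bits)
# A healthy generator gives p above the usual 0.01 rejection threshold
# most of the time; strongly biased bit streams drive p toward zero.
```

Real qualification requires a full battery (runs, entropy, serial correlation, etc.), of which this is only the first gate.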
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, instead of random, substrate scan. The randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
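The distinction between randomness in site selection and randomness in the rule can be illustrated with a toy one-dimensional deposition model (a sketch in the same spirit, not the paper's etching rule):

```python
import random

def sweep(height, rng):
    """One sequential sweep: sites are visited strictly in order, and the
    randomness lies entirely in the rule applied at each site (deposit or
    skip), not in which site is picked. This is a toy random-deposition
    sketch, not the etching model itself."""
    for i in range(len(height)):
        if rng.random() < 0.5:
            height[i] += 1   # rule A: deposit one particle here
        # rule B: leave this site unchanged
    return height

rng = random.Random(0)
h = [0] * 64                 # flat initial substrate of 64 sites
for _ in range(200):
    sweep(h, rng)
# After 200 sweeps each column height is Binomial(200, 1/2), mean 100.
```

Because the scan order is fixed, consecutive sweeps access memory linearly, which is the cache-friendliness the abstract refers to.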
Randomization methods in emergency setting trials: a descriptive review
Moe‐Byrne, Thirimon; Oddie, Sam; McGuire, William
2015-01-01
Background Quasi‐randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic indicators between treatment groups in trials using true randomization versus trials using quasi‐randomization. Results Seven reviews contained 16 trials that used true randomization and 11 that used quasi‐randomization. Baseline group imbalance was identified in four trials using true randomization (25%) and in two quasi‐randomized trials (18%). Of the four truly randomized trials with imbalance, three concealed treatment allocation adequately. Clinical heterogeneity and poor reporting limited the assessment of trial recruitment outcomes. Conclusions We did not find strong or consistent evidence that quasi‐randomization is associated with selection bias more often than true randomization. High risk of bias judgements for quasi‐randomized emergency studies should therefore not be assumed in systematic reviews. Clinical heterogeneity across trials within reviews, coupled with limited availability of relevant trial accrual data, meant it was not possible to adequately explore the possibility that true randomization might result in slower trial recruitment rates, or the recruitment of less representative populations. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd. PMID:26333419
EDITORIAL: Nano and random lasers
NASA Astrophysics Data System (ADS)
Wiersma, Diederik S.; Noginov, Mikhail A.
2010-02-01
The field of extreme miniature sources of stimulated emission represented by random lasers and nanolasers has gone through an enormous development in recent years. Random lasers are disordered optical structures in which light waves are both multiply scattered and amplified. Multiple scattering is a process that we all know very well from daily experience. Many familiar materials are actually disordered dielectrics and owe their optical appearance to multiple light scattering. Examples are white marble, white painted walls, paper, white flowers, etc. Light waves inside such materials perform random walks, that is they are scattered several times in random directions before they leave the material, and this gives it an opaque white appearance. This multiple scattering process does not destroy the coherence of the light. It just creates a very complex interference pattern (also known as speckle). Random lasers can be made of basically any disordered dielectric material by adding an optical gain mechanism to the structure. In practice this can be achieved with, for instance, laser dye that is dissolved in the material and optically excited by a pump laser. Alternative routes to incorporate gain are achieved using rare-earth or transition metal doped solid-state laser materials or direct band gap semiconductors. The latter can potentially be pumped electrically. After excitation, the material is capable of scattering light and amplifying it, and these two ingredients form the basis for a random laser. Random laser emission can be highly coherent, even in the absence of an optical cavity. The reason is that random structures can sustain optical modes that are spectrally narrow. This provides a spectral selection mechanism that, together with gain saturation, leads to coherent emission. A random laser can have a large number of (randomly distributed) modes that are usually strongly coupled. This means that many modes compete for the gain that is available in a random
NASA Astrophysics Data System (ADS)
Cappellini, Valerio; Sommers, Hans-Jürgen; Bruzda, Wojciech; Życzkowski, Karol
2009-09-01
Ensembles of random stochastic and bistochastic matrices are investigated. While all columns of a random stochastic matrix can be chosen independently, the rows and columns of a bistochastic matrix have to be correlated. We evaluate the probability measure induced into the Birkhoff polytope of bistochastic matrices by applying the Sinkhorn algorithm to a given ensemble of random stochastic matrices. For matrices of order N = 2 we derive explicit formulae for the probability distributions induced by random stochastic matrices with columns distributed according to the Dirichlet distribution. For arbitrary N we construct an initial ensemble of stochastic matrices which allows one to generate random bistochastic matrices according to a distribution locally flat at the center of the Birkhoff polytope. The value of the probability density at this point enables us to obtain an estimation of the volume of the Birkhoff polytope, consistent with recent asymptotic results.
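The Sinkhorn step used above, alternating row and column normalization of a positive matrix, can be sketched as follows (a plain Python illustration with an assumed 3x3 example):

```python
import random

def sinkhorn(matrix, iters=500):
    """Alternately normalize rows and columns of a positive matrix until
    it is (approximately) bistochastic -- the Sinkhorn algorithm."""
    m = [row[:] for row in matrix]
    n = len(m)
    for _ in range(iters):
        for i in range(n):                        # normalize each row
            s = sum(m[i])
            m[i] = [x / s for x in m[i]]
        for j in range(n):                        # normalize each column
            s = sum(m[i][j] for i in range(n))
            for i in range(n):
                m[i][j] /= s
    return m

# Start from a random stochastic-like matrix with independent entries.
rng = random.Random(0)
a = [[rng.random() for _ in range(3)] for _ in range(3)]
b = sinkhorn(a)
row_sums = [sum(row) for row in b]
col_sums = [sum(b[i][j] for i in range(3)) for j in range(3)]
# Both row_sums and col_sums converge to [1, 1, 1].
```

Applying this map to a whole ensemble of random stochastic matrices induces a measure on the Birkhoff polytope, as studied in the paper.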
Randomness: Quantum versus classical
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2016-05-01
Recent tremendous development of quantum information theory has led to a number of quantum technological projects, e.g. quantum random generators. This development has stimulated a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is the elaboration of a consistent and commonly accepted interpretation of a quantum state. A closely related problem is the clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review, we shall discuss the basics of the classical theory of randomness (which is by itself very complex and characterized by a diversity of approaches) and compare it with irreducible quantum randomness. We also discuss briefly “digital philosophy”, its role in physics (classical and quantum) and its coupling to the information interpretation of quantum mechanics (QM).
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
Autonomous Byte Stream Randomizer
NASA Technical Reports Server (NTRS)
Paloulian, George K.; Woo, Simon S.; Chow, Edward T.
2013-01-01
Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity unreadable in its own right, but when combined with all N pieces, is able to be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability possessing the ability to generate the same cryptographically secure sequence on different machines and time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
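The seeded, invertible Fisher-Yates mechanism described above can be sketched as follows; Python's Mersenne Twister stands in here for the cryptographically secure generator the software would actually use:

```python
import random

def seeded_shuffle(data, seed):
    """In-place Fisher-Yates shuffle driven by a seeded generator. Any
    party holding the seed can regenerate the identical swap sequence and
    hence invert the permutation. Returns the swaps for illustration; a
    receiver would rederive them from the shared seed."""
    rng = random.Random(seed)
    swaps = []
    for i in range(len(data) - 1, 0, -1):
        j = rng.randrange(i + 1)          # unbiased: j uniform in [0, i]
        data[i], data[j] = data[j], data[i]
        swaps.append((i, j))
    return swaps

def unshuffle(data, swaps):
    """Undo the shuffle by replaying the recorded swaps in reverse order."""
    for i, j in reversed(swaps):
        data[i], data[j] = data[j], data[i]

payload = list(b"EXAMPLE BYTE STREAM")
original = payload[:]
swaps = seeded_shuffle(payload, seed=12345)
unshuffle(payload, swaps)
assert payload == original                # round trip restores the bytes
```

Note that byte shuffling alone is a transposition, not encryption; the software positions it as an extra layer on top of encryption, not a replacement.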
Cortes, Jorge E.; Baccarani, Michele; Guilhot, François; Druker, Brian J.; Branford, Susan; Kim, Dong-Wook; Pane, Fabrizio; Pasquini, Ricardo; Goldberg, Stuart L.; Kalaycio, Matt; Moiraghi, Beatriz; Rowe, Jacob M.; Tothova, Elena; De Souza, Carmino; Rudoltz, Marc; Yu, Richard; Krahnke, Tillmann; Kantarjian, Hagop M.; Radich, Jerald P.; Hughes, Timothy P.
2010-01-01
Purpose To evaluate the safety and efficacy of initial treatment with imatinib mesylate 800 mg/d (400 mg twice daily) versus 400 mg/d in patients with newly diagnosed chronic myeloid leukemia in chronic phase. Patients and Methods A total of 476 patients were randomly assigned 2:1 to imatinib 800 mg (n = 319) or 400 mg (n = 157) daily. The primary end point was the major molecular response (MMR) rate at 12 months. Results At 12 months, differences in MMR and complete cytogenetic response (CCyR) rates were not statistically significant (MMR, 46% v 40%; P = .2035; CCyR, 70% v 66%; P = .3470). However, MMR occurred faster among patients randomly assigned to imatinib 800 mg/d, who had higher rates of MMR at 3 and 6 months compared with those in the imatinib 400-mg/d arm (P = .0035 by log-rank test). CCyR also occurred faster in the 800-mg/d arm (CCyR at 6 months, 57% v 45%; P = .0146). The most common adverse events were edema, gastrointestinal problems, and rash, and all were more common in patients in the 800-mg/d arm. Grades 3 to 4 hematologic toxicity also occurred more frequently in patients receiving imatinib 800 mg/d. Conclusion MMR rates at 1 year were similar with imatinib 800 mg/d and 400 mg/d, but MMR and CCyR occurred earlier in patients treated with 800 mg/d. Continued follow-up is needed to determine the clinical significance of earlier responses on high-dose imatinib. PMID:20008622
NASA Astrophysics Data System (ADS)
Chatterjee, Krishnendu; Doyen, Laurent; Gimbert, Hugo; Henzinger, Thomas A.
We consider two-player zero-sum games on graphs. These games can be classified on the basis of the information of the players and on the mode of interaction between them. On the basis of information the classification is as follows: (a) partial-observation (both players have partial view of the game); (b) one-sided complete-observation (one player has complete observation); and (c) complete-observation (both players have complete view of the game). On the basis of mode of interaction we have the following classification: (a) concurrent (players interact simultaneously); and (b) turn-based (players interact in turn). The two sources of randomness in these games are randomness in the transition function and randomness in strategies. In general, randomized strategies are more powerful than deterministic strategies, and randomness in transitions gives more general classes of games. We present a complete characterization of the classes of games where randomness is not helpful in: (a) the transition function (probabilistic transitions can be simulated by deterministic transitions); and (b) strategies (pure strategies are as powerful as randomized strategies). As a consequence of our characterization we obtain new undecidability results for these games.
Correlated randomness and switching phenomena
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.
2010-08-01
One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with placing aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understand switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon, by discussing data on selected examples, including heart disease and Alzheimer disease.
Selection of Transformed Plants
NASA Astrophysics Data System (ADS)
Jones, Huw D.; Sparks, Caroline A.
The low frequency and randomness of transgene integration into host cells, combined with the significant challenges of recovering whole plants from those rare events, make the use of selectable marker genes routine in plant transformation experiments. For research plants that are unlikely to be grown in the field, strong herbicide or antibiotic resistance is commonly used. Here we use genes conferring resistance to glufosinate herbicides as an example of a selectable marker in wheat transformation by either Agrobacterium or biolistics.
Kronberg, James W.
1993-01-01
An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
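The selection logic, a fast cycling counter stopped after an unpredictable interval, yields a 1/N chance of the counter resting on zero. A software model of that behavior (an illustration of the principle, not the patented circuit):

```python
import random

def counter_select(n, rng):
    """Model of the apparatus: a counter cycles 0..n-1 very rapidly and is
    stopped after an unpredictable number of ticks; the item is 'selected'
    exactly when the counter happens to rest on zero, i.e. with
    probability 1/n for a well-mixed stop time."""
    duration = rng.randrange(10_000, 20_000)  # unpredictable stop time, in ticks
    return duration % n == 0                  # counter value at the stop

rng = random.Random(0)
trials = 20_000
hits = sum(counter_select(5, rng) for _ in range(trials))
rate = hits / trials
# For n = 5 the empirical selection rate converges to 1/5.
```

In the hardware, the unpredictability of the stop time comes from temperature-sensitive components and human timing rather than a software generator.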
The Generation of Random Equilateral Polygons
NASA Astrophysics Data System (ADS)
Alvarado, Sotero; Calvo, Jorge Alberto; Millett, Kenneth C.
2011-04-01
Freely jointed random equilateral polygons serve as a common model for polymer rings, reflecting their statistical properties under theta conditions. To generate equilateral polygons, researchers employ many procedures that have been proved, or at least are believed, to be random with respect to the natural measure on the space of polygonal knots. As a result, the random selection of equilateral polygons, as well as the statistical robustness of this selection, is of particular interest. In this research, we study the key features of four popular methods: the Polygonal Folding, the Crankshaft Rotation, the Hedgehog, and the Triangle Methods. In particular, we compare the implementation and efficacy of these procedures, especially with regard to the population distribution of polygons in the space of polygonal knots, the distribution of edge vectors, the local curvature, and the local torsion. In addition, we give a rigorous proof that the Crankshaft Rotation Method is ergodic.
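The Crankshaft Rotation Method can be sketched directly from its description: pick two vertices and rotate the intervening chain by a random angle about the axis through them, which preserves all edge lengths and ring closure. A minimal Python version under that description:

```python
import math
import random

def crankshaft_move(poly, rng):
    """One Crankshaft Rotation step on a closed polygon given as a list of
    3D vertices: choose two distinct vertices i < j and rotate the chain
    strictly between them about the axis joining vertex i to vertex j."""
    n = len(poly)
    i, j = sorted(rng.sample(range(n), 2))
    ax, ay, az = (poly[j][k] - poly[i][k] for k in range(3))
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    if norm < 1e-12:
        return                                 # degenerate axis; skip move
    ux, uy, uz = ax / norm, ay / norm, az / norm
    theta = rng.uniform(0, 2 * math.pi)
    c, s = math.cos(theta), math.sin(theta)
    for k in range(i + 1, j):
        px, py, pz = (poly[k][m] - poly[i][m] for m in range(3))
        dot = px * ux + py * uy + pz * uz
        # Rodrigues' rotation formula about the unit axis (ux, uy, uz)
        rx = px * c + (uy * pz - uz * py) * s + ux * dot * (1 - c)
        ry = py * c + (uz * px - ux * pz) * s + uy * dot * (1 - c)
        rz = pz * c + (ux * py - uy * px) * s + uz * dot * (1 - c)
        poly[k] = (poly[i][0] + rx, poly[i][1] + ry, poly[i][2] + rz)

# Start from a planar regular octagon with unit edges (a valid equilateral 8-gon).
n = 8
R = 0.5 / math.sin(math.pi / n)          # circumradius giving unit edge length
poly = [(R * math.cos(2 * math.pi * k / n),
         R * math.sin(2 * math.pi * k / n), 0.0) for k in range(n)]
rng = random.Random(4)
for _ in range(100):
    crankshaft_move(poly, rng)
# All eight edges still have length 1 (up to floating-point round-off).
```

Because the rotation is an isometry fixing both chosen vertices, every edge length is conserved exactly, which is why the move stays on the space of equilateral polygons.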
Parametric models for samples of random functions
Grigoriu, M.
2015-09-15
A new class of parametric models, referred to as sample parametric models, is developed for random elements that match samples rather than the first two moments and/or other global properties of these elements. The models can be used to characterize, e.g., material properties at small scale, in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to solve stochastic equations efficiently.
Accounting for correlation and compliance in cluster randomized trials.
Loeys, T; Vansteelandt, S; Goetghebeur, E
2001-12-30
This paper discusses causal inference with survival data from cluster randomized trials. It is argued that cluster randomization carries the potential for post-randomization exposures which involve differentially selective compliance between treatment arms, even for an all-or-nothing exposure at the individual level. Structural models can be employed to account for post-randomization exposures, but should not ignore clustering. We show how marginal modelling and random effects models allow structural estimators to be adapted to account for clustering. Our findings are illustrated with data from a vitamin A trial for the prevention of infant mortality in the rural plains of Nepal. PMID:11782031
NASA Astrophysics Data System (ADS)
Newman, M. E. J.; Martin, Travis
2014-11-01
Random graph models have played a dominant role in the theoretical study of networked systems. The Poisson random graph of Erdős and Rényi, in particular, as well as the so-called configuration model, have served as the starting point for numerous calculations. In this paper we describe another large class of random graph models, which we call equitable random graphs and which are flexible enough to represent networks with diverse degree distributions and many nontrivial types of structure, including community structure, bipartite structure, degree correlations, stratification, and others, yet are exactly solvable for a wide range of properties in the limit of large graph size, including percolation properties, complete spectral density, and the behavior of homogeneous dynamical systems, such as coupled oscillators or epidemic models.
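The Poisson random graph of Erdős and Rényi mentioned above can be generated in a few lines (a basic G(n, p) sketch, with p set from a target mean degree):

```python
import random

def poisson_random_graph(n, mean_degree, seed=0):
    """Erdős-Rényi G(n, p): every unordered pair of the n nodes is joined
    independently with probability p = mean_degree / (n - 1), which makes
    the degree distribution Poisson in the large-n limit."""
    p = mean_degree / (n - 1)
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = poisson_random_graph(1000, 4.0)
avg_degree = 2 * len(edges) / 1000
# avg_degree concentrates around the target mean degree of 4.
```

The configuration model and the equitable random graphs of the paper generalize this by constraining degrees or block structure rather than using a single global p.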
Matowo, J; Kitau, J; Kaaya, R; Kavishe, R; Wright, A; Kisinza, W; Kleinschmidt, I; Mosha, F; Rowland, M; Protopopoff, N
2015-03-01
Anopheles gambiae s.l. (Diptera: Culicidae) in Muleba, Tanzania has developed high levels of resistance to most insecticides currently advocated for malaria control. The kdr mutation has almost reached fixation in An. gambiae s.s. in Muleba. This change has the potential to jeopardize malaria control interventions carried out in the region. Trends in insecticide resistance were monitored in two intervention villages using World Health Organization (WHO) susceptibility test kits. Additional mechanisms contributing to observed phenotypic resistance were investigated using Centers for Disease Control (CDC) bottle bioassays with piperonylbutoxide (PBO) and S,S,S-tributyl phosphorotrithioate (DEF) synergists. Resistance genotyping for kdr and Ace-1 alleles was conducted using quantitative polymerase chain reaction (qPCR). In both study villages, high phenotypic resistance to several pyrethroids and DDT was observed, with mortality in the range of 12-23%. There was a sharp decrease in mortality in An. gambiae s.l. exposed to bendiocarb (carbamate) from 84% in November 2011 to 31% in December 2012 after two rounds of bendiocarb-based indoor residual spraying (IRS). Anopheles gambiae s.l. remained susceptible to pirimiphos-methyl (organophosphate). Bendiocarb-based IRS did not lead to the reversion of pyrethroid resistance. There was no evidence for selection for Ace-1 resistance alleles. The need to investigate the operational impact of the observed resistance selection on the effectiveness of longlasting insecticidal nets and IRS for malaria control is urgent. PMID:25537754
Bekker, Pirow; Dairaghi, Daniel; Seitz, Lisa; Leleti, Manmohan; Wang, Yu; Ertl, Linda; Baumgart, Trageen; Shugarts, Sarah; Lohr, Lisa; Dang, Ton; Miao, Shichang; Zeng, Yibin; Fan, Pingchen; Zhang, Penglie; Johnson, Daniel; Powers, Jay; Jaen, Juan; Charo, Israel; Schall, Thomas J.
2016-01-01
The complement 5a receptor has been an attractive therapeutic target for many autoimmune and inflammatory disorders. However, development of a selective and potent C5aR antagonist has been challenging. Here we describe the characterization of CCX168 (avacopan), an orally administered selective and potent C5aR inhibitor. CCX168 blocked the C5a binding, C5a-mediated migration, calcium mobilization, and CD11b upregulation in U937 cells as well as in freshly isolated human neutrophils. CCX168 retains high potency when present in human blood. A transgenic human C5aR knock-in mouse model allowed comparison of the in vitro and in vivo efficacy of the molecule. CCX168 effectively blocked migration in in vitro and ex vivo chemotaxis assays, and it blocked the C5a-mediated neutrophil vascular endothelial margination. CCX168 was effective in migration and neutrophil margination assays in cynomolgus monkeys. This thorough in vitro and preclinical characterization enabled progression of CCX168 into the clinic and testing of its safety, tolerability, pharmacokinetic, and pharmacodynamic profiles in a Phase 1 clinical trial in 48 healthy volunteers. CCX168 was shown to be well tolerated across a broad dose range (1 to 100 mg) and it showed dose-dependent pharmacokinetics. An oral dose of 30 mg CCX168 given twice daily blocked the C5a-induced upregulation of CD11b in circulating neutrophils by 94% or greater throughout the entire day, demonstrating essentially complete target coverage. This dose regimen is being tested in clinical trials in patients with anti-neutrophil cytoplasmic antibody-associated vasculitis. Trial Registration ISRCTN registry with trial ID ISRCTN13564773. PMID:27768695
Random subspaces in quantum information theory
NASA Astrophysics Data System (ADS)
Hayden, Patrick
2005-03-01
The selection of random unitary transformations plays a role in quantum information theory analogous to the role of random hash functions in classical information theory. Recent applications have included protocols achieving the quantum channel capacity and methods for extending superdense coding from bits to qubits. In addition, the corresponding random subspaces have proved useful for studying the structure of bipartite and multipartite entanglement. In quantum information theory, we're fond of saying that Hilbert space is a big place, the implication being that there's room for the unexpected to occur. The goal of this talk is to further bolster this homespun wisdom. I'm going to present a number of results in quantum information theory that stem from the initially counterintuitive geometry of high-dimensional vector spaces, where subspaces with highly extremal properties are the norm rather than the exception. Peter Shor has shown, for example, that randomly selected subspaces can be used to send quantum information through a noisy quantum channel at the highest possible rate, that is, the quantum channel capacity. More recently, Debbie Leung, Andreas Winter and I demonstrated that a randomly chosen subspace of a bipartite quantum system will likely contain nothing but nearly maximally entangled states, even if the subspace is nearly as large as the original system in qubit terms. This observation has implications for communication, especially superdense coding.
Hegedüs, Laszlo; Pacini, Furio; Pinchera, Aldo; Leung, Angela M.; Vaisman, Mario; Reiners, Christoph; Wemeau, Jean-Louis; Huysmans, Dyde A.; Harper, William; Rachinsky, Irina; de Souza, Hevelyn Noemberg; Castagna, Maria G.; Antonangeli, Lucia; Braverman, Lewis E.; Corbo, Rossana; Düren, Christian; Proust-Lemoine, Emmanuelle; Marriott, Christopher; Driedger, Albert; Grupe, Peter; Watt, Torquil; Magner, James; Purvis, Annie; Graf, Hans
2014-01-01
Background: Enhanced reduction of multinodular goiter (MNG) can be achieved by stimulation with recombinant human thyrotropin (rhTSH) before radioiodine (131I) therapy. The objective was to compare the long-term efficacy and safety of two low doses of modified release rhTSH (MRrhTSH) in combination with 131I therapy. Methods: In this phase II, single-blinded, placebo-controlled study, 95 patients (57.2±9.6 years old, 85% women, 83% Caucasians) with MNG (median size 96.0 mL; range 31.9–242.2 mL) were randomized to receive placebo (n=32), 0.01 mg MRrhTSH (n=30), or 0.03 mg MRrhTSH (n=33) 24 hours before a calculated 131I activity. Thyroid volume (TV) and smallest cross-sectional area of trachea (SCAT) were measured (by computed tomography scan) at baseline, six months, and 36 months. Thyroid function and quality of life (QoL) was evaluated at three-month and yearly intervals respectively. Results: At six months, TV reduction was enhanced in the 0.03 mg MRrhTSH group (32.9% vs. 23.1% in the placebo group; p=0.03) but not in the 0.01 mg MRrhTSH group. At 36 months, the mean percent TV reduction from baseline was 44±12.7% (SD) in the placebo group, 41±21.0% in the 0.01 mg MRrhTSH group, and 53±18.6% in the 0.03 mg MRrhTSH group, with no statistically significant differences among the groups, p=0.105. In the 0.03 mg MRrhTSH group, the subset of patients with basal 131I uptake <20% had a 24% greater TV reduction at 36 months than the corresponding subset of patients in the placebo group (p=0.01). At 36 months, the largest relative increase in SCAT was observed in the 0.03 mg MRrhTSH group (13.4±23.2%), but this was not statistically different from the increases observed in the placebo or the 0.01 mg MRrhTSH group (p=0.15). Goiter-related symptoms were reduced and QoL improved, without any enhanced benefit from using MRrhTSH. At three years, the prevalence of permanent hypothyroidism was 13%, 33%, and 45% in the placebo, 0.01 mg, and 0.03
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2012 CFR
2012-10-01
47 CFR § 1.1604 (2012): Post-selection hearings. Title 47, Telecommunication; Federal Communications Commission; General Practice and Procedure; Grants by Random Selection; Random Selection Procedures for Mass Media Services; General Procedures.
NASA Astrophysics Data System (ADS)
Bruzda, Wojciech; Cappellini, Valerio; Sommers, Hans-Jürgen; Życzkowski, Karol
2009-01-01
We define a natural ensemble of trace preserving, completely positive quantum maps and present algorithms to generate them at random. Spectral properties of the superoperator Φ associated with a given quantum map are investigated and a quantum analogue of the Frobenius-Perron theorem is proved. We derive a general formula for the density of eigenvalues of Φ and show the connection with the Ginibre ensemble of real non-symmetric random matrices. Numerical investigations of the spectral gap imply that a generic state of the system iterated several times by a fixed generic map converges exponentially to an invariant state.
NASA Astrophysics Data System (ADS)
Donnelly, Isaac
Random walks on lattices are a well-used model for diffusion on a continuum. They have been used to model subdiffusive systems, systems with forcing and reactions, as well as combinations of the three. We extend the traditional random walk framework to networks to obtain novel results. As an example, due to the small graph diameter, the early-time behaviour of subdiffusive dynamics dominates the observed system, which has implications for models of the brain or airline networks. I would like to thank the Australian American Fulbright Association.
How Well Do Random Walks Parallelize?
NASA Astrophysics Data System (ADS)
Efremenko, Klim; Reingold, Omer
A random walk on a graph is a process that explores the graph in a random way: at each step the walk is at a vertex of the graph, and at each step it moves to a uniformly selected neighbor of this vertex. Random walks are extremely useful in computer science and in other fields. A very natural problem that was recently raised by Alon, Avin, Koucky, Kozma, Lotker, and Tuttle (though it was implicit in several previous papers) is to analyze the behavior of k independent walks in comparison with the behavior of a single walk. In particular, Alon et al. showed that in various settings (e.g., for expander graphs), k random walks cover the graph (i.e., visit all its nodes), Ω(k)-times faster (in expectation) than a single walk. In other words, in such cases k random walks efficiently “parallelize” a single random walk. Alon et al. also demonstrated that, depending on the specific setting, this “speedup” can vary from logarithmic to exponential in k.
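The speedup described above is easy to probe numerically. The sketch below is an illustration, not the construction of Alon et al.; the graph (a complete graph), the number of walks, and the trial counts are arbitrary choices. It compares the expected cover time of one walk against four independent walks moving in parallel:

```python
import random

def cover_time(adj, k, rng):
    """Number of rounds until k independent walks, all started at node 0,
    have jointly visited every node of the graph (adjacency-list form)."""
    n = len(adj)
    pos = [0] * k
    seen = {0}
    rounds = 0
    while len(seen) < n:
        rounds += 1
        for i in range(k):
            pos[i] = rng.choice(adj[pos[i]])  # move to a uniform neighbor
            seen.add(pos[i])
    return rounds

# Complete graph on 50 nodes as a simple, well-understood test case.
n = 50
adj = [[j for j in range(n) if j != i] for i in range(n)]
rng = random.Random(1)
trials = 100
t1 = sum(cover_time(adj, 1, rng) for _ in range(trials)) / trials
t4 = sum(cover_time(adj, 4, rng) for _ in range(trials)) / trials
print(t1 / t4)  # close to k = 4: four walks cover the graph about 4x faster
```

On an expander-like graph such as this one, the measured ratio comes out near k, consistent with the linear speedup regime the abstract describes.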
Clinical Research Methodology 3: Randomized Controlled Trials.
Sessler, Daniel I; Imrey, Peter B
2015-10-01
Randomized assignment of treatment excludes reverse causation and selection bias and, in sufficiently large studies, effectively prevents confounding. Well-implemented blinding prevents measurement bias. Studies that include these protections are called randomized, blinded clinical trials and, when conducted with sufficient numbers of patients, provide the most valid results. Although conceptually straightforward, design of clinical trials requires thoughtful trade-offs among competing approaches-all of which influence the number of patients required, enrollment time, internal and external validity, ability to evaluate interactions among treatments, and cost.
Searching for nodes in random graphs.
Lancaster, David
2011-11-01
We consider the problem of searching for a node on a labeled random graph according to a greedy algorithm that selects a route to the desired node using metric information on the graph. Motivated by peer-to-peer networks two types of random graph are proposed with properties particularly amenable to this kind of algorithm. We derive equations for the probability that the search is successful and also study the number of hops required, finding both numerical and analytic evidence of a transition as the number of links is varied.
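As an illustration of greedy metric routing of the kind the abstract describes, the sketch below uses a labeled ring with uniformly random extra links; this graph model is a simplification chosen for clarity, not the paper's ensemble. The ring edges guarantee the greedy step always strictly decreases the label distance, so the search here always succeeds:

```python
import random

def greedy_route(n, n_shortcuts, src, dst, rng):
    """Greedy search on a labeled ring with random shortcut links:
    always hop to the known neighbor closest to the target label."""
    dist = lambda a, b: min((a - b) % n, (b - a) % n)  # circular label metric
    links = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}  # ring edges
    for _ in range(n_shortcuts):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            links[u].add(v)
            links[v].add(u)
    cur, hops = src, 0
    while cur != dst:
        cur = min(links[cur], key=lambda x: dist(x, dst))  # greedy step
        hops += 1
    return hops

rng = random.Random(0)
hops = greedy_route(200, 300, src=0, dst=100, rng=rng)
print(hops)  # never more than the ring distance (100), usually far fewer
```

Counting the hops over many random graphs, as done here for one instance, is exactly the kind of statistic for which the paper derives equations.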
Evolutionary dynamics on random structures
Fraser, S.M.; Reidys, C.M.
1997-04-01
In this paper the authors consider the evolutionary dynamics of populations of sequences, under a process of selection at the phenotypic level of structures. They use a simple graph-theoretic representation of structures which captures well the properties of the mapping between RNA sequences and their molecular structure. Each sequence is assigned to a structure by means of a sequence-to-structure mapping. The authors make the basic assumption that every fitness landscape can be factorized through the structures. The set of all sequences that map into a particular random structure can then be modeled as a random graph in sequence space, the so-called neutral network. They analyze in detail how an evolving population searches for new structures, in particular how they switch from one neutral network to another. They verify that transitions occur directly between neutral networks, and study the effects of different population sizes and the influence of the relatedness of the structures on these transitions. In fitness landscapes where several structures exhibit high fitness, the authors then study evolutionary paths on the structural level taken by the population during its search. They present a new way of expressing structural similarities which are shown to have relevant implications for the time evolution of the population.
Selective Influence through Conditional Independence.
ERIC Educational Resources Information Center
Dzhafarov, Ehtibar N.
2003-01-01
Presents a generalization and improvement for the definition proposed by E. Dzhafarov (2001) for selectiveness in the dependence of several random variables on several (sets of) external factors. This generalization links the notion of selective influence with that of conditional independence. (SLD)
Taming random lasers through active spatial control of the pump.
Bachelard, N; Andreasen, J; Gigan, S; Sebbah, P
2012-07-20
Active control of the spatial pump profile is proposed to exercise control over random laser emission. We demonstrate numerically the selection of any desired lasing mode from the emission spectrum. An iterative optimization method is employed, first in the regime of strong scattering where modes are spatially localized and can be easily selected using local pumping. Remarkably, this method works efficiently even in the weakly scattering regime, where strong spatial overlap of the modes precludes spatial selectivity. A complex optimized pump profile is found, which selects the desired lasing mode at the expense of others, thus demonstrating the potential of pump shaping for robust and controllable single mode operation of a random laser.
Randomness Of Amoeba Movements
NASA Astrophysics Data System (ADS)
Hashiguchi, S.; Khadijah, Siti; Kuwajima, T.; Ohki, M.; Tacano, M.; Sikula, J.
2005-11-01
Movements of amoebas were automatically traced using the difference between two successive frames of the microscopic movie. It was observed that the movements were almost random, in that the directions and the magnitudes of two successive steps were not correlated, and that the distance from the origin was proportional to the square root of the step number.
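The square-root law quoted above is the classic signature of an uncorrelated random walk and can be checked with a minimal simulation (unit steps in uniformly random directions, an idealization of the traced amoeba motion, not the measured data):

```python
import math
import random

def rms_distance(n_steps, n_walkers, rng):
    """Root-mean-square distance from the origin after n_steps unit steps
    taken in uniformly random directions, averaged over n_walkers walkers."""
    total = 0.0
    for _ in range(n_walkers):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += x * x + y * y
    return math.sqrt(total / n_walkers)

rng = random.Random(42)
r100 = rms_distance(100, 1500, rng)
r400 = rms_distance(400, 1500, rng)
print(r100, r400)  # near 10 and 20: RMS distance ~ sqrt(step number)
```

Quadrupling the step count doubles the RMS distance, which is the proportionality the abstract reports.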
ERIC Educational Resources Information Center
Griffiths, Martin
2011-01-01
One of the author's undergraduate students recently asked him whether it was possible to generate a random positive integer. After some thought, the author realised that there were plenty of interesting mathematical ideas inherent in her question. So much so in fact, that the author decided to organise a workshop, open both to undergraduates and…
Contouring randomly spaced data
NASA Technical Reports Server (NTRS)
Kibler, J. F.; Morris, W. D.; Hamm, R. W.
1977-01-01
A computer program using a triangulation contouring technique contours data points too numerous to fit into a rectangular grid. Using random-access procedures, the program can handle up to 56,000 data points and provides up to 20 contour intervals for multiple parameters.
Uniform random number generators
NASA Technical Reports Server (NTRS)
Farr, W. R.
1971-01-01
Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.
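The two ingredients named above can be sketched briefly: a mixed multiplicative (linear congruential) generator for uniforms, and the Marsaglia-Bray polar method for normal deviates. The LCG constants below are illustrative choices, not those of the cited Univac/SDS/CDC Fortran routines:

```python
import math

def lcg(seed, a=69069, c=1, m=2**32):
    """Mixed multiplicative (linear congruential) generator:
    x_{n+1} = (a*x_n + c) mod m, scaled to [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def polar_normals(uniform):
    """Marsaglia-Bray polar method: turn pairs of uniforms into pairs of
    independent standard normal deviates via rejection sampling."""
    while True:
        u = 2.0 * next(uniform) - 1.0
        v = 2.0 * next(uniform) - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:  # accept only points inside the unit disk
            f = math.sqrt(-2.0 * math.log(s) / s)
            yield u * f
            yield v * f

gen = polar_normals(lcg(seed=12345))
sample = [next(gen) for _ in range(10000)]
mean = sum(sample) / len(sample)
var = sum((z - mean) ** 2 for z in sample) / len(sample)
print(round(mean, 2), round(var, 2))  # near 0 and 1, as expected
```

The sample mean and variance come out close to the standard normal's 0 and 1, a quick sanity check on both stages.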
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
Relativistic Weierstrass random walks.
Saa, Alberto; Venegeroles, Roberto
2010-08-01
The Weierstrass random walk is a paradigmatic Markov chain giving rise to a Lévy-type superdiffusive behavior. It is well known that special relativity prevents the arbitrarily high velocities necessary to establish a superdiffusive behavior in any process occurring in Minkowski spacetime, implying, in particular, that any relativistic Markov chain describing spacetime phenomena must be essentially Gaussian. Here, we introduce a simple relativistic extension of the Weierstrass random walk and show that there must exist a transition time t_c delimiting two qualitatively distinct dynamical regimes: the (nonrelativistic) superdiffusive Lévy flights, for t
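The heavy-tailed step distribution underlying the (nonrelativistic) Weierstrass walk can be sampled directly: a step has length b**j with probability proportional to a**-j and a symmetric random sign. The parameter values below are illustrative, not those of the paper:

```python
import random

def weierstrass_step(rng, a=2.0, b=3.0, jmax=40):
    """Draw one Weierstrass-walk step: scale index j is geometric
    (P(j) proportional to a**-j), step length is b**j, sign symmetric."""
    j = 0
    while j < jmax and rng.random() < 1.0 / a:  # geometric scale index
        j += 1
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * b ** j

rng = random.Random(3)
steps = [weierstrass_step(rng) for _ in range(5000)]
typical = sorted(abs(s) for s in steps)[len(steps) // 2]  # median length
longest = max(abs(s) for s in steps)
print(typical, longest)  # most steps are short, but rare huge jumps occur
```

The enormous gap between the median and the maximum step is the Lévy-flight signature; it is precisely these arbitrarily long jumps that a relativistic velocity bound must truncate.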
Interactions in random copolymers
NASA Astrophysics Data System (ADS)
Marinov, Toma; Luettmer-Strathmann, Jutta
2002-04-01
The description of thermodynamic properties of copolymers in terms of simple lattice models requires a value for the effective interaction strength between chain segments, in addition to parameters that can be derived from the properties of the corresponding homopolymers. If the monomers are chemically similar, Berthelot's geometric-mean combining rule provides a good first approximation for interactions between unlike segments. In earlier work on blends of polyolefins [1], we found that the small-scale architecture of the chains leads to corrections to the geometric-mean approximation that are important for the prediction of phase diagrams. In this work, we focus on the additional effects due to sequencing of the monomeric units. In order to estimate the effective interaction for random copolymers, the small-scale simulation approach developed in [1] is extended to allow for random sequencing of the monomeric units. The approach is applied here to random copolymers of ethylene and 1-butene. [1] J. Luettmer-Strathmann and J.E.G. Lipson. Phys. Rev. E 59, 2039 (1999) and Macromolecules 32, 1093 (1999).
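Berthelot's geometric-mean combining rule mentioned above is a one-line estimate. In the sketch below the homopolymer segment interaction strengths are illustrative placeholders, not fitted values for ethylene/1-butene:

```python
import math

# Illustrative (not fitted) like-pair segment interaction strengths:
eps_AA = 1.00   # A-A (e.g. ethylene-ethylene), arbitrary units
eps_BB = 0.64   # B-B (e.g. 1-butene-1-butene), arbitrary units

# Berthelot geometric-mean estimate for the unlike-pair interaction:
eps_AB = math.sqrt(eps_AA * eps_BB)
print(eps_AB)  # 0.8
```

The corrections discussed in the abstract enter as deviations from this geometric-mean value, driven by chain architecture and, for copolymers, by monomer sequencing.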
A random number generator for continuous random variables
NASA Technical Reports Server (NTRS)
Guerra, V. M.; Tapia, R. A.; Thompson, J. R.
1972-01-01
A FORTRAN 4 routine is given which may be used to generate random observations of a continuous real valued random variable. Normal distribution of F(x), X, E(akimas), and E(linear) is presented in tabular form.
Random tag insertions by Transposon Integration mediated Mutagenesis (TIM).
Hoeller, Brigitte M; Reiter, Birgit; Abad, Sandra; Graze, Ina; Glieder, Anton
2008-10-01
Transposon Integration mediated Mutagenesis (TIM) is a broadly applicable tool for protein engineering. This method combines random integration of modified bacteriophage Mu transposons with their subsequent defined excision employing type IIS restriction endonuclease AarI. TIM enables deletion or insertion of an arbitrary number of bases at random positions, insertion of functional sequence tags at random positions, replacing randomly selected triplets by a specific codon (e.g. scanning) and site-saturation mutagenesis. As a proof of concept a transposon named GeneOpenerAarIKan was designed and employed to introduce 6xHis tags randomly into the esterase EstC from Burkholderia gladioli. A TIM library was screened with colony based assays for clones with an integrated 6xHis tag and for clones exhibiting esterase activity. The employed strategy enables the isolation of randomly tagged active enzymes in single mutagenesis experiments.
Randomization Methods in Emergency Setting Trials: A Descriptive Review
ERIC Educational Resources Information Center
Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William
2016-01-01
Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…
Cluster Randomized Controlled Trial
Young, John; Chapman, Katie; Nixon, Jane; Patel, Anita; Holloway, Ivana; Mellish, Kirste; Anwar, Shamaila; Breen, Rachel; Knapp, Martin; Murray, Jenni; Farrin, Amanda
2015-01-01
Background and Purpose— We developed a new postdischarge system of care comprising a structured assessment covering longer-term problems experienced by patients with stroke and their carers, linked to evidence-based treatment algorithms and reference guides (the longer-term stroke care system of care) to address the poor longer-term recovery experienced by many patients with stroke. Methods— A pragmatic, multicentre, cluster randomized controlled trial of this system of care. Eligible patients referred to community-based Stroke Care Coordinators were randomized to receive the new system of care or usual practice. The primary outcome was improved patient psychological well-being (General Health Questionnaire-12) at 6 months; secondary outcomes included functional outcomes for patients, carer outcomes, and cost-effectiveness. Follow-up was through self-completed postal questionnaires at 6 and 12 months. Results— Thirty-two stroke services were randomized (29 participated); 800 patients (399 control; 401 intervention) and 208 carers (100 control; 108 intervention) were recruited. In intention to treat analysis, the adjusted difference in patient General Health Questionnaire-12 mean scores at 6 months was −0.6 points (95% confidence interval, −1.8 to 0.7; P=0.394) indicating no evidence of statistically significant difference between the groups. Costs of Stroke Care Coordinator inputs, total health and social care costs, and quality-adjusted life year gains at 6 months, 12 months, and over the year were similar between the groups. Conclusions— This robust trial demonstrated no benefit in clinical or cost-effectiveness outcomes associated with the new system of care compared with usual Stroke Care Coordinator practice. Clinical Trial Registration— URL: http://www.controlled-trials.com. Unique identifier: ISRCTN 67932305. PMID:26152298
Composite Random Fiber Networks
NASA Astrophysics Data System (ADS)
Picu, Catalin; Shahsavari, Ali
2013-03-01
Systems made from fibers are common in the biological and engineering worlds. In many instances, as for example in skin, where elastin and collagen fibers are present, the fiber network is composite, in the sense that it contains fibers of very different properties. The relationship between microstructural parameters and the elastic moduli of random fiber networks containing a single type of fiber is understood. In this work we address a similar target for the composite networks. We show that linear superposition of the contributions to stiffness of individual sub-networks does not apply and interesting non-linear effects are observed. A physical basis of these effects is proposed.
Random numbers from vacuum fluctuations
NASA Astrophysics Data System (ADS)
Shi, Yicheng; Chng, Brenda; Kurtsiefer, Christian
2016-07-01
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
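A linear feedback shift register can serve as a simple randomness extractor by folding raw samples into its feedback. The sketch below is generic, not the paper's actual extractor (whose register length and taps are not given here): a 16-bit Fibonacci LFSR with standard maximal-length taps, fed a deliberately biased bit source:

```python
import random

def lfsr_whiten(raw_bits, taps=(16, 14, 13, 11), nbits=16, state=0xACE1):
    """Fold raw bits into the feedback of a Fibonacci LFSR (taps chosen
    for a maximal-length 16-bit register) and emit the output bit."""
    out = []
    for r in raw_bits:
        fb = r  # raw sample bit enters through the feedback path
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        out.append(state & 1)  # output is the register's low bit
        state = (state >> 1) | (fb << (nbits - 1))
    return out

rng = random.Random(5)
raw = [1 if rng.random() < 0.8 else 0 for _ in range(20000)]  # biased source
white = lfsr_whiten(raw)
bias_in = abs(sum(raw) / len(raw) - 0.5)
bias_out = abs(sum(white) / len(white) - 0.5)
print(bias_in, bias_out)  # the whitened stream is far closer to balanced
```

Because each output bit is a GF(2) combination of many raw bits, the input bias of 0.3 is strongly suppressed; in the experiment the analogous register runs over the digitized homodyne samples at line rate.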
Random recursive trees and the elephant random walk
NASA Astrophysics Data System (ADS)
Kürsten, Rüdiger
2016-03-01
One class of random walks with infinite memory, so-called elephant random walks, are simple models describing anomalous diffusion. We present a surprising connection between these models and bond percolation on random recursive trees. We use a coupling between the two models to translate results from elephant random walks to the percolation process. We calculate, besides other quantities, exact expressions for the first and the second moment of the root cluster size and of the number of nodes in child clusters of the first generation. We further introduce another model, the skew elephant random walk, and calculate the first and second moment of this process.
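The anomalous diffusion of the elephant random walk is straightforward to reproduce numerically. The sketch below uses the step rule as usually defined for this model (repeat a uniformly chosen earlier step with probability p, reverse it otherwise; parameter values are illustrative) and compares a memoryless walker with a strongly persistent one:

```python
import random

def elephant_walk(n_steps, p, rng):
    """Elephant random walk: each new step recalls a uniformly chosen
    earlier step and repeats it with probability p, reverses it otherwise."""
    steps = [1 if rng.random() < 0.5 else -1]  # symmetric first step
    for _ in range(n_steps - 1):
        s = rng.choice(steps)  # infinite memory: any earlier step
        steps.append(s if rng.random() < p else -s)
    return sum(steps)

def mean_sq_disp(p, n_steps=500, trials=300, seed=7):
    rng = random.Random(seed)
    return sum(elephant_walk(n_steps, p, rng) ** 2 for _ in range(trials)) / trials

msd_diffusive = mean_sq_disp(0.5)   # p = 1/2: ordinary diffusion, MSD ~ n
msd_super = mean_sq_disp(0.9)       # p > 3/4: superdiffusive regime
print(msd_diffusive, msd_super)
```

At p = 1/2 each step is effectively an independent fair coin, giving MSD near the step count; at p = 0.9 the memory pushes the walk into the superdiffusive regime and the MSD is dramatically larger.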
NASA Astrophysics Data System (ADS)
Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C →U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
Random rough surface photofabrication
NASA Astrophysics Data System (ADS)
Brissonneau, Vincent; Escoubas, Ludovic; Flory, François; Berginc, Gérard
2011-10-01
Random rough surfaces are of primary interest for their optical properties: reducing reflection at the interface or obtaining specific scattering diagram for example. Thus controlling surface statistics during the fabrication process paves the way to original and specific behaviors of reflected optical waves. We detail an experimental method allowing the fabrication of random rough surfaces showing tuned statistical properties. A two-step photoresist exposure process was developed. In order to initiate photoresist polymerization, an energy threshold needs to be reached by light exposure. This energy is brought by a uniform exposure equipment comprising UV-LEDs. This pre-exposure is studied by varying parameters such as optical power and exposure time. The second step consists in an exposure based on the Gray method. The speckle pattern of an enlarged scattered laser beam is used to insolate the photoresist. A specific photofabrication bench using an argon ion laser was implemented. Parameters such as exposure time and distances between optical components are discussed. Then, we describe how we modify the speckle-based exposure bench to include a spatial light modulator (SLM). The SLM used is a micromirror matrix known as Digital Micromirror Device (DMD) which allows spatial modulation by displaying binary images. Thus, the spatial beam shape can be tuned and so the speckle pattern on the photoresist is modified. As the photoresist photofabricated surface is correlated to the speckle pattern used to insolate, the roughness parameters can be adjusted.
Random Numbers and Quantum Computers
ERIC Educational Resources Information Center
McCartney, Mark; Glass, David
2002-01-01
The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
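The two classical generators named in this abstract are easy to sketch. Here is a minimal Python illustration of a linear congruential generator (using the Numerical Recipes constants a = 1664525, c = 1013904223, m = 2^32) and of the logistic map x_{n+1} = r·x_n·(1 - x_n) at r = 4; the constants are common illustrative choices, not ones taken from the article.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m,
    normalized to [0, 1). Constants are the Numerical Recipes choice."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def logistic_map(x0, r=4.0):
    """Logistic map x_{n+1} = r * x_n * (1 - x_n); fully chaotic at r = 4."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

g = lcg(seed=42)
uniforms = [next(g) for _ in range(5)]  # same seed always yields the same stream
```

Both streams are completely determined by the seed, which is exactly why such numbers are only pseudo-random.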
Work analysis by random sampling.
Divilbiss, J L; Self, P C
1978-01-01
Random sampling of work activities using an electronic random alarm mechanism provided a simple and effective way to determine how time was divided between various activities. At each random alarm the subject simply recorded the time and the activity. Analysis of the data led to reassignment of staff functions and also resulted in additional support for certain critical activities. PMID:626793
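The method reduces to estimating time fractions from point observations at random instants. A small Python sketch, using a hypothetical schedule rather than any data from the study:

```python
import random

def sample_activities(schedule, n_alarms, day_length, seed=0):
    """Work sampling: observe which activity is underway at each of
    n_alarms randomly chosen instants, and estimate the fraction of
    time each activity occupies from the observation counts.

    schedule: list of (start, end, activity) intervals covering [0, day_length).
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_alarms):
        t = rng.random() * day_length  # a uniformly random alarm time
        for start, end, activity in schedule:
            if start <= t < end:
                counts[activity] = counts.get(activity, 0) + 1
                break
    return {a: c / n_alarms for a, c in counts.items()}

# a hypothetical 8-hour library workday (not data from the study)
day = [(0, 4, "desk"), (4, 7, "cataloguing"), (7, 8, "meetings")]
estimates = sample_activities(day, n_alarms=20000, day_length=8)
```

With uniformly random alarms, the fraction of alarms that land on an activity is an unbiased estimate of the fraction of time it occupies.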
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
Randomized Response Analysis in Mplus
ERIC Educational Resources Information Center
Hox, Joop; Lensvelt-Mulders, Gerty
2004-01-01
This article describes a technique to analyze randomized response data using available structural equation modeling (SEM) software. The randomized response technique was developed to obtain estimates that are more valid when studying sensitive topics. The basic feature of all randomized response methods is that the data are deliberately…
Weighted Hybrid Decision Tree Model for Random Forest Classifier
NASA Astrophysics Data System (ADS)
Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.
2016-06-01
Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper is related to attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is done and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are verified by performing empirical analysis. For the empirical analysis, a random forest is generated using each of the five selected split measures, chosen one at a time (i.e., a random forest using information gain, a random forest using gain ratio, etc.). Next, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the random forest are generated using different split measures. The model is augmented by weighted voting based on the strength of each individual tree. The new approach shows a notable increase in the accuracy of the random forest.
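The weighted-voting step can be sketched independently of any particular split measure. Below is a minimal Python version in which each tree's vote counts in proportion to a strength score (e.g., its validation accuracy); the numbers are invented for illustration and are not taken from the paper.

```python
def weighted_vote(predictions, weights):
    """Combine per-tree class predictions by weighted majority voting;
    each tree's vote counts in proportion to its strength score
    (e.g., its validation accuracy)."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

# two weak trees vote "A", one strong tree votes "B": "A" still wins (1.07 > 0.93)
winner = weighted_vote(["A", "A", "B"], [0.55, 0.52, 0.93])
```

With uniform weights this reduces to ordinary majority voting; the weighting only changes outcomes when strong and weak trees disagree.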
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
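The sampling scheme itself is simple to state. A minimal Python sketch of one localized measurement on a pixel grid, with a Gaussian fall-off chosen here purely for illustration (the paper's exact distance dependence may differ):

```python
import math, random

def localized_sample(width, height, sigma, rng):
    """One localized measurement set: choose a random center pixel, then
    include each other pixel with probability exp(-d^2 / (2 sigma^2)),
    where d is its distance from the center."""
    cx, cy = rng.randrange(width), rng.randrange(height)
    chosen = [(cx, cy)]
    for x in range(width):
        for y in range(height):
            if (x, y) == (cx, cy):
                continue
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if rng.random() < math.exp(-d2 / (2.0 * sigma ** 2)):
                chosen.append((x, y))
    return chosen

rng = random.Random(1)
measurement = localized_sample(32, 32, sigma=2.0, rng=rng)
```

Each call yields a cluster of pixels around a random center, mimicking a localized receptive field rather than a uniformly scattered sample.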
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
NASA Astrophysics Data System (ADS)
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-08-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling.
Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-01-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling.
Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-08-24
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
Mayr, E
1985-05-01
Much of the resistance against Darwin's theory of natural selection has been due to misunderstandings. It is shown that natural selection is not a tautology and that it is a two-step process. The first step, the production of variation, is under the control of chance; the second step, selection proper, is an anti-chance process, but subject to many constraints. The target of selection is the individual as a whole, and many neutral mutations can be retained as hitchhikers of successful genotypes. Sexual selection results from selection for pure reproductive success.
Jain, Sudhir R; Srivastava, Shashi C L
2008-09-01
We present a Gaussian ensemble of random cyclic matrices on the real field and study their spectral fluctuations. These cyclic matrices are shown to be pseudosymmetric with respect to generalized parity. We calculate the joint probability distribution function of eigenvalues and the spacing distributions analytically and numerically. For small spacings, the level spacing distribution exhibits either a Gaussian or a linear form. Furthermore, for the general case of two arbitrary complex eigenvalues, leaving out the spacings among real eigenvalues and among complex conjugate pairs, we find that the spacing distribution agrees completely with the Wigner distribution for a Poisson process on a plane. The cyclic matrices occur in a wide variety of physical situations, including disordered linear atomic chains and the Ising model in two dimensions. These exact results are also relevant to two-dimensional statistical mechanics and ν-parametrized quantum chromodynamics. PMID:18851127
Reinelt, Douglas A.; van Swol, Frank B.; Kraynik, Andrew Michael
2004-06-01
The Surface Evolver was used to compute the equilibrium microstructure of dry soap foams with random structure and a wide range of cell-size distributions. Topological and geometric properties of foams and individual cells were evaluated. The theory for isotropic Plateau polyhedra describes the dependence of cell geometric properties on their volume and number of faces. The surface area of all cells is about 10% greater than a sphere of equal volume; this leads to a simple but accurate theory for the surface free energy density of foam. A novel parameter based on the surface-volume mean bubble radius R32 is used to characterize foam polydispersity. The foam energy, total cell edge length, and average number of faces per cell all decrease with increasing polydispersity. Pentagonal faces are the most common in monodisperse foam but quadrilaterals take over in highly polydisperse structures.
CONTOURING RANDOMLY SPACED DATA
NASA Technical Reports Server (NTRS)
Hamm, R. W.
1994-01-01
This program prepares contour plots of three-dimensional randomly spaced data. The contouring techniques use a triangulation procedure developed by Dr. C. L. Lawson of the Jet Propulsion Laboratory which allows the contouring of randomly spaced input data without first fitting the data into a rectangular grid. The program also allows contour points to be fitted with a smooth curve using an interpolating spline under tension. The input data points to be contoured are read from a magnetic tape or disk file with one record for each data point. Each record contains the X and Y coordinates, the value to be contoured, and an alternate contour value (if applicable). The contour data is then partitioned by the program to reduce core storage requirements. Output consists of the contour plots and user messages. Several output options are available to the user, such as: controlling which value in the data record is to be contoured, whether contours are drawn by polygonal lines or by a spline under tension (smooth curves), and controlling the contour level labels, which may be suppressed if desired. The program can handle up to 56,000 data points and provides up to 20 contour intervals for multiple parameters. This program was written in FORTRAN IV for implementation on a CDC 6600 computer using CALCOMP plotting capabilities. The field length required is dependent upon the number of data points to be contoured. The program requires 42K octal storage locations plus the larger of: 24 times the maximum number of points in each data partition (defaults to a maximum of 1000 data points in each partition with 20 percent overlap) or 2K plus four times the total number of points to be plotted. This program was developed in 1975.
Adaptive Random Testing with Combinatorial Input Domain
Lu, Yansheng
2014-01-01
Random testing (RT) is a fundamental testing technique to assess software reliability, by simply selecting test cases in a random manner from the whole input domain. As an enhancement of RT, adaptive random testing (ART) has better failure-detection capability and has been widely applied in different scenarios, such as numerical programs, some object-oriented programs, and mobile applications. However, not much work has been done on the effectiveness of ART for the programs with combinatorial input domain (i.e., the set of categorical data). To extend the ideas to the testing for combinatorial input domain, we have adopted different similarity measures that are widely used for categorical data in data mining and have proposed two similarity measures based on interaction coverage. Then, we propose a new version named ART-CID as an extension of ART in combinatorial input domain, which selects an element from categorical data as the next test case such that it has the lowest similarity against already generated test cases. Experimental results show that ART-CID generally performs better than RT, with respect to different evaluation metrics. PMID:24772036
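The core ART step, preferring candidates that are far from everything already executed, can be sketched for categorical inputs. The sketch below uses plain Hamming distance as the dissimilarity and an invented input domain; the paper's own similarity measures, including the interaction-coverage-based ones, are more elaborate.

```python
import random

def hamming(a, b):
    """Dissimilarity between two categorical test cases (tuples)."""
    return sum(x != y for x, y in zip(a, b))

def next_test(executed, domain, k, rng):
    """One adaptive-random-testing step: among k random candidates, pick
    the one whose minimum distance to the executed tests is largest."""
    candidates = [tuple(rng.choice(vals) for vals in domain) for _ in range(k)]
    return max(candidates, key=lambda c: min(hamming(c, e) for e in executed))

# hypothetical categorical input domain with three parameters
domain = [["red", "green", "blue"], ["small", "large"], ["on", "off"]]
rng = random.Random(7)
executed = [("red", "small", "on")]
for _ in range(5):
    executed.append(next_test(executed, domain, k=10, rng=rng))
```

Plain random testing would draw each test independently; the extra max-min step spreads the tests across the combinatorial input domain.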
How random are random numbers generated using photons?
NASA Astrophysics Data System (ADS)
Solis, Aldo; Angulo Martínez, Alí M.; Ramírez Alarcón, Roberto; Cruz Ramírez, Hector; U'Ren, Alfred B.; Hirsch, Jorge G.
2015-06-01
Randomness is fundamental in quantum theory, with many philosophical and practical implications. In this paper we discuss the concept of algorithmic randomness, which provides a quantitative method to assess the Borel normality of a given sequence of numbers, a necessary condition for it to be considered random. We use Borel normality as a tool to investigate the randomness of ten sequences of bits generated from the differences between detection times of photon pairs generated by spontaneous parametric downconversion. These sequences are shown to fulfil the randomness criteria without difficulties. As deviations from Borel normality for photon-generated random number sequences have been reported in previous work, a strategy to understand these diverging findings is outlined.
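Borel normality is straightforward to test computationally: every k-bit pattern must appear with frequency close to 2^-k. The sketch below uses non-overlapping blocks and the tolerance sqrt(log2(n)/n); both choices are assumptions of this illustration rather than details taken from the paper.

```python
import math, random

def borel_normal(bits, max_k=3):
    """Borel-normality check: for each block length k <= max_k, every
    k-bit pattern must occur with frequency within tol of 2^-k among
    the non-overlapping k-blocks. tol = sqrt(log2(n)/n) is an assumed
    tolerance for this sketch, not necessarily the paper's criterion."""
    n = len(bits)
    tol = math.sqrt(math.log2(n) / n)
    for k in range(1, max_k + 1):
        m = n // k
        counts = {}
        for i in range(m):
            block = bits[i * k:(i + 1) * k]
            counts[block] = counts.get(block, 0) + 1
        for pattern in range(2 ** k):
            p = format(pattern, "0{}b".format(k))
            if abs(counts.get(p, 0) / m - 2.0 ** -k) > tol:
                return False
    return True

rng = random.Random(0)
bits = "".join(str(rng.getrandbits(1)) for _ in range(10000))
ok = borel_normal(bits)          # a decent PRNG stream should pass
bad = borel_normal("01" * 5000)  # a strictly alternating string should not
```

Note that passing such a check is necessary but not sufficient for randomness: the PRNG stream above passes despite being entirely deterministic.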
Quantum random walk polynomial and quantum random walk measure
NASA Astrophysics Data System (ADS)
Kang, Yuanbao; Wang, Caishi
2014-05-01
In the paper, we introduce a quantum random walk polynomial (QRWP), defined as a polynomial sequence that is orthogonal with respect to a quantum random walk measure (QRWM) and whose recurrence-relation parameters satisfy suitable conditions. We first obtain some results on QRWPs and QRWMs, in which case the correspondence between measures and orthogonal polynomial sequences is one-to-one. We show that any measure with respect to which a quantum random walk polynomial sequence is orthogonal is a quantum random walk measure. We next collect some properties of QRWMs; moreover, we extend Karlin and McGregor's representation formula for the transition probabilities of a quantum random walk (QRW) in the interacting Fock space, a result parallel to the CGMV method. Using these findings, we finally obtain some applications of QRWMs, which are of interest in the study of quantum random walks, highlighting the role played by QRWPs and QRWMs.
Pseudo-random number generator for the Sigma 5 computer
NASA Technical Reports Server (NTRS)
Carroll, S. N.
1983-01-01
A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
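The construction described here, a prime modulus with a primitive-root multiplier, is the classic Lehmer generator. Below is a Python sketch using the well-known 31-bit pair m = 2^31 - 1 (a Mersenne prime) and a = 16807 (a primitive root mod m), standing in for the Sigma 5 word-length-specific constants the report derives.

```python
def lehmer(seed, a=16807, m=2**31 - 1):
    """Multiplicative congruential generator x_{n+1} = (a * x_n) mod m.
    With m prime and a a primitive root mod m, the sequence cycles
    through all of 1..m-1 before repeating (full period m - 1)."""
    x = seed % m
    if x == 0:
        raise ValueError("seed must not be congruent to 0 mod m")
    while True:
        x = (a * x) % m
        yield x

g = lehmer(1)
x1 = next(g)  # 16807
x2 = next(g)  # 16807**2 mod (2**31 - 1) = 282475249
```

Because m is prime, every nonzero seed lies on the same full-period cycle, which is why the choice of a primitive root (verified here by Marsaglia's lattice test in the report) matters more than the choice of seed.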
Nature of Random Variation in the Nutrient Composition of Meals
Balintfy, Joseph L.; Prekopa, Andras
1966-01-01
The mathematical formulation of nutrient variation in meals is presented by means of random vectors. The primary sources of nutrient variation in unit portions of menu items are identified and expressed in terms of random food-nutrient, random portion size and random ingredient composition variations. A secondary source of nutrient variation can be traced to the random selection process of combining menu items into individual meals from multiple choice menus. The separate as well as the joint effect of these sources on the total variation of the nutrient content of meals is described with the aid of variance-covariance matrices. The investigation is concluded with the formulation of multivariate probability statements concerning the adequacy of the nutrient content of meals relative to the distribution of the nutrient requirements over a given population. PMID:5971545
Random Test Run Length and Effectiveness
NASA Technical Reports Server (NTRS)
Andrews, James H.; Groce, Alex; Weston, Melissa; Xu, Ru-Gang
2008-01-01
A poorly understood but important factor in many applications of random testing is the selection of a maximum length for test runs. Given a limited time for testing, it is seldom clear whether executing a small number of long runs or a large number of short runs maximizes utility. It is generally expected that longer runs are more likely to expose failures -- which is certainly true with respect to runs shorter than the shortest failing trace. However, longer runs produce longer failing traces, requiring more effort from humans in debugging or more resources for automated minimization. In testing with feedback, increasing ranges for parameters may also cause the probability of failure to decrease in longer runs. We show that the choice of test length dramatically impacts the effectiveness of random testing, and that the patterns observed in simple models and predicted by analysis are useful in understanding effects observed.
Molecular selection in a unified evolutionary sequence
NASA Technical Reports Server (NTRS)
Fox, S. W.
1986-01-01
With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.
Adaptive pumping for spectral control of random lasers
NASA Astrophysics Data System (ADS)
Bachelard, Nicolas; Gigan, Sylvain; Noblin, Xavier; Sebbah, Patrick
2014-06-01
A laser is not necessarily a sophisticated device: pumping an amplifying medium randomly filled with scatterers makes a perfectly viable 'random laser'. The absence of mirrors greatly simplifies laser design, but control over the emission wavelength and directionality is lost, seriously hindering prospects for this otherwise simple laser. Recently, we proposed an approach to tame random lasers, inspired by coherent light control in complex media. Here, we implement this method in an optofluidic random laser where modes are spatially extended and overlap, making individual mode selection impossible, a priori. We show experimentally that control over laser emission can be regained even in this extreme case. By actively shaping the optical pump within the random laser, single-mode operation at any selected wavelength is achieved with spectral selectivity down to 0.06 nm and more than 10 dB side-lobe rejection. This method paves the way towards versatile tunable and controlled random lasers as well as the taming of other laser sources.
Spatially embedded random networks.
Barnett, L; Di Paolo, E; Bullock, S
2007-11-01
Many real-world networks analyzed in modern network theory have a natural spatial element; e.g., the Internet, social networks, neural networks, etc. Yet, aside from a comparatively small number of somewhat specialized and domain-specific studies, the spatial element is mostly ignored and, in particular, its relation to network structure disregarded. In this paper we introduce a model framework to analyze the mediation of network structure by spatial embedding; specifically, we model connectivity as dependent on the distance between network nodes. Our spatially embedded random networks construction is not primarily intended as an accurate model of any specific class of real-world networks, but rather to gain intuition for the effects of spatial embedding on network structure; nevertheless we are able to demonstrate, in a quite general setting, some constraints of spatial embedding on connectivity such as the effects of spatial symmetry, conditions for scale free degree distributions and the existence of small-world spatial networks. We also derive some standard structural statistics for spatially embedded networks and illustrate the application of our model framework with concrete examples. PMID:18233726
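The central modeling move, making connection probability a decreasing function of inter-node distance, is easy to sketch. Here nodes are dropped uniformly in the unit square and pairs are linked with probability exp(-βd); the exponential form is one illustrative connection function, not the only one the paper's framework admits.

```python
import math, random

def spatial_random_graph(n, beta, rng):
    """Spatially embedded random network: n nodes placed uniformly in the
    unit square; each pair (i, j) is connected with probability
    exp(-beta * d_ij), where d_ij is their Euclidean distance."""
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if rng.random() < math.exp(-beta * d):
                edges.append((i, j))
    return pos, edges

rng = random.Random(3)
pos, edges = spatial_random_graph(100, beta=10.0, rng=rng)
```

Setting beta = 0 recovers the ordinary (non-spatial) Erdős-Rényi construction; increasing beta concentrates edges on short distances, which is what produces the spatial structure the paper analyzes.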
Spatially embedded random networks
NASA Astrophysics Data System (ADS)
Barnett, L.; di Paolo, E.; Bullock, S.
2007-11-01
Many real-world networks analyzed in modern network theory have a natural spatial element; e.g., the Internet, social networks, neural networks, etc. Yet, aside from a comparatively small number of somewhat specialized and domain-specific studies, the spatial element is mostly ignored and, in particular, its relation to network structure disregarded. In this paper we introduce a model framework to analyze the mediation of network structure by spatial embedding; specifically, we model connectivity as dependent on the distance between network nodes. Our spatially embedded random networks construction is not primarily intended as an accurate model of any specific class of real-world networks, but rather to gain intuition for the effects of spatial embedding on network structure; nevertheless we are able to demonstrate, in a quite general setting, some constraints of spatial embedding on connectivity such as the effects of spatial symmetry, conditions for scale free degree distributions and the existence of small-world spatial networks. We also derive some standard structural statistics for spatially embedded networks and illustrate the application of our model framework with concrete examples.
Does Random Dispersion Help Survival?
NASA Astrophysics Data System (ADS)
Schinazi, Rinaldo B.
2015-04-01
Many species live in colonies that prosper for a while and then collapse. After the collapse the colony survivors disperse randomly and found new colonies that may or may not make it depending on the new environment they find. We use birth and death chains in random environments to model such a population and to argue that random dispersion is a superior strategy for survival.
Leadership statistics in random structures
NASA Astrophysics Data System (ADS)
Ben-Naim, E.; Krapivsky, P. L.
2004-01-01
The largest component ("the leader") in evolving random structures often exhibits universal statistical properties. This phenomenon is demonstrated analytically for two ubiquitous structures: random trees and random graphs. In both cases, lead changes are rare, as the average number of lead changes increases only quadratically with the logarithm of the system size. As a function of time, the number of lead changes is self-similar. Additionally, the probability that no lead change ever occurs decays exponentially with the average number of lead changes.
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
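A few of RANVAR's seven variates can be re-derived in Python from first principles (inverse-transform sampling for the exponential and triangular, Knuth's product method for the Poisson); this is a sketch of the standard methods, not a port of the BASIC listing.

```python
import math, random

def exponential(rng, rate):
    """Inverse transform: if U ~ Uniform(0,1), then -ln(1-U)/rate ~ Exp(rate)."""
    return -math.log(1.0 - rng.random()) / rate

def triangular(rng, low, mode, high):
    """Inverse transform for the triangular distribution on [low, high]."""
    u = rng.random()
    c = (mode - low) / (high - low)
    if u < c:
        return low + math.sqrt(u * (high - low) * (mode - low))
    return high - math.sqrt((1.0 - u) * (high - low) * (high - mode))

def poisson(rng, lam):
    """Knuth's method: count uniforms until their product drops below e^-lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(11)
exp_mean = sum(exponential(rng, 2.0) for _ in range(20000)) / 20000  # ≈ 0.5
```

All three reduce the problem to uniform variates, which is exactly the structure a program like RANVAR exploits.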
Hosken, David J; House, Clarissa M
2011-01-25
Sexual selection is a concept that has probably been misunderstood and misrepresented more than any other idea in evolutionary biology, confusion that continues to the present day. We are not entirely sure why this is, but sexual politics seems to have played its role, as does a failure to understand what sexual selection is and why it was initially invoked. While in some ways less intuitive than natural selection, sexual selection is conceptually identical to it, and evolution via either mechanism will occur given sufficient genetic variation. Recent claims that sexual selection theory is fundamentally flawed are simply wrong and ignore an enormous body of evidence that provides a bedrock of support for this major mechanism of organic evolution. In fact it is partly due to this solid foundation that current research has largely shifted from documenting whether or not sexual selection occurs, to addressing more complex evolutionary questions. PMID:21256434
ERIC Educational Resources Information Center
Pereus, Steven C.
2002-01-01
Describes a comprehensive computer software selection and evaluation process, including documenting district needs, evaluating software packages, weighing the alternatives, and making the purchase. (PKP)
Mazenko, Gene F
2008-09-01
We study the random diffusion model. This is a continuum model for a conserved scalar density field φ driven by diffusive dynamics. The interesting feature of the dynamics is that the bare diffusion coefficient D is density dependent. In the simplest case, D = D̄ + D₁δφ, where D̄ is the constant average diffusion coefficient. In the case where the driving effective Hamiltonian is quadratic, the model can be treated using perturbation theory in terms of the single nonlinear coupling D₁. We develop perturbation theory to fourth order in D₁. There are two ways of analyzing this perturbation theory. In one approach, developed by Kawasaki, at one-loop order one finds mode-coupling theory with an ergodic-nonergodic transition. An alternative, more direct interpretation at one-loop order leads to a slowing down as the nonlinear coupling increases. Eventually one hits a critical coupling where the time decay becomes algebraic. Near this critical coupling a weak peak develops at a wave number well above the peak at q = 0 associated with the conservation law. The width of this peak in Fourier space decreases with time and can be identified with a characteristic kinetic length which grows with a power law in time. For stronger coupling the system becomes metastable and then unstable. At two-loop order it is shown that the ergodic-nonergodic transition is not supported. It is demonstrated that the critical properties of the direct approach survive, going to higher order in perturbation theory.
A program for contouring randomly spaced data
NASA Technical Reports Server (NTRS)
Hamm, R. W.; Kibler, J. F.; Morris, W. D.
1975-01-01
A description is given of a digital computer program which prepares contour plots of three dimensional data. The contouring technique uses a triangulation procedure. As presently configured, the program can accept up to 56,000 randomly spaced data points, although the required computer resources may be prohibitive. However, with relatively minor internal modifications, the program can handle essentially unlimited amounts of data. Up to 20 contouring intervals can be selected and contoured with either polygonal lines or smooth curves. Sample cases are illustrated. A general description of the main program and primary level subroutines is included to permit simple modifications of the program.
NASA Astrophysics Data System (ADS)
Jung, P.; Talkner, P.
2010-09-01
A simple way to convert a purely random sequence of events into a signal with a strong periodic component is proposed. The signal consists of those instants of time at which the length of the random sequence exceeds an integer multiple of a given number. The larger this number the more pronounced the periodic behavior becomes.
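The construction can be simulated directly: generate a purely random (Poisson) event sequence and record the instants at which the running count crosses successive multiples of N. A Python sketch follows; reading off every N-th event is my paraphrase of the mechanism described in the abstract.

```python
import math, random

def every_nth_event(rate, n_out, N, rng):
    """Poisson events (exponential inter-event times at rate `rate`);
    return the instants at which the running event count reaches
    N, 2N, 3N, ... For large N these instants become nearly periodic:
    mean period N/rate, relative jitter ~ 1/sqrt(N)."""
    t, count, out = 0.0, 0, []
    while len(out) < n_out:
        t += -math.log(1.0 - rng.random()) / rate
        count += 1
        if count % N == 0:
            out.append(t)
    return out

rng = random.Random(5)
times = every_nth_event(rate=1.0, n_out=50, N=100, rng=rng)
```

The gaps between successive output instants are sums of N exponential waiting times, so their relative spread shrinks like 1/sqrt(N): the larger N, the more pronounced the periodicity, just as the abstract states.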
Students' Misconceptions about Random Variables
ERIC Educational Resources Information Center
Kachapova, Farida; Kachapov, Ilias
2012-01-01
This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)
Randomness versus Nonlocality and Entanglement
NASA Astrophysics Data System (ADS)
Acín, Antonio; Massar, Serge; Pironio, Stefano
2012-03-01
The outcomes obtained in Bell tests involving two-outcome measurements on two subsystems can, in principle, generate up to 2 bits of randomness. However, the maximal violation of the Clauser-Horne-Shimony-Holt inequality guarantees the generation of only 1.23 bits of randomness. We prove here that quantum correlations with arbitrarily little nonlocality and states with arbitrarily little entanglement can be used to certify that close to the maximum of 2 bits of randomness are produced. Our results show that nonlocality, entanglement, and randomness are inequivalent quantities. They also imply that device-independent quantum key distribution with an optimal key generation rate is possible by using almost-local correlations and that device-independent randomness generation with an optimal rate is possible with almost-local correlations and with almost-unentangled states.
Exploring number space by random digit generation.
Loetscher, Tobias; Brugger, Peter
2007-07-01
There is some evidence that human subjects preferentially select small numbers when asked to sample numbers from large intervals "at random". A retrospective analysis of single digit frequencies in 16 independent experiments with the Mental Dice Task (generation of digits 1-6 during 1 min) confirmed the occurrence of small-number biases (SNBs) in 488 healthy subjects. A subset of these experiments suggested a spatial nature of this bias in the sense of a "leftward" shift along the number line. First, individual SNBs were correlated with leftward deviations in a number line bisection task (but unrelated to the bisection of physical lines). Second, in 20 men, the magnitude of SNBs significantly correlated with leftward attentional biases in the judgment of chimeric faces. Finally, cognitive activation of the right hemisphere enhanced SNBs in 20 different men, while left hemisphere activation reduced them. Together, these findings provide support for a spatial component in random number generation. Specifically, they allow an interpretation of SNBs in terms of "pseudoneglect in number space." We recommend the use of random digit generation for future explorations of spatial-attentional asymmetries in numerical processing and discuss methodological issues relevant to prospective designs.
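The bias measure itself is straightforward to compute. A sketch comparing digit frequencies against the uniform expectation follows; the generated digits stand in for human Mental Dice responses, and the skew applied below is purely illustrative:

```python
import random

def small_number_bias(digits):
    """Excess fraction of 'small' responses (1-3) over the 0.5 expected
    when digits 1-6 are produced uniformly at random. Positive values
    indicate a small-number bias (SNB)."""
    small = sum(1 for d in digits if d <= 3)
    return small / len(digits) - 0.5
```

An unbiased generator scores near zero; a human-like sample over-producing 1-3 scores positive.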
KASER: Knowledge Amplification by Structured Expert Randomization.
Rubin, Stuart H; Murthy, S N Jayaram; Smith, Michael H; Trajković, Ljiljana
2004-12-01
In this paper and attached video, we present a third-generation expert system named Knowledge Amplification by Structured Expert Randomization (KASER) for which a patent has been filed by the U.S. Navy's SPAWAR Systems Center, San Diego, CA (SSC SD). KASER is a creative expert system. It is capable of deductive, inductive, and mixed derivations. Its qualitative creativity is realized by using a tree-search mechanism. The system achieves creative reasoning by using a declarative representation of knowledge consisting of object trees and inheritance. KASER computes with words and phrases. It possesses a capability for metaphor-based explanations. This capability is useful in explaining its creative suggestions and serves to augment the capabilities provided by the explanation subsystems of conventional expert systems. KASER also exhibits an accelerated capability to learn. However, this capability depends on the particulars of the selected application domain. For example, application domains such as the game of chess exhibit a high degree of geometric symmetry. Conversely, application domains such as the game of craps played with two dice exhibit no predictable pattern, unless the dice are loaded. More generally, we say that domains whose informative content can be compressed to a significant degree without loss (or with relatively little loss) are symmetric. Incompressible domains are said to be asymmetric or random. The measure of symmetry plus the measure of randomness must always sum to unity.
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate...
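In software, the random-number-table step might look like the following sketch. The portion labels are hypothetical; `random.SystemRandom` draws from the operating system's entropy pool, playing the role of the random number generator or table:

```python
import random

# Hypothetical labels for 100 gram portions prepared per the cited procedures
portions = [f"portion_{i:02d}" for i in range(1, 21)]

# OS entropy source, standing in for a random number table
rng = random.SystemRandom()
selected = rng.choice(portions)
```

Each portion has the same 1/20 chance of selection, matching the equal-probability requirement of simple random sampling.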
NASA Astrophysics Data System (ADS)
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-09-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle such large sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
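The paper's extractor targets big sources, but the underlying idea of distilling uniform bits from biased data can be illustrated with the classic von Neumann extractor, a much simpler scheme than the construction described above:

```python
def von_neumann_extract(bits):
    """Map non-overlapping pairs 01 -> 0 and 10 -> 1, discarding 00 and 11.
    For i.i.d. input bits with any fixed bias p, the two kept patterns are
    equally likely, so the output is exactly unbiased."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

The cost is rate: a fraction 2p(1-p) of pairs survive, and the scheme assumes independent bits — precisely the kind of statistical assumption the paper's method is designed to avoid.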
Random sequential adsorption on fractals.
Ciesla, Michal; Barbasz, Jakub
2012-07-28
Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski triangle and carpet-like fractals (1 < d < 2), and on a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on measuring fundamental properties of the coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as RSA kinetics. The results allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, the simulations show that, in general, most of the known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions.
A random forest classifier for lymph diseases.
Azar, Ahmad Taher; Elshazly, Hanaa Ismail; Hassanien, Aboul Ella; Elkorany, Abeer Mohamed
2014-02-01
Machine learning-based classification techniques provide support for the decision-making process in many areas of health care, including diagnosis, prognosis, screening, etc. Feature selection (FS) is expected to improve classification performance, particularly in situations characterized by the high data dimensionality problem caused by relatively few training examples compared to a large number of measured features. In this paper, a random forest classifier (RFC) approach is proposed to diagnose lymph diseases. Focusing on feature selection, the first stage of the proposed system aims at constructing diverse feature selection algorithms such as genetic algorithm (GA), Principal Component Analysis (PCA), Relief-F, Fisher, Sequential Forward Floating Search (SFFS) and the Sequential Backward Floating Search (SBFS) for reducing the dimension of lymph diseases dataset. Switching from feature selection to model construction, in the second stage, the obtained feature subsets are fed into the RFC for efficient classification. It was observed that GA-RFC achieved the highest classification accuracy of 92.2%. The dimension of input feature space is reduced from eighteen to six features by using GA. PMID:24290902
Optimizing reproducibility evaluation for random amplified polymorphic DNA markers.
Ramos, J R; Telles, M P C; Diniz-Filho, J A F; Soares, T N; Melo, D B; Oliveira, G
2008-01-01
The random amplified polymorphic DNA (RAPD) technique is often criticized because it usually shows low levels of repeatability; thus it can generate spurious bands. These problems can be partially overcome by rigid laboratory protocols and by performing repeatability tests. However, because it is expensive and time-consuming to obtain genetic data twice for all individuals, a few randomly chosen individuals are usually selected for a priori repeatability analysis, introducing a potential bias in genetic parameter estimates. We developed a procedure to optimize repeatability analysis based on RAPD data, which was applied to evaluate genetic variability in three local populations of Tibochina papyrus, an endemic Cerrado plant found in elevated rocky fields in Brazil. We used a simulated annealing procedure to select the smallest number of individuals that together contain all bands and repeated the analyses only for those bands that were reproduced in these individuals. We compared genetic parameter estimates from the HICKORY and POPGENE software on an unreduced data set and on data sets in which we eliminated bands based on the repeatability of individuals selected by simulated annealing and based on three randomly selected individuals. Genetic parameter estimates were very similar when we used the optimization procedure to reduce the number of bands analyzed, but as expected, selecting only three individuals to evaluate the repeatability of bands produced very different estimates. We conclude that the problems of repeatability attributed to RAPD markers could be due to bias in the selection of loci and primers and not necessarily to the RAPD technique per se. PMID:19065774
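The optimization step can be approximated with a greedy set cover — a simpler stand-in for the simulated annealing search the authors use: pick individuals until every band observed in the population is represented at least once.

```python
def minimal_band_cover(bands_by_individual):
    """Greedy set cover: repeatedly choose the individual contributing the
    most not-yet-covered bands, until all bands in the population are
    represented. Returns the chosen individuals in selection order."""
    universe = set().union(*bands_by_individual.values())
    chosen, covered = [], set()
    while covered != universe:
        best = max(bands_by_individual,
                   key=lambda ind: len(bands_by_individual[ind] - covered))
        chosen.append(best)
        covered |= bands_by_individual[best]
    return chosen
```

Greedy cover is not guaranteed minimal (hence the authors' annealing), but it captures the idea of re-genotyping only a small covering subset rather than three arbitrary individuals.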
Selected Health Practices Among Ohio's Rural Residents.
ERIC Educational Resources Information Center
Phillips, G. Howard; Pugh, Albert
Using a stratified random sample of 12 of Ohio's 88 counties, this 1967 study had as its objectives (1) to measure the level of participation in selected health practices by Ohio's rural residents, (2) to compare the level of participation in selected health practices of farm and rural nonfarm residents, and (3) to examine levels of participation…
Record statistics of financial time series and geometric random walks.
Sabir, Behlool; Santhanam, M S
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the records statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The records statistics of geometric random walk series is in good agreement with that obtained from empirical stock data.
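Both ingredients of the study are easy to reproduce in miniature. The sketch below simulates a geometric random walk with assumed Gaussian returns and extracts the ages of its upper records (the quantities whose distribution the paper finds to be a power law):

```python
import random

def geometric_random_walk(n, sigma=0.02, seed=None):
    """Multiplicative walk x_{t+1} = x_t * (1 + r_t), r_t ~ N(0, sigma),
    a standard toy model for stock prices."""
    rng = random.Random(seed)
    x, path = 1.0, []
    for _ in range(n):
        x *= 1.0 + rng.gauss(0.0, sigma)
        path.append(x)
    return path

def record_ages(series):
    """Times elapsed between successive upper records of the series."""
    ages, last_t, best = [], 0, float("-inf")
    for t, x in enumerate(series):
        if x > best:
            if t > 0:
                ages.append(t - last_t)
            best, last_t = x, t
    return ages
```

Collecting `record_ages` over many simulated walks and histogramming gives the age distribution whose tail exponent the paper estimates at 1.5 ≤ α ≤ 1.8.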
Walter, Scott R; Rose, Nectarios
2013-09-01
Allocating an incomplete address to randomly selected property coordinates within a locality, known as random property allocation, has many advantages over other geoimputation techniques. We compared the performance of random property allocation to four other methods under various conditions using a simulation approach. All methods performed well for large spatial units, but random property allocation was the least prone to bias and error under volatile scenarios with small units and low prevalence. Both its coordinate based approach as well as the random process of assignment contribute to its increased accuracy and reduced bias in many scenarios. Hence it is preferable to fixed or areal geoimputation for many epidemiological and surveillance applications.
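A minimal sketch of random property allocation follows; the localities and coordinates are hypothetical, and the real method would draw from a cadastral list of property coordinates within each locality:

```python
import random

def random_property_allocation(records, properties, seed=None):
    """Assign each record (geocodable only to locality level) the coordinates
    of a randomly selected property within that locality."""
    rng = random.Random(seed)
    return {rec: rng.choice(properties[loc]) for rec, loc in records.items()}
```

Because each record lands on an actual property coordinate rather than a locality centroid, aggregate spatial statistics are less distorted than under fixed-point imputation — the advantage the simulation study quantifies.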
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
A random spatial sampling method in a rural developing nation
2014-01-01
Background: Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. Methods: We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. Results: This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Conclusions: Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available. PMID:24716473
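The point-selection step can be sketched without GIS software: draw uniform random survey points inside a bounding box. This is a simplified, unstratified version of the method; the coordinate bounds below are hypothetical.

```python
import random

def random_survey_points(n, lon_range, lat_range, seed=None):
    """Draw n uniform random points inside a lon/lat bounding box. In the
    field, the household nearest each point would be approached for survey."""
    rng = random.Random(seed)
    return [(rng.uniform(*lon_range), rng.uniform(*lat_range))
            for _ in range(n)]
```

The full method additionally stratifies by administrative boundary and excludes points falling outside the study region, which is where the boundary-delineation and imagery-translation challenges noted in the abstract arise.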
Jijakli, Kenan; Khraiwesh, Basel; Fu, Weiqi; Luo, Liming; Alzahmi, Amnah; Koussa, Joseph; Chaiboonchoe, Amphun; Kirmizialtin, Serdal; Yen, Laising; Salehi-Ashtiani, Kourosh
2016-08-15
Through iterative cycles of selection, amplification, and mutagenesis, in vitro selection provides the ability to isolate molecules of desired properties and function from large pools (libraries) of random molecules with as many as 10^16 distinct species. This review, in recognition of a quarter century of scientific discoveries made through in vitro selection, starts with a brief overview of the method and its history. It further covers recent developments in in vitro selection with a focus on tools that enhance the capabilities of in vitro selection and its expansion from being purely a nucleic acid selection method to one encompassing polypeptides and proteins. In addition, we cover how next-generation sequencing and modern biological computational tools are being used to complement in vitro selection experiments. At the very least, sequencing and computational tools can translate the large volume of information associated with in vitro selection experiments into manageable, analyzable, and exploitable information. Finally, in vivo selection is briefly compared and contrasted to in vitro selection to highlight the unique capabilities of each method. PMID:27312879
Control theory for random systems
NASA Technical Reports Server (NTRS)
Bryson, A. E., Jr.
1972-01-01
A survey is presented of the current knowledge available for designing and predicting the effectiveness of controllers for dynamic systems which can be modeled by ordinary differential equations. A short discussion of feedback control is followed by a description of deterministic controller design and the concept of system state. The need for more realistic disturbance models led to the use of stochastic process concepts, in particular the Gauss-Markov process. A compensator controlled system, with random forcing functions, random errors in the measurements, and random initial conditions, is treated as constituting a Gauss-Markov random process; hence the mean-square behavior of the controlled system is readily predicted. As an example, a compensator is designed for a helicopter to maintain it in hover in a gusty wind over a point on the ground.
Quantifying randomness in real networks
NASA Astrophysics Data System (ADS)
Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-10-01
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
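The simplest member of the dk-family, a 1k-random graph, preserves only the degree sequence. A sketch via degree-preserving double-edge swaps follows; this is a basic stand-in for the dk-series software the authors release, covering only the d=1 level:

```python
import random
from collections import Counter

def degree_preserving_rewire(edges, n_swaps, seed=None):
    """Randomize a simple undirected graph while preserving every node's
    degree (the 1k level of the dk-series): swap the endpoints of two random
    edges, rejecting swaps that create self-loops or duplicate edges."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    present = set(edges)
    done = tries = 0
    while done < n_swaps and tries < 100 * n_swaps:
        tries += 1
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue
        e1, e2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        if e1 in present or e2 in present:
            continue
        present -= {edges[i], edges[j]}
        present |= {e1, e2}
        edges[i], edges[j] = e1, e2
        done += 1
    return edges
```

Higher dk levels additionally fix degree correlations (2k) and clustering (2.5k), which is what the paper finds sufficient to reproduce many real-network properties.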
Diffraction by random Ronchi gratings.
Torcal-Milla, Francisco Jose; Sanchez-Brea, Luis Miguel
2016-08-01
In this work, we obtain analytical expressions for the near- and far-field diffraction of random Ronchi diffraction gratings, where the slits of the grating are randomly displaced around their periodical positions. We show theoretically that randomness in the positions of the slits decreases the contrast of the self-images in the near field, and even makes them disappear at high randomness levels. In the far field, it cancels high-order harmonics, leaving only a few central diffraction orders. Numerical simulations by means of the Rayleigh-Sommerfeld diffraction formula are performed in order to corroborate the analytical results. These results are of interest for industrial and technological applications where manufacturing errors need to be considered.
Cluster randomization and political philosophy.
Chwang, Eric
2012-11-01
In this paper, I will argue that, while the ethical issues raised by cluster randomization can be challenging, they are not new. My thesis divides neatly into two parts. In the first, easier part I argue that many of the ethical challenges posed by cluster randomized human subjects research are clearly present in other types of human subjects research, and so are not novel. In the second, more difficult part I discuss the thorniest ethical challenge for cluster randomized research--cases where consent is genuinely impractical to obtain. I argue that once again these cases require no new analytic insight; instead, we should look to political philosophy for guidance. In other words, the most serious ethical problem that arises in cluster randomized research also arises in political philosophy.
Quantum entanglement from random measurements
NASA Astrophysics Data System (ADS)
Tran, Minh Cong; Dakić, Borivoje; Arnault, François; Laskowski, Wiesław; Paterek, Tomasz
2015-11-01
We show that the expectation value of squared correlations measured along random local directions is an identifier of quantum entanglement in pure states, which can be directly experimentally assessed if two copies of the state are available. Entanglement can therefore be detected by parties who do not share a common reference frame and whose local reference frames, such as polarizers or Stern-Gerlach magnets, remain unknown. Furthermore, we also show that in every experimental run, access to only one qubit from the macroscopic reference is sufficient to identify entanglement, violate a Bell inequality, and, in fact, observe all phenomena observable with macroscopic references. Finally, we provide a state-independent entanglement witness solely in terms of random correlations and emphasize how data gathered for a single random measurement setting per party reliably detects entanglement. This is only possible due to utilized randomness and should find practical applications in experimental confirmation of multiphoton entanglement or space experiments.
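The key quantity is the average of squared correlations over random local measurement directions. For the two-qubit singlet, E(a,b) = -a·b, so this average tends to 1/3, whereas a product state gives at most 1/9. A numerical sketch of the singlet value (an illustration of the statistic, not of the paper's witness construction):

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on the sphere via normalized Gaussian components."""
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-12:
            return [x / norm for x in v]

def mean_squared_singlet_correlation(n_settings, seed=0):
    """Average E(a,b)^2 over random local directions a, b for the singlet
    state, where E(a,b) = -a.b; the average converges to 1/3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_settings):
        a, b = random_unit_vector(rng), random_unit_vector(rng)
        e = -sum(x * y for x, y in zip(a, b))
        total += e * e
    return total / n_settings
```

The gap between 1/3 and 1/9 is what makes the squared-correlation average usable as a reference-frame-free entanglement identifier.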
Quasi-Random Sequence Generators.
1994-03-01
Version 00 LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L=2**30 points in the N-dimensional unit cube: I**N=[0,1]. The sequences are used as nodes for multidimensional integration, as searching points in global optimization, as trial points in multicriteria decision making, as quasi-random points for quasi Monte Carlo algorithms.
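LPτ (Sobol-type) sequences themselves are not reproduced here, but the flavor of quasi-random nodes can be shown with the closely related Halton sequence, which also fills the unit cube far more evenly than pseudo-random points:

```python
def radical_inverse(index, base):
    """Van der Corput radical inverse of `index` in the given base:
    reflect the base-b digits of the index about the radix point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_2d(n):
    """First n points of the 2-D Halton sequence (coprime bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, n + 1)]
```

Used as integration nodes, such low-discrepancy points give errors of order (log n)^d / n instead of the n^{-1/2} of plain Monte Carlo; for example, averaging f(x,y) = xy over the points estimates the exact integral 1/4.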
Staggered chiral random matrix theory
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
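The chiral structure at the heart of such theories is easy to exhibit numerically: a block off-diagonal Hermitian matrix H = [[0, W], [W†, 0]] has a spectrum symmetric about zero, mimicking the Dirac operator. The sketch below is a generic chiral ensemble, not the staggered construction with taste-breaking terms:

```python
import numpy as np

def chiral_spectrum(n, seed=0):
    """Eigenvalues of H = [[0, W], [W^T, 0]] for a real Gaussian n-by-n
    block W. Chiral symmetry ({H, gamma_5} = 0 with gamma_5 = diag(I, -I))
    forces the eigenvalues to come in +/- pairs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n, n))
    zeros = np.zeros((n, n))
    h = np.block([[zeros, w], [w.T, zeros]])
    return np.linalg.eigvalsh(h)   # ascending order
```

The nonzero eigenvalues are ± the singular values of W; taste-breaking terms in the staggered theory perturb this block structure at their leading order.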
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
Randomness and degrees of irregularity.
Pincus, S; Singer, B H
1996-01-01
The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness--e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity: ApEn (approximate entropy), which defines maximal randomness for sequences of arbitrary length and is applicable to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory, and the definition of normality in number theory, reduce to limit theorems without rates of convergence, from which we utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates. PMID:11607637
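A direct implementation of ApEn for finite sequences illustrates the regularity gradation the abstract describes: a strictly periodic sequence scores near zero, while a pseudo-random one scores high. Parameter values (m, r) below are illustrative.

```python
import math

def approximate_entropy(series, m, r):
    """ApEn(m, r) = phi(m) - phi(m+1), where phi(k) averages the log-fraction
    of length-k templates lying within tolerance r (max-norm) of each
    template. Lower values indicate greater regularity."""
    n = len(series)
    def phi(k):
        templates = [series[i:i + k] for i in range(n - k + 1)]
        total = 0.0
        for t1 in templates:
            matches = sum(1 for t2 in templates
                          if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            total += math.log(matches / len(templates))
        return total / len(templates)
    return phi(m) - phi(m + 1)
```

Because every template matches itself, ApEn is always finite, which is what makes it usable for sequences as short as N = 5.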
Random and non-random mating populations: Evolutionary dynamics in meiotic drive.
Sarkar, Bijan
2016-01-01
Game theoretic tools are utilized to analyze a one-locus continuous selection model of sex-specific meiotic drive by considering nonequivalence of the viabilities of reciprocal heterozygotes that might be noticed at an imprinted locus. The model draws attention to the role of viability selections of different types to examine the stable nature of polymorphic equilibrium. A bridge between population genetics and evolutionary game theory has been built up by applying the concept of the Fundamental Theorem of Natural Selection. In addition to pointing out the influences of male and female segregation ratios on selection, the configuration structure reveals some noted results, e.g., Hardy-Weinberg frequencies hold in replicator dynamics, faster evolution occurs at maximized variance fitness, mixed Evolutionarily Stable Strategies (ESS) exist in asymmetric games, and evolution tends to follow not only a 1:1 sex ratio but also a 1:1 ratio of different alleles at a particular gene locus. Through construction of replicator dynamics in the group selection framework, our selection model introduces a redefined basis for game theory to incorporate non-random mating, where a mating parameter associated with population structure depends on the social structure. The model also exposes the fact that the number of polymorphic equilibria depends on the algebraic expression of the population structure. PMID:26524140
Experimental evidence of quantum randomness incomputability
Calude, Cristian S.; Dinneen, Michael J.; Dumitrescu, Monica; Svozil, Karl
2010-08-15
In contrast with software-generated randomness (called pseudo-randomness), quantum randomness can be proven incomputable; that is, it is not exactly reproducible by any algorithm. We provide experimental evidence of incomputability--an asymptotic property--of quantum randomness by performing finite tests of randomness inspired by algorithmic information theory.
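The finite tests the authors performed are based on algorithmic information theory (Borel normality). As a much simpler illustration of the same idea, a computable test that any putative random sequence must pass, here is a frequency ("monobit") test sketch; the function names and the 3-sigma cutoff are our illustrative choices, not the authors' actual battery.

```python
import math

def monobit_statistic(bits):
    """Normalized excess of ones; approximately standard normal
    for long sequences from an ideal fair coin."""
    s = sum(1 if b else -1 for b in bits)
    return s / math.sqrt(len(bits))

def looks_random(bits, z_crit=3.0):
    """A sequence whose one/zero balance deviates wildly fails this
    (deliberately weak) necessary condition for randomness."""
    return abs(monobit_statistic(bits)) < z_crit

print(looks_random([0, 1] * 500))   # True: perfectly balanced
print(looks_random([1] * 1000))     # False: all ones
```

Passing such a test is of course only necessary, never sufficient: the balanced-but-periodic sequence above passes, which is why the authors need a whole family of incomputability-motivated tests rather than one statistic.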
Tucker, W.G.
1990-08-01
The paper summarizes various criteria that can be used in selecting material used in the construction, furnishing, maintenance, and operation of a building. It also summarizes the types of material and product testing that can be especially useful in the selection process. In broad terms, materials in buildings are classified as building materials, furnishings, maintenance materials, and other contents. At any given time, emissions from materials and products in any of these four categories can dominate the impact on indoor air quality (IAQ) in a building. Responsibility for the selection of these materials may be that of the designer, owner, or occupants. Information on IAQ impacts of materials therefore needs to be developed for a wide range of people.
Phase Transitions on Random Lattices: How Random is Topological Disorder?
NASA Astrophysics Data System (ADS)
Barghathi, Hatem; Vojta, Thomas
2015-03-01
We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω = (d - 1) / (2 d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d + 1) ν > 2 rather than the usual Harris criterion dν > 2 , making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d > 1 . These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. This work was supported by the NSF under Grant Nos. DMR-1205803 and PHYS-1066293. We acknowledge the hospitality of the Aspen Center for Physics.
Virial expansion for almost diagonal random matrices
NASA Astrophysics Data System (ADS)
Yevtushenko, Oleg; Kravtsov, Vladimir E.
2003-08-01
Energy level statistics of Hermitian random matrices hat H with Gaussian independent random entries Higeqj is studied for a generic ensemble of almost diagonal random matrices with langle|Hii|2rangle ~ 1 and langle|Hi\
Full randomness from arbitrarily deterministic events
NASA Astrophysics Data System (ADS)
Gallego, Rodrigo; Masanes, Lluis; de la Torre, Gonzalo; Dhara, Chirag; Aolita, Leandro; Acín, Antonio
2013-10-01
Do completely unpredictable events exist? Classical physics excludes fundamental randomness. Although quantum theory makes probabilistic predictions, this does not imply that nature is random, as randomness should be certified without relying on the complete structure of the theory being used. Bell tests approach the question from this perspective. However, they require prior perfect randomness, falling into circular reasoning. A Bell test that generates perfect random bits from bits possessing high—but less than perfect—randomness has recently been obtained. Yet, the main question remained open: does any initial randomness suffice to certify perfect randomness? Here we show that this is indeed the case. We provide a Bell test that uses arbitrarily imperfect random bits to produce bits that are, under the non-signalling principle assumption, perfectly random. This provides the first protocol attaining full randomness amplification. Our results have strong implications for the debate over whether there exist events that are fully random.
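For contrast with the device-independent amplification described above: the classical von Neumann trick below turns biased bits into unbiased ones, but only under the strong assumption that the input bits are independent with a constant bias, exactly the kind of assumption the paper's protocol avoids. A minimal sketch (names ours):

```python
def von_neumann_extract(bits):
    """Process bits in non-overlapping pairs: 01 -> 0, 10 -> 1,
    discard 00 and 11. If inputs are independent with a fixed bias p,
    both surviving outcomes have probability p(1-p): exactly unbiased."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

biased = [1, 1, 0, 1, 1, 0, 1, 1] * 3   # a source leaning heavily toward 1
print(von_neumann_extract(biased))      # → [0, 1, 0, 1, 0, 1]
```

If the source bits are correlated (adversarially or otherwise), this classical post-processing gives no guarantee at all, which is why certifying randomness from arbitrarily weak sources requires the Bell-test machinery of the paper.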
On Convergent Probability of a Random Walk
ERIC Educational Resources Information Center
Lee, Y.-F.; Ching, W.-K.
2006-01-01
This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrence relations is used to obtain the convergent probability of the random walk with different initial positions.
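The method of recurrence relations can be sketched concretely. For a fair walk on sites 0..N with absorbing ends, the absorption probabilities satisfy p_i = (p_{i-1} + p_{i+1})/2; the relaxation solver below is our illustrative choice, not the note's actual derivation:

```python
def absorption_probability(n_sites, start, sweeps=5000):
    """Fair walk on sites 0..n_sites with absorbing endpoints. The
    probability p_i of being absorbed at the right end satisfies the
    recurrence p_i = (p_(i-1) + p_(i+1)) / 2, solved here by relaxation."""
    p = [0.0] * (n_sites + 1)
    p[n_sites] = 1.0
    for _ in range(sweeps):
        for i in range(1, n_sites):
            p[i] = 0.5 * (p[i - 1] + p[i + 1])
    return p[start]

# Closed form for the fair walk: p_i = i / N.
print(absorption_probability(10, 3))   # converges to 0.3
```

The linear dependence on the initial position, p_i = i/N, is exactly the kind of closed form the recurrence method yields.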
Wave propagation through a random medium - The random slab problem
NASA Technical Reports Server (NTRS)
Acquista, C.
1978-01-01
The first-order smoothing approximation yields integral equations for the mean and the two-point correlation function of a wave in a random medium. A method is presented for the approximate solution of these equations that combines features of the eiconal approximation and of the Born expansion. This method is applied to the problem of reflection and transmission of a plane wave by a slab of a random medium. Both the mean wave and the covariance are calculated to determine the reflected and transmitted amplitudes and intensities.
Cover times of random searches
NASA Astrophysics Data System (ADS)
Chupeau, Marie; Bénichou, Olivier; Voituriez, Raphaël
2015-10-01
How long must one undertake a random search to visit all sites of a given domain? This time, known as the cover time, is a key observable to quantify the efficiency of exhaustive searches, which require a complete exploration of an area and not only the discovery of a single target. Examples range from immune-system cells chasing pathogens to animals harvesting resources, from robotic exploration for cleaning or demining to the task of improving search algorithms. Despite its broad relevance, the cover time has remained elusive and so far explicit results have been scarce and mostly limited to regular random walks. Here we determine the full distribution of the cover time for a broad range of random search processes, including Lévy strategies, intermittent strategies, persistent random walks and random walks on complex networks, and reveal its universal features. We show that for all these examples the mean cover time can be minimized, and that the corresponding optimal strategies also minimize the mean search time for a single target, unambiguously pointing towards their robustness.
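As a toy illustration of the cover-time observable, one can estimate its mean for a simple random walk on a small cycle by direct simulation; the parameters and names below are ours, and the paper's results are analytic and far more general:

```python
import random

def cover_time_cycle(n, rng):
    """Number of steps until a simple random walk on an n-cycle
    has visited every one of the n sites at least once."""
    pos, visited, steps = 0, {0}, 0
    while len(visited) < n:
        pos = (pos + rng.choice((-1, 1))) % n
        visited.add(pos)
        steps += 1
    return steps

rng = random.Random(0)
trials = [cover_time_cycle(10, rng) for _ in range(2000)]
mean = sum(trials) / len(trials)
print(mean)   # the exact mean cover time of the n-cycle is n(n-1)/2 = 45 here
```

Replacing the ±1 step with a heavy-tailed (Lévy) or persistent step rule in this loop is the kind of comparison the full distributional theory of the paper makes rigorous.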
Random walk through fractal environments.
Isliker, H; Vlahos, L
2003-02-01
We analyze random walk through fractal environments, embedded in three-dimensional, permeable space. Particles travel freely and are scattered off into random directions when they hit the fractal. The statistical distribution of the flight increments (i.e., of the displacements between two consecutive hittings) is analytically derived from a common, practical definition of fractal dimension, and it turns out to approximate quite well a power law in the case where the dimension D(F) of the fractal is less than 2; there is, though, always a finite rate of unaffected escape. Random walks through fractal sets with D(F)< or =2 can thus be considered as defective Levy walks. The distribution of jump increments for D(F)>2 decays exponentially. The diffusive behavior of the random walk is analyzed in the frame of continuous time random walk, which we generalize to include the case of defective distributions of walk increments. It is shown that the particles undergo anomalous, enhanced diffusion for D(F)<2, where the diffusion is dominated by the finite escape rate. Diffusion for D(F)>2 is normal for large times, though enhanced for small and intermediate times. In particular, it follows that fractals generated by a particular class of self-organized criticality models give rise to enhanced diffusion. The analytical results are illustrated by Monte Carlo simulations.
Random root movements in weightlessness
NASA Technical Reports Server (NTRS)
Johnsson, A.; Karlsson, C.; Iversen, T. H.; Chapman, D. K.
1996-01-01
The dynamics of root growth was studied in weightlessness. In the absence of the gravitropic reference direction during weightlessness, root movements could be controlled by spontaneous growth processes, without any corrective growth induced by the gravitropic system. If truly random in nature, the bending behavior should follow so-called 'random walk' mathematics during weightlessness. Predictions from this hypothesis were critically tested. In a Spacelab ESA-experiment, denoted RANDOM and carried out during the IML-2 Shuttle flight in July 1994, the growth of garden cress (Lepidium sativum) roots was followed by time lapse photography at 1-h intervals. The growth pattern was recorded for about 20 h. Root growth was significantly smaller in weightlessness as compared to gravity (control) conditions. It was found that the roots performed spontaneous movements in weightlessness. The average direction of deviation of the plants consistently stayed equal to zero, despite these spontaneous movements. The average squared deviation increased linearly with time as predicted theoretically (but only for 8-10 h). Autocorrelation calculations showed that bendings of the roots, as determined from the 1-h photographs, were uncorrelated after about a 2-h interval. It is concluded that random processes play an important role in root growth. Predictions from a random walk hypothesis as to the growth dynamics could explain parts of the growth patterns recorded. This test of the hypothesis required microgravity conditions as provided for in a space experiment.
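The random-walk prediction tested in the RANDOM experiment, that the average squared deviation grows linearly in time while the average deviation stays near zero, is easy to reproduce numerically. A sketch under the idealized ±1-step assumption (ensemble size and step rule are ours):

```python
import random

def msd_curve(n_walkers, n_steps, rng):
    """Mean squared displacement of an ensemble of unbiased +/-1 walks;
    for a true random walk <x^2(t)> grows linearly: <x^2(t)> = t."""
    walkers = [0] * n_walkers
    curve = []
    for _ in range(n_steps):
        walkers = [x + rng.choice((-1, 1)) for x in walkers]
        curve.append(sum(x * x for x in walkers) / n_walkers)
    return curve

rng = random.Random(1)
msd = msd_curve(5000, 20, rng)
print(msd[4], msd[19])   # close to 5 and 20 respectively: linear growth
```

The experiment's autocorrelation check corresponds to the independence of successive steps assumed in this sketch; correlated bending over ~2 h would bend the curve away from linearity at short times.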
Visual Categorization with Random Projection.
Arriaga, Rosa I; Rutter, David; Cakmak, Maya; Vempala, Santosh S
2015-10-01
Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how this relates to the robustness of categories. We find that (1) drastic reduction in stimulus complexity via random projection does not degrade performance in categorization tasks by either humans or simple neural networks, (2) human accuracy and neural network accuracy are remarkably correlated, even at the level of individual stimuli, and (3) the performance of both is strongly indicated by a natural notion of category robustness.
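The distance-preservation property that makes random projection plausible as a categorization mechanism (the Johnson-Lindenstrauss effect) can be sketched as follows; the Gaussian projection and the dimensions are illustrative choices, not the stimuli or networks used in the study:

```python
import random, math

def random_projection(vectors, k, rng):
    """Project d-dimensional points to k dimensions with a Gaussian
    random matrix scaled by 1/sqrt(k); with high probability pairwise
    distances are preserved up to small distortion (Johnson-Lindenstrauss)."""
    d = len(vectors[0])
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)] for _ in range(k)]
    return [[sum(row[j] * v[j] for j in range(d)) for row in R] for v in vectors]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

rng = random.Random(0)
d, k = 1000, 200
u = [rng.gauss(0, 1) for _ in range(d)]
v = [rng.gauss(0, 1) for _ in range(d)]
pu, pv = random_projection([u, v], k, rng)
print(dist(u, v), dist(pu, pv))   # the two distances come out close
```

Because inter-point distances survive the projection, a robust (well-separated) category remains learnable from the low-dimensional stimuli, which is the hypothesis the human and neural-network experiments probe.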
Fast Randomized STDMA Link Scheduling
NASA Astrophysics Data System (ADS)
Gomez, Sergio; Gras, Oriol; Friderikos, Vasilis
In this paper a fast randomized parallel link swap based packing (RSP) algorithm for timeslot allocation in a spatial time division multiple access (STDMA) wireless mesh network is presented. The proposed randomized algorithm extends several greedy scheduling algorithms that utilize the physical interference model by applying a local search that leads to a substantial improvement in the spatial timeslot reuse. Numerical simulations reveal that compared to previously proposed scheduling schemes the proposed randomized algorithm can achieve a performance gain of up to 11%. A significant benefit of the proposed scheme is that the computations can be parallelized and therefore can efficiently utilize commoditized and emerging multi-core and/or multi-CPU processors.
Forest Fires in a Random Forest
NASA Astrophysics Data System (ADS)
Leuenberger, Michael; Kanevski, Mikhaïl; Vega Orozco, Carmen D.
2013-04-01
Forest fires in Canton Ticino (Switzerland) are very complex phenomena. Meteorological data can explain some occurrences of fires in time, but not necessarily in space. Using anthropogenic and geographical feature data with the random forest algorithm, this study tries to highlight the factors that most influence fire ignition and to identify areas at risk. The fundamental scientific problem considered in the present research deals with an application of random forest algorithms for the analysis and modeling of forest fires patterns in a high dimensional input feature space. This study is focused on the 2,224 anthropogenic forest fires among the 2,401 forest fire ignition points that have occurred in Canton Ticino from 1969 to 2008. Provided by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), the database characterizes each fire by its location (x,y coordinates of the ignition point), start date, duration, burned area, and other information such as ignition cause and topographic features such as slope, aspect, altitude, etc. In addition, the database VECTOR25 from SwissTopo was used to extract information on the distances between fire ignition points and anthropogenic structures like buildings, the road network, the rail network, etc. Developed by L. Breiman and A. Cutler, the Random Forests (RF) algorithm provides an ensemble of classification and regression trees. By a pseudo-random variable selection for each split node, this method grows a variety of decision trees that do not return the same results, and thus, by a committee system, returns a value with better accuracy than other machine learning methods. This algorithm directly incorporates a measure of variable importance, which is used to identify the factors affecting forest fires. Dealing with this parameter, several models can be fit, and thus a prediction can be made throughout the validity domain of Canton Ticino. Comprehensive RF analysis was carried out in order to 1
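A minimal sketch of the bagging-plus-random-feature-selection idea behind Breiman and Cutler's Random Forests, using one-level trees (stumps) on toy data; this illustrates the committee mechanism only and is not the study's actual model:

```python
import random

def train_stump(X, y, feat):
    """Exhaustive best threshold on a single feature."""
    best, best_err = (feat, 0.0, 0, 1), len(y) + 1
    for t in sorted(set(x[feat] for x in X)):
        for left, right in ((0, 1), (1, 0)):
            err = sum((left if x[feat] <= t else right) != yy
                      for x, yy in zip(X, y))
            if err < best_err:
                best_err, best = err, (feat, t, left, right)
    return best

def grow_forest(X, y, n_trees, rng):
    """One bootstrap sample plus one randomly chosen feature per tree."""
    n, d = len(X), len(X[0])
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        feat = rng.randrange(d)
        trees.append(train_stump([X[i] for i in idx], [y[i] for i in idx], feat))
    return trees

def predict(trees, x):
    """Committee (majority) vote over the stumps."""
    votes = sum(l if x[f] <= t else r for f, t, l, r in trees)
    return int(votes > len(trees) / 2)

rng = random.Random(0)
# Toy data: the label depends only on the first feature; the second is noise.
X = [[rng.random(), rng.random()] for _ in range(120)]
y = [int(x[0] > 0.5) for x in X]
trees = grow_forest(X, y, 25, rng)
acc = sum(predict(trees, x) == yy for x, yy in zip(X, y)) / len(X)
print(acc)
```

Trees trained on the noise feature vote roughly at chance, so the majority is carried by the informative trees; counting how often each feature wins splits is the germ of the variable-importance measure the abstract mentions.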
A Randomized Experiment Comparing Random and Cutoff-Based Assignment
ERIC Educational Resources Information Center
Shadish, William R.; Galindo, Rodolfo; Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.
2011-01-01
In this article, we review past studies comparing randomized experiments to regression discontinuity designs, mostly finding similar results, but with significant exceptions. The latter might be due to potential confounds of study characteristics with assignment method or with failure to estimate the same parameter over methods. In this study, we…
Relatively Random: Context Effects on Perceived Randomness and Predicted Outcomes
ERIC Educational Resources Information Center
Matthews, William J.
2013-01-01
This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to…
Relatively random: context effects on perceived randomness and predicted outcomes.
Matthews, William J
2013-09-01
This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to result from human action. However, this effect was highly context-dependent: A moderate alternation rate was judged more likely to indicate a random physical process when encountered among sequences with lower alternation rates than when embedded among sequences with higher alternation rates. Experiment 2 found the same effect for predictions of the next outcome following a streak: A streak of 3 at the end of the sequence was judged less likely to continue by participants who had encountered shorter terminal streaks in previous trials than by those who had encountered longer ones. These contrast effects (a) help to explain variability in the types of sequences that are judged to be random and that elicit the gambler's fallacy, and urge caution about attempts to establish universal parameterizations of these effects; (b) are congruent with theories of sequence judgment that emphasize the importance of people's actual experiences with sequences of different kinds; (c) provide a link between models of sequence judgment and broader accounts of psychophysical/economic judgment; and (d) may offer new insight into individual differences in randomness judgments and sequence predictions.
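The alternation rate driving these judgments is simply the fraction of adjacent outcomes that differ; for an independent fair process its expected value is 0.5. A trivial sketch (names ours):

```python
def alternation_rate(seq):
    """Fraction of adjacent pairs that differ. An independent fair
    process has expected alternation rate 0.5; people tend to judge
    higher-alternation sequences as 'more random'."""
    changes = sum(a != b for a, b in zip(seq, seq[1:]))
    return changes / (len(seq) - 1)

print(alternation_rate("HTHTHTHT"))   # 1.0: perfectly alternating
print(alternation_rate("HHHHTTTT"))   # 0.142857...: one switch in seven pairs
```

The context effects in the article concern how the same intermediate value of this statistic is read differently depending on the rates of the surrounding sequences.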
Variable Selection for Qualitative Interactions
Gunter, L.; Zhu, J.; Murphy, S. A.
2009-01-01
In this article we discuss variable selection for decision making with focus on decisions regarding when to provide treatment and which treatment to provide. Current variable selection techniques were developed for use in a supervised learning setting where the goal is prediction of the response. These techniques often downplay the importance of interaction variables that have small predictive ability but that are critical when the ultimate goal is decision making rather than prediction. We propose two new techniques designed specifically to find variables that aid in decision making. Simulation results are given along with an application of the methods on data from a randomized controlled trial for the treatment of depression. PMID:21179592
May random processes explain mating success in leks?
Focardi, S; Tinelli, A
1996-06-01
The object of this paper is to verify whether in specific cases the variance of mating success among lekking males may be due exclusively to a random mechanism, as opposed to the adaptive mechanisms of mate choice which are usually postulated in the literature in the framework of sexual selection theory. In fact, some studies attempted to compare observed distributions of male mating success with a Poisson 'null' distribution based on the conjecture of random mating; the conjecture is usually rejected. In this paper we construct a plausible model (the 'null' hypothesis) for a strictly random non-adaptive pattern of social behaviour of lekking males and females and we perform several simulations for reasonable choices of parameter values. It should be observed that some of the simulations based on our random model lead to a distribution of male mating success which is Poisson-like. However, contrary to predictions, in several simulations a random process of mate choice led to non-Poissonian distributions. Accordingly, the fact that, when performing a statistical test on several sets of field data, we find both cases which are in agreement with a Poisson distribution (or a normal one) and cases which are not does not allow us to reject the assumption of random male reproductive success. Thus it is legitimate to conjecture that in many cases the inter-individual variability of male mating success might indeed be determined by random processes. If this conjecture were to be confirmed by further studies, the actual significance of sexual selection in the evolution of lekking species should be reassessed, and a novel approach in the analysis of field data would be called for.
Random photonic crystal optical memory
NASA Astrophysics Data System (ADS)
Wirth Lima, A., Jr.; Sombra, A. S. B.
2012-10-01
Currently, optical cross-connects working on wavelength division multiplexing systems are based on optical fiber delay line buffering. We designed and analyzed a novel photonic crystal optical memory, which replaces the fiber delay lines of the current optical cross-connect buffer. Optical buffering systems based on random photonic crystal optical memory behave similarly to electronic buffering systems based on electronic RAM memory. In this paper, we show that OXCs working with optical buffering based on random photonic crystal optical memories provide better performance than the current optical cross-connects.
Truncations of random orthogonal matrices.
Khoruzhenko, Boris A; Sommers, Hans-Jürgen; Życzkowski, Karol
2010-10-01
Statistical properties of nonsymmetric real random matrices of size M, obtained as truncations of random orthogonal N×N matrices, are investigated. We derive an exact formula for the density of eigenvalues which consists of two components: a finite fraction of eigenvalues is real, while the remaining part of the spectrum is located inside the unit disk symmetrically with respect to the real axis. In the case of strong nonorthogonality, M/N=const, the behavior typical to the real Ginibre ensemble is found. In the case M=N-L with fixed L, a universal distribution of resonance widths is recovered.
Truncations of random orthogonal matrices
NASA Astrophysics Data System (ADS)
Khoruzhenko, Boris A.; Sommers, Hans-Jürgen; Życzkowski, Karol
2010-10-01
Statistical properties of nonsymmetric real random matrices of size M, obtained as truncations of random orthogonal N×N matrices, are investigated. We derive an exact formula for the density of eigenvalues which consists of two components: a finite fraction of eigenvalues is real, while the remaining part of the spectrum is located inside the unit disk symmetrically with respect to the real axis. In the case of strong nonorthogonality, M/N=const, the behavior typical to the real Ginibre ensemble is found. In the case M=N-L with fixed L, a universal distribution of resonance widths is recovered.
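The claim that the spectrum of a truncation lies inside the unit disk follows from truncations of orthogonal matrices being contractions: all singular values are at most 1, hence so is the spectral radius. A pure-Python sketch with Gram-Schmidt and power iteration (sizes and seeds are illustrative):

```python
import random, math

def random_orthogonal(n, rng):
    """Gram-Schmidt on the rows of a Gaussian matrix: an orthogonal matrix."""
    rows = []
    for _ in range(n):
        v = [rng.gauss(0, 1) for _ in range(n)]
        for u in rows:
            dot = sum(a * b for a, b in zip(u, v))
            v = [a - dot * b for a, b in zip(v, u)]
        nrm = math.sqrt(sum(a * a for a in v))
        rows.append([a / nrm for a in v])
    return rows

def top_singular_value(A, iters=300):
    """Power iteration on A^T A; returns the largest singular value."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # A x
        x = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]   # A^T y
        nrm = math.sqrt(sum(a * a for a in x))
        x = [a / nrm for a in x]
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(a * a for a in y))

rng = random.Random(0)
Q = random_orthogonal(8, rng)
T = [row[:5] for row in Q[:5]]   # 5x5 truncation of an 8x8 orthogonal matrix
print(top_singular_value(T))     # at most 1: the spectrum sits in the unit disk
```

The fine structure of that spectrum (the real fraction, the Ginibre-like regime, the resonance widths) is what the exact eigenvalue density of the paper describes.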
Neutron transport in random media
Makai, M.
1996-08-01
The survey reviews the methods available in the literature which allow a discussion of corium recriticality after a severe accident and a characterization of the corium. It appears that to date no one has considered the eigenvalue problem, though for the source problem several approaches have been proposed. The mathematical formulation of a random medium may be approached in different ways. Based on the review of the literature, we can draw three basic conclusions. The problem of static, random perturbations has been solved. The static case is tractable by the Monte Carlo method. There is a specific time dependent case for which the average flux is given as a series expansion.
Molecular random tilings as glasses
Garrahan, Juan P.; Stannard, Andrew; Blunt, Matthew O.; Beton, Peter H.
2009-01-01
We have recently shown that p-terphenyl-3,5,3′,5′-tetracarboxylic acid adsorbed on graphite self-assembles into a two-dimensional rhombus random tiling. This tiling is close to ideal, displaying long-range correlations punctuated by sparse localized tiling defects. In this article we explore the analogy between dynamic arrest in this type of random tilings and that of structural glasses. We show that the structural relaxation of these systems is via the propagation–reaction of tiling defects, giving rise to dynamic heterogeneity. We study the scaling properties of the dynamics and discuss connections with kinetically constrained models of glasses. PMID:19720990
Synchronizability of random rectangular graphs
Estrada, Ernesto; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of the random geometric graphs in which the nodes are embedded into hyperrectangles instead of on hypercubes. The synchronizability of RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network is more elongated, the network becomes harder to synchronize. The synchronization processing behavior of a RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistence with the theoretical results.
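The synchronizability measure studied above, the eigenratio λ_max/λ_2 of the network Laplacian, can be estimated for a toy RRG with power iteration, projecting out the constant eigenvector (eigenvalue 0) when hunting for λ_2. The construction and all parameters below are our illustrative choices:

```python
import random, math

def rrg_laplacian(n, a, r, rng):
    """Random rectangular graph: n points in an a x (1/a) rectangle
    (unit area), an edge joins any pair closer than r; returns the
    graph Laplacian L = D - A as a dense list-of-lists matrix."""
    pts = [(rng.random() * a, rng.random() / a) for _ in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < r:
                L[i][j] = L[j][i] = -1.0
                L[i][i] += 1.0
                L[j][j] += 1.0
    return L

def eigenratio(L, rng, iters=250):
    """Estimate lambda_max / lambda_2 by power iteration. For lambda_2
    we iterate on c*I - L (c above the spectrum) and repeatedly project
    out the constant eigenvector, whose Laplacian eigenvalue is 0."""
    n = len(L)

    def rayleigh_top(shift, sign, deflate):
        x = [rng.gauss(0, 1) for _ in range(n)]
        for _ in range(iters):
            if deflate:
                m = sum(x) / n
                x = [v - m for v in x]
            x = [shift * x[i] + sign * sum(L[i][j] * x[j] for j in range(n))
                 for i in range(n)]
            nrm = math.sqrt(sum(v * v for v in x))
            x = [v / nrm for v in x]
        y = [shift * x[i] + sign * sum(L[i][j] * x[j] for j in range(n))
             for i in range(n)]
        return sum(a * b for a, b in zip(x, y))

    c = 2 * max(L[i][i] for i in range(n)) + 1.0
    lam_max = rayleigh_top(0.0, 1.0, deflate=False)
    lam2 = max(c - rayleigh_top(c, -1.0, deflate=True), 1e-12)
    return lam_max / lam2

rng = random.Random(7)
r_square = eigenratio(rrg_laplacian(40, 1.0, 0.4, rng), rng)
r_long = eigenratio(rrg_laplacian(40, 2.0, 0.4, rng), rng)
print(r_square, r_long)   # the elongated network typically has the larger ratio
```

A larger eigenratio means a harder-to-synchronize network, so elongation degrading synchronizability corresponds to the ratio growing with the aspect ratio a, as the analytic bounds of the paper establish.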
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Comparison of Web and Mail Surveys in Collecting Illicit Drug Use Data: A Randomized Experiment
ERIC Educational Resources Information Center
McCabe, Sean Esteban
2004-01-01
This randomized experiment examined survey mode effects for self-reporting illicit drug use by comparing prevalence estimates between a Web-based survey and a mail-based survey. A random sample of 7,000 traditional-aged undergraduate students attending a large public university in the United States was selected to participate in the spring of…
Technology Transfer Automated Retrieval System (TEKTRAN)
The genetic effects of long term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...
Balancing Participation across Students in Large College Classes via Randomized Participation Credit
ERIC Educational Resources Information Center
McCleary, Daniel F.; Aspiranti, Kathleen B.; Foster, Lisa N.; Blondin, Carolyn A.; Gaylon, Charles E.; Yaw, Jared S.; Forbes, Bethany N.; Williams, Robert L.
2011-01-01
The study examines the effects of randomized credit on the percentage of students participating at four predefined levels. Students recorded their comments on specially designed record cards, and days were randomly selected for participation credit. This arrangement balanced participation across students while cutting instructor time for recording…
Using Psychokinesis to Explore the Nature of Quantum Randomness
Burns, Jean E.
2011-11-29
In retrocausation different causal events can produce different successor events, yet a successor event reflecting a particular cause occurs before the causal event does. It is sometimes proposed that the successor event is determined by propagation of the causal effect backwards in time via the dynamical equations governing the events. However, because dynamical equations are time reversible, the evolution of the system is not subject to change. Therefore, the backward propagation hypothesis implies that what may have seemed to be an arbitrary selection of a causal factor was in reality predetermined. Yet quantum randomness can be used to determine the causal factor, and a quantum random event is ordinarily thought of as being arbitrarily generated. So we must ask, when quantum random events occur, are they arbitrary (subject to their probabilistic constraints) or are they predetermined? Because psychokinesis (PK) can act on quantum random events, it can be used as a probe to explore questions such as the above. It is found that if quantum random events are predetermined (aside from the action of PK), certain types of experimental design can show enhanced PK through the use of precognition. Actual experiments are examined and compared, and most of those for which the design is especially suitable for showing this effect had unusually low p values for the number of trials. It is concluded that either the experimenter produced a remarkably strong experimenter effect or quantum random events are predetermined, thereby enabling enhanced PK in suitable experimental designs.
Mixing rates and limit theorems for random intermittent maps
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Bose, Christopher
2016-04-01
We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps T_α using the full parameter range 0 < α < ∞, in general. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞ we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.
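A concrete instance of such a random selection, using the Liverani-Saussol-Vaienti form of the Pomeau-Manneville family and drawing the exponent i.i.d. at each step (a common quenched setup; the specific parameters below are our illustrative choices, not the paper's):

```python
import random

def lsv_map(x, alpha):
    """Liverani-Saussol-Vaienti form of a Pomeau-Manneville map:
    neutral fixed point at 0, uniformly expanding branch on (1/2, 1]."""
    if x <= 0.5:
        return x * (1 + (2 * x) ** alpha)
    return 2 * x - 1

def random_orbit(x0, alphas, n_steps, rng):
    """Quenched random dynamics: a map exponent is drawn i.i.d. each step."""
    orbit = [x0]
    for _ in range(n_steps):
        orbit.append(lsv_map(orbit[-1], rng.choice(alphas)))
    return orbit

rng = random.Random(3)
orbit = random_orbit(0.3, (0.2, 0.9), 10000, rng)
laminar = sum(x < 0.05 for x in orbit) / len(orbit)
print(laminar)   # fraction of time spent in the laminar region near 0
```

Long laminar episodes near the neutral fixed point are what slow the mixing; the paper's theme is that the smallest α in the mix (here 0.2), together with the selection probabilities, governs how quickly such an orbit escapes and hence which limit law applies.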
A New Random Walk for Replica Detection in WSNs
Aalsalem, Mohammed Y.; Saad, N. M.; Hossain, Md. Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram
2016-01-01
Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for random selection of witness nodes, which is problematic because the walk frequently revisits previously passed nodes, leading to longer delays, higher energy expenditure, and a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs like military and medical. PMID:27409082
A New Random Walk for Replica Detection in WSNs.
Aalsalem, Mohammed Y; Khan, Wazir Zada; Saad, N M; Hossain, Md Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram
2016-01-01
Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for random selection of witness nodes, which is problematic because the walk frequently revisits previously passed nodes, leading to longer delays, higher energy expenditure, and a lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk, and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division, aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security-oriented application fields of WSNs like military and medical.
A New Random Walk for Replica Detection in WSNs.
Aalsalem, Mohammed Y; Khan, Wazir Zada; Saad, N M; Hossain, Md Shohrab; Atiquzzaman, Mohammed; Khan, Muhammad Khurram
2016-01-01
Wireless Sensor Networks (WSNs) are vulnerable to Node Replication attacks or Clone attacks. Among all the existing clone detection protocols in WSNs, RAWL shows the most promising results by employing Simple Random Walk (SRW). More recently, RAND outperforms RAWL by incorporating Network Division with SRW. Both RAND and RAWL have used SRW for random selection of witness nodes which is problematic because of frequently revisiting the previously passed nodes that leads to longer delays, high expenditures of energy with lower probability that witness nodes intersect. To circumvent this problem, we propose to employ a new kind of constrained random walk, namely Single Stage Memory Random Walk and present a distributed technique called SSRWND (Single Stage Memory Random Walk with Network Division). In SSRWND, single stage memory random walk is combined with network division aiming to decrease the communication and memory costs while keeping the detection probability higher. Through intensive simulations it is verified that SSRWND guarantees higher witness node security with moderate communication and memory overheads. SSRWND is expedient for security oriented application fields of WSNs like military and medical. PMID:27409082
RANDOMIZED CONTROLLED CLINICAL TRIALS IN ORTHOPEDICS: DIFFICULTIES AND LIMITATIONS
Malavolta, Eduardo Angeli; Demange, Marco Kawamura; Gobbi, Riccardo Gomes; Imamura, Marta; Fregni, Felipe
2015-01-01
Randomized controlled clinical trials (RCTs) are currently considered the gold standard of evidence-based medicine, and are important for directing medical practice through consistent scientific observations. Steps such as patient selection, randomization and blinding are fundamental for conducting an RCT, but additional difficulties arise in trials that involve surgical procedures, as is common in orthopedics. The aim of this article was to highlight and discuss some difficulties and possible limitations of RCTs within the field of surgery. PMID:27027037
The infinite hidden Markov random field model.
Chatzis, Sotirios P; Tsechpenakis, Gabriel
2010-06-01
Hidden Markov random field (HMRF) models are widely used for image segmentation, as they appear naturally in problems where a spatially constrained clustering scheme is asked for. A major limitation of HMRF models concerns the automatic selection of the proper number of their states, i.e., the number of region clusters derived by the image segmentation procedure. Existing methods, including likelihood- or entropy-based criteria, and reversible Markov chain Monte Carlo methods, usually tend to yield noisy model size estimates while imposing heavy computational requirements. Recently, Dirichlet process (DP, infinite) mixture models have emerged in the cornerstone of nonparametric Bayesian statistics as promising candidates for clustering applications where the number of clusters is unknown a priori; infinite mixture models based on the original DP or spatially constrained variants of it have been applied in unsupervised image segmentation applications showing promising results. Under this motivation, to resolve the aforementioned issues of HMRF models, in this paper, we introduce a nonparametric Bayesian formulation for the HMRF model, the infinite HMRF model, formulated on the basis of a joint Dirichlet process mixture (DPM) and Markov random field (MRF) construction. We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally demonstrate its advantages over competing methodologies.
Two-Stage Modelling Of Random Phenomena
NASA Astrophysics Data System (ADS)
Barańska, Anna
2015-12-01
The main objective of this publication was to present a two-stage algorithm of modelling random phenomena, based on multidimensional function modelling, on the example of modelling the real estate market for the purpose of real estate valuation and estimation of model parameters of foundations vertical displacements. The first stage of the presented algorithm includes a selection of a suitable form of the function model. In the classical algorithms, based on function modelling, prediction of the dependent variable is its value obtained directly from the model. The better the model reflects a relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm has been proposed which comprises adjustment of the value obtained from the model with a random correction determined from the residuals of the model for these cases which, in a separate analysis, were considered to be the most similar to the object for which we want to model the dependent variable. The effect of applying the developed quantitative procedures for calculating the corrections and qualitative methods to assess the similarity on the final outcome of the prediction and its accuracy, was examined by statistical methods, mainly using appropriate parametric tests of significance. The idea of the presented algorithm has been designed so as to approximate the value of the dependent variable of the studied phenomenon to its value in reality and, at the same time, to have it "smoothed out" by a well fitted modelling function.
NASA Technical Reports Server (NTRS)
Katti, Romney R.
1995-01-01
Random-access memory (RAM) devices of the proposed type exploit magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout are used. The device provides nonvolatile storage and resists damage by ionizing radiation. Because of basic architecture and pinout requirements, it is most likely useful as a small-capacity memory device.
Common Randomness Principles of Secrecy
ERIC Educational Resources Information Center
Tyagi, Himanshu
2013-01-01
This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…
Randomized Item Response Theory Models
ERIC Educational Resources Information Center
Fox, Jean-Paul
2005-01-01
The randomized response (RR) technique is often used to obtain answers on sensitive questions. A new method is developed to measure latent variables using the RR technique because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
Universality in random quantum networks
NASA Astrophysics Data System (ADS)
Novotný, Jaroslav; Alber, Gernot; Jex, Igor
2015-12-01
Networks constitute efficient tools for assessing universal features of complex systems. In physical contexts, classical as well as quantum networks are used to describe a wide range of phenomena, such as phase transitions, intricate aspects of many-body quantum systems, or even characteristic features of a future quantum internet. Random quantum networks and their associated directed graphs are employed for capturing statistically dominant features of complex quantum systems. Here, we develop an efficient iterative method capable of evaluating the probability of a graph being strongly connected. It is proven that random directed graphs with constant edge-establishing probability are typically strongly connected, i.e., any ordered pair of vertices is connected by a directed path. This typical topological property of directed random graphs is exploited to demonstrate universal features of the asymptotic evolution of large random qubit networks. These results are independent of our knowledge of the details of the network topology. These findings suggest that other highly complex networks, such as a future quantum internet, may also exhibit similar universal properties.
Undecidability Theorem and Quantum Randomness
NASA Astrophysics Data System (ADS)
Berezin, Alexander A.
2005-04-01
As scientific folklore has it, Kurt Godel was once annoyed by the question of whether he saw any link between his Undecidability Theorem (UT) and the Uncertainty Relationship. His reaction, however, may indicate that he felt such a hidden link could indeed exist but was unable to formulate it clearly. The informational version of UT (G.J. Chaitin) states the impossibility of ruling out algorithmic compressibility of an arbitrary digital string. Thus, (mathematical) randomness can only be disproven, not proven. Going from mathematical to physical (mainly quantum) randomness, we encounter seemingly random acts of radioactive decay of isotopes (such as C14), emission from excited atoms, tunneling effects, etc. However, our notion of quantum randomness (QR) may well hit a similarly formidable wall in a physical version of UT, leading to seemingly bizarre ideas such as the Everett many-worlds model (D. Deutsch) or backward causation (J.A. Wheeler). The resolution may potentially lie in admitting some form of Aristotelian final causation (AFC) as an ultimate foundational principle (G.W. Leibniz) connecting purely mathematical (Platonic) grounding aspects with its physically observable consequences, such as the plethora of QR effects. Thus, what we interpret as QR may eventually be a manifestation of AFC in which UT serves as the delivery vehicle. Another example of the UT/QR/AFC connection is the question of the identity (indistinguishability) of elementary particles (are all electrons exactly the same, or just approximately so to a very high degree?).
Plated wire random access memories
NASA Technical Reports Server (NTRS)
Gouldin, L. D.
1975-01-01
A program was conducted to construct 4096-word by 18-bit random access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Random trinomial tree models and vanilla options
NASA Astrophysics Data System (ADS)
Ganikhodjaev, Nasir; Bayram, Kamola
2013-09-01
In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m); we call the triple (u, d, m) an environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
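A minimal Monte Carlo sketch of the random-environment idea: at every step a fresh environment (U_k, D_k, M_k = 1) is drawn before the price moves. The uniform ranges, move probabilities, path count, and the zero-interest-rate vanilla call payoff are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_path(s0, n, up_range=(1.05, 1.15), down_range=(0.88, 0.95),
                  probs=(1/3, 1/3, 1/3)):
    """One price path in a random-environment trinomial model: at step k
    the environment (U_k, D_k, M_k=1) is drawn afresh with U_k > 1 and
    0 < D_k < 1, then the price moves up, down, or stays put."""
    s = s0
    for _ in range(n):
        u = random.uniform(*up_range)    # U_k > 1
        d = random.uniform(*down_range)  # 0 < D_k < 1
        move = random.choices([u, d, 1.0], weights=probs)[0]  # M_k = 1
        s *= move
    return s

def mc_call_price(s0, strike, n, n_paths=5000):
    """Monte Carlo value of a vanilla call payoff max(S_n - K, 0),
    assuming a zero interest rate for simplicity."""
    payoffs = [max(simulate_path(s0, n) - strike, 0.0) for _ in range(n_paths)]
    return sum(payoffs) / n_paths

random.seed(42)
price = mc_call_price(100.0, 100.0, n=10)
```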
EDITORIAL: Nanotechnological selection Nanotechnological selection
NASA Astrophysics Data System (ADS)
Demming, Anna
2013-01-01
At the nanoscale measures can move from a mass-scale analogue calibration to counters of discrete units. The shift redefines the possible levels of control that can be achieved in a system if adequate selectivity can be imposed. As an example as ionic substances pass through nanoscale pores, the quantity of ions is low enough that the pore can contain either negative or positive ions. Yet precise control over this selectivity still raises difficulties. In this issue researchers address the challenge of how to regulate the ionic selectivity of negative and positive charges with the use of an external charge. The approach may be useful for controlling the behaviour, properties and chemical composition of liquids and has possible technical applications for nanofluidic field effect transistors [1]. Selectivity is a critical advantage in the administration of drugs. Nanoparticles functionalized with targeting moieties can allow delivery of anti-cancer drugs to tumour cells, whilst avoiding healthy cells and hence reducing some of the debilitating side effects of cancer treatments [2]. Researchers in Belarus and the US developed a new theranostic approach—combining therapy and diagnosis—to support the evident benefits of cellular selectivity that can be achieved when nanoparticles are applied in medicine [3]. Their process uses nanobubbles of photothermal vapour, referred to as plasmonic nanobubbles, generated by plasmonic excitations in gold nanoparticles conjugated to diagnosis-specific antibodies. The intracellular plasmonic nanobubbles are controlled by laser fluence so that the response can be tuned in individual living cells. Lower fluence allows non-invasive high-sensitive imaging for diagnosis and higher fluence can disrupt the cellular membrane for treatments. The selective response of carbon nanotubes to different gases has leant them to be used within various different types of sensors, as summarized in a review by researchers at the University of
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, static ensemble is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system, and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion is started by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
Random walk with random resetting to the maximum position
NASA Astrophysics Data System (ADS)
Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2015-11-01
We study analytically a simple random walk model on a one-dimensional lattice, where at each time step the walker resets to the maximum of the already visited positions (to the rightmost visited site) with a probability r , and with probability (1 -r ) , it undergoes symmetric random walk, i.e., it hops to one of its neighboring sites, with equal probability (1 -r )/2 . For r =0 , it reduces to a standard random walk whose typical distance grows as √{n } for large n . In the presence of a nonzero resetting rate 0
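The model is simple enough to simulate directly: with probability r the walker jumps to the rightmost visited site, otherwise it takes a symmetric ±1 step. The step counts, the value r = 0.3, and the trial-averaging below are illustrative choices; the comparison just shows the resetting walk's maximum advancing much faster than the diffusive √n growth of the plain walk.

```python
import random

def resetting_walk(r, n_steps, seed=None):
    """Walk on the integers: with probability r jump to the rightmost
    visited site, otherwise hop to a uniformly chosen neighbor."""
    rng = random.Random(seed)
    x = 0
    x_max = 0
    for _ in range(n_steps):
        if rng.random() < r:
            x = x_max            # reset to the maximum of visited positions
        else:
            x += rng.choice((-1, 1))
        x_max = max(x_max, x)
    return x, x_max

# r = 0 recovers the plain symmetric walk; a nonzero r drags the walker
# toward its running maximum, which then advances much faster.
avg_plain = sum(resetting_walk(0.0, 500, seed=s)[1] for s in range(50)) / 50
avg_reset = sum(resetting_walk(0.3, 500, seed=s)[1] for s in range(50)) / 50
```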
Initial Status in Growth Curve Modeling for Randomized Trials
Chou, Chih-Ping; Chi, Felicia; Weisner, Constance; Pentz, MaryAnn; Hser, Yih-Ing
2010-01-01
The growth curve modeling (GCM) technique has been widely adopted in longitudinal studies to investigate progression over time. The simplest growth profile involves two growth factors, initial status (intercept) and growth trajectory (slope). Conventionally, all repeated measures of outcome are included as components of the growth profile, and the first measure is used to reflect the initial status. Selection of the initial status, however, can greatly influence study findings, especially for randomized trials. In this article, we propose an alternative GCM approach involving only post-intervention measures in the growth profile and treating the first wave after intervention as the initial status. We discuss and empirically illustrate how choices of initial status may influence study conclusions in addressing research questions in randomized trials using two longitudinal studies. Data from two randomized trials are used to illustrate that the alternative GCM approach proposed in this article offers better model fitting and more meaningful results. PMID:21572585
Bridging Emergent Attributes and Darwinian Principles in Teaching Natural Selection
ERIC Educational Resources Information Center
Xu, Dongchen; Chi, Michelene T. H.
2016-01-01
Students often have misconceptions about natural selection as they misuse a direct causal schema to explain the process. Natural selection is in fact an emergent process where random interactions lead to changes in a population. The misconceptions stem from students' lack of emergent schema for natural selection. In order to help students…
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the...
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the...
Randomized study of zinc supplementation during pregnancy
Hambidge, K.M.; Oliva-Rasbach, J.; Jacobs, M.; Purcell, S.; Statland, C.; Poirier, J.
1986-03-05
The hypothesis underlying this study was that a daily dietary Zn supplement during pregnancy would be associated with higher values for selected indices of Zn nutriture than corresponding values for non-Zn-supplemented subjects, if, and only if, the Zn status of the unsupplemented control group was sub-optimal. The 12 test and 17 control subjects were healthy, apparently well-nourished anglos who were enrolled before the 12th week of gestation. Mean age = 29 yrs, mean parity = 0.8. Test subjects received a daily supplement of 15 mg Zn (mean compliance = 90%) from the time of enrollment until 3 months post-partum. The supplement was taken at bedtime while other vitamin/mineral preparations were taken before breakfast. Blood samples were obtained at 4-week intervals from enrollment. Selected preliminary results: plasma Zn declined progressively with length of gestation to a nadir of 53 ± 6 µg/dl at 10 months (non-pregnant mean 77 ± 7). As in a previous, non-randomized, study, the rate of decline for the test group did not differ from that of the control group. Mean monthly neutrophil Zn ranged from 43 ± 8 to 50 ± 14 µg/10^10 cells; there was no consistent pattern across gestation. Serum alkaline phosphatase activity and pre-albumin of the test group did not differ from those of the control group. These data did not give any indication of sub-optimal Zn nutriture in this pregnant population.
Markov random field surface reconstruction.
Paulsen, Rasmus R; Baerentzen, Jakob Andreas; Larsen, Rasmus
2010-01-01
A method for implicit surface reconstruction is proposed. The novelty in this paper is the adaptation of Markov Random Field regularization of a distance field. The Markov Random Field formulation allows us to integrate both knowledge about the type of surface we wish to reconstruct (the prior) and knowledge about data (the observation model) in an orthogonal fashion. Local models that account for both scene-specific knowledge and physical properties of the scanning device are described. Furthermore, how the optimal distance field can be computed is demonstrated using conjugate gradients, sparse Cholesky factorization, and a multiscale iterative optimization scheme. The method is demonstrated on a set of scanned human heads and, both in terms of accuracy and the ability to close holes, the proposed method is shown to have similar or superior performance when compared to current state-of-the-art algorithms.
Knot probabilities in random diagrams
NASA Astrophysics Data System (ADS)
Cantarella, Jason; Chapman, Harrison; Mastin, Matt
2016-10-01
We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10 crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams which are unknots for any assignment of over/under information at crossings. The data shows a roughly linear relationship between the log of knot type probability and the log of the frequency rank of the knot type, analogous to Zipf’s law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.
Optimal randomized scheduling by replacement
Saias, I.
1996-05-01
In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.
Percolation on correlated random networks
NASA Astrophysics Data System (ADS)
Agliari, E.; Cioli, C.; Guadagnini, E.
2011-09-01
We consider a class of random, weighted networks, obtained through a redefinition of patterns in a Hopfield-like model, and, by performing percolation processes, we get information about topology and resilience properties of the networks themselves. Given the weighted nature of the graphs, different kinds of bond percolation can be studied: stochastic (deleting links randomly) and deterministic (deleting links based on rank weights), each mimicking a different physical process. The evolution of the network is accordingly different, as evidenced by the behavior of the largest component size and of the distribution of cluster sizes. In particular, we can derive that weak ties are crucial in order to maintain the graph connected and that, when they are the most prone to failure, the giant component typically shrinks without abruptly breaking apart; these results have been recently evidenced in several kinds of social networks.
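The stochastic (random-deletion) variant of bond percolation described above can be sketched on a plain unweighted random graph; the Erdős–Rényi stand-in, edge density, and keep probabilities below are illustrative assumptions rather than the Hopfield-derived weighted networks of the paper.

```python
import random

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    sizes = {}
    for node in range(n):
        r = find(node)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

def stochastic_percolation(n, edges, keep_prob, rng):
    """Stochastic bond percolation: each link survives independently
    with probability keep_prob (i.e., links are deleted at random)."""
    kept = [e for e in edges if rng.random() < keep_prob]
    return largest_component(n, kept)

# Toy network: 200 nodes with random edges.
rng = random.Random(0)
n = 200
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < 0.03]
giant_dense = stochastic_percolation(n, edges, 0.9, random.Random(1))
giant_sparse = stochastic_percolation(n, edges, 0.1, random.Random(1))
```

Sweeping `keep_prob` from high to low reproduces the qualitative picture in the abstract: the giant component shrinks and eventually fragments as more links are deleted.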
NASA Astrophysics Data System (ADS)
Korneta, W.; Pytel, Z.
1988-07-01
The random walk of a particle on a three-dimensional semi-infinite lattice is considered. In order to study the effect of the surface on the random walk, it is assumed that the velocity of the particle depends on the distance to the surface. Moreover it is assumed that at any point the particle may be absorbed with a certain probability. The probability of the return of the particle to the starting point and the average time of eventual return are calculated. The dependence of these quantities on the distance to the surface, the probability of absorption and the properties of the surface is discussed. The method of generating functions is used.
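As a toy illustration of return probability under absorption, here is a Monte Carlo estimate for a one-dimensional symmetric walk in which the particle is absorbed with a fixed probability at each step. This is a drastic simplification of the paper's three-dimensional semi-infinite lattice with distance-dependent velocity, and all parameter values are illustrative.

```python
import random

def return_probability(absorb_p, n_walks, max_steps=200, seed=0):
    """Monte Carlo estimate of the probability that a 1D symmetric walk
    returns to its starting point before being absorbed (each step the
    particle is absorbed with probability absorb_p)."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(n_walks):
        x = 0
        for _ in range(max_steps):
            if rng.random() < absorb_p:
                break  # particle absorbed; no return
            x += rng.choice((-1, 1))
            if x == 0:
                returns += 1
                break
        # walks still alive after max_steps count as non-returned
    return returns / n_walks

p_low_absorb = return_probability(0.01, 5000)
p_high_absorb = return_probability(0.30, 5000)
```

As expected, raising the absorption probability lowers the chance of eventual return, the qualitative dependence the paper quantifies for its lattice model.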
Random modelling of contagious diseases.
Demongeot, J; Hansen, O; Hessami, H; Jannot, A S; Mintsa, J; Rachdi, M; Taramasco, C
2013-03-01
Modelling contagious diseases needs to include a mechanistic knowledge about contacts between hosts and pathogens as specific as possible, e.g., by incorporating in the model information about social networks through which the disease spreads. The unknown part concerning the contact mechanism can be modelled using a stochastic approach. For that purpose, we revisit SIR models by introducing first a microscopic stochastic version of the contacts between individuals of different populations (namely Susceptible, Infective and Recovering), then by adding a random perturbation in the vicinity of the endemic fixed point of the SIR model and eventually by introducing the definition of various types of random social networks. We propose as example of application to contagious diseases the HIV, and we show that a micro-simulation of individual based modelling (IBM) type can reproduce the current stable incidence of the HIV epidemic in a population of HIV-positive men having sex with men (MSM). PMID:23525763
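A minimal individual-based (IBM-style) stochastic SIR step can be sketched as follows, with each susceptible infected with probability beta·I/N per step and each infective recovering with probability gamma. The parameter values and the well-mixed contact assumption are illustrative; they stand in for, rather than reproduce, the social-network structure and HIV calibration used in the paper.

```python
import random

def stochastic_sir(n, i0, beta, gamma, steps, seed=0):
    """Individual-based stochastic SIR: at each step every susceptible
    becomes infected with probability beta * I / N (random contacts),
    and every infective recovers with probability gamma."""
    rng = random.Random(seed)
    s, i, r = n - i0, i0, 0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = sum(1 for _ in range(s) if rng.random() < beta * i / n)
        new_rec = sum(1 for _ in range(i) if rng.random() < gamma)
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        history.append((s, i, r))
    return history

hist = stochastic_sir(n=1000, i0=10, beta=0.4, gamma=0.1, steps=100)
final_s, final_i, final_r = hist[-1]
```

Re-running with different seeds shows the run-to-run fluctuations that the stochastic-perturbation analysis in the paper is designed to capture.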
Equations of the Randomizer's Dynamics
NASA Astrophysics Data System (ADS)
Strzałko, Jarosław; Grabski, Juliusz; Perlikowski, Przemysław; Stefanski, Andrzej; Kapitaniak, Tomasz
Based on the Newton-Euler laws of mechanics, we derive the equations which describe the dynamics of the coin toss, the die throw, and the roulette run. The equations for full 3D models and for lower dimensional simplifications are given. The influence of the air resistance and energy dissipation at the impacts is described. The obtained equations allow for the numerical simulation of the randomizer's dynamics and define the mapping of the initial conditions into the final outcome.
NASA Technical Reports Server (NTRS)
Hornstein, J.; Fainberg, J.
1981-01-01
We review ray-optical methods of analyzing short-wavelength propagation in random media. The advantages and limitations of ray methods are discussed, and results of the statistical theory of ray segment fluctuations pertinent to ray tracing are summarized. The standard method of Monte Carlo ray tracing is compared to a new method which takes into account recent results on the statistics of ray segment fluctuations.
Random drift and culture change.
Bentley, R. Alexander; Hahn, Matthew W.; Shennan, Stephen J.
2004-01-01
We show that the frequency distributions of cultural variants, in three different real-world examples--first names, archaeological pottery and applications for technology patents--follow power laws that can be explained by a simple model of random drift. We conclude that cultural and economic choices often reflect a decision process that is value-neutral; this result has far-reaching testable implications for social-science research. PMID:15306315
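The random-drift model behind such frequency distributions is typically a random-copying process: each agent copies the variant of a randomly chosen member of the previous generation, or innovates a brand-new variant with a small probability mu. The population size, innovation rate, and generation count below are illustrative assumptions.

```python
import random
from collections import Counter

def neutral_drift(n_agents, mu, generations, seed=0):
    """Random-copying model of cultural drift: each agent copies a
    randomly chosen member of the previous generation, or innovates a
    brand-new variant with probability mu."""
    rng = random.Random(seed)
    population = list(range(n_agents))  # everyone starts with a unique variant
    next_label = n_agents
    for _ in range(generations):
        new_pop = []
        for _ in range(n_agents):
            if rng.random() < mu:
                new_pop.append(next_label)  # innovation
                next_label += 1
            else:
                new_pop.append(rng.choice(population))  # value-neutral copying
        population = new_pop
    return Counter(population)

freqs = neutral_drift(n_agents=500, mu=0.01, generations=200)
```

Even though every choice is value-neutral, drift concentrates the population on a few common variants with a long tail of rare ones, the power-law-like pattern reported for names, pottery, and patents.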
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Mixed interactions in random copolymers
NASA Astrophysics Data System (ADS)
Marinov, Toma; Luettmer-Strathmann, Jutta
2002-03-01
The description of thermodynamic properties of copolymers in terms of simple lattice models requires a value for the mixed interaction strength (ɛ_12) between unlike chain segments, in addition to parameters that can be derived from the properties of the corresponding homopolymers. If the monomers are chemically similar, Berthelot's geometric-mean combining rule provides a good first approximation for ɛ_12. In earlier work on blends of polyolefins [1], we found that the small-scale architecture of the chains leads to corrections to the geometric-mean approximation that are important for the prediction of phase diagrams. In this work, we focus on the additional effects due to sequencing of the monomeric units. In order to estimate the mixed interaction ɛ_12 for random copolymers, the small-scale simulation approach developed in [1] is extended to allow for random sequencing of the monomeric units. The approach is applied here to random copolymers of ethylene and 1-butene. [1] J. Luettmer-Strathmann and J.E.G. Lipson. Phys. Rev. E 59, 2039 (1999) and Macromolecules 32, 1093 (1999).
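Berthelot's geometric-mean combining rule mentioned above is simply ɛ_12 = √(ɛ_11 · ɛ_22); the numeric interaction strengths below are illustrative, not values from the paper.

```python
import math

def berthelot(eps_11, eps_22):
    """Berthelot geometric-mean combining rule for the mixed interaction:
    eps_12 = sqrt(eps_11 * eps_22). This is the first approximation that
    the paper corrects for chain architecture and monomer sequencing."""
    return math.sqrt(eps_11 * eps_22)

# Hypothetical like-segment interaction strengths (arbitrary units).
eps_12 = berthelot(1.0, 0.81)
```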
Resolution analysis by random probing
NASA Astrophysics Data System (ADS)
Simutė, S.; Fichtner, A.; van Leeuwen, T.
2015-12-01
We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full-waveform inversion and linearized ray tomography, (iii) applicability in any spatial dimension and to inversions with a large number of model parameters, (iv) low computational costs that are mostly a fraction of those required for synthetic recovery tests, and (v) the ability to quantify both spatial resolution and inter-parameter trade-offs. Using synthetic full-waveform inversions as benchmarks, we demonstrate that auto-correlations of random-model applications to the Hessian yield various resolution measures, including direction- and position-dependent resolution lengths, and the strength of inter-parameter mappings. We observe that the required number of random test models is around 5 in one, two and three dimensions. This means that the proposed resolution analyses are not only more meaningful than recovery tests but also computationally less expensive. We demonstrate the applicability of our method in 3D real-data full-waveform inversions for the western Mediterranean and Japan. In addition to tomographic problems, resolution analysis by random probing may be used in other inverse methods that constrain continuously distributed properties, including electromagnetic and potential-field inversions, as well as recently emerging geodynamic data assimilation.
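The core trick, auto-correlating random test models with their images under the operator, can be sketched for a tiny explicit matrix standing in for the Hessian: for probe vectors x with independent ±1 entries, the expectation of x_i(Ax)_i is A_ii. The matrix, probe count, and Rademacher probes below are illustrative assumptions, not the paper's full-waveform setup.

```python
import random

def apply_matrix(a, x):
    """Matrix-vector product, standing in for one application of the
    Hessian (one forward/adjoint simulation pair in practice)."""
    return [sum(a[i][j] * x[j] for j in range(len(x))) for i in range(len(a))]

def probe_diagonal(a, n_probes, seed=0):
    """Estimate diag(A) from auto-correlations x * (A x) of random
    Rademacher test vectors: E[x_i * (A x)_i] = A_ii when the entries
    of x are independent +/-1 variables."""
    rng = random.Random(seed)
    n = len(a)
    acc = [0.0] * n
    for _ in range(n_probes):
        x = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        ax = apply_matrix(a, x)
        for i in range(n):
            acc[i] += x[i] * ax[i]
    return [v / n_probes for v in acc]

# Toy "Hessian": diagonally dominant, with mild inter-parameter coupling.
a = [[4.0, 0.5, 0.0],
     [0.5, 3.0, 0.2],
     [0.0, 0.2, 2.0]]
est = probe_diagonal(a, n_probes=2000)
```

The diagonal plays the role of a pointwise resolution measure, and the off-diagonal leakage into the estimate is exactly the inter-parameter trade-off the method exposes; the paper reports that a handful of probes already suffices in practice.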
Topological insulators in random potentials
NASA Astrophysics Data System (ADS)
Pieper, Andreas; Fehske, Holger
2016-01-01
We investigate the effects of magnetic and nonmagnetic impurities on the two-dimensional surface states of three-dimensional topological insulators (TIs). Modeling weak and strong TIs using a generic four-band Hamiltonian, which allows for a breaking of inversion and time-reversal symmetries and takes into account random local potentials as well as the Zeeman and orbital effects of external magnetic fields, we compute the local density of states, the single-particle spectral function, and the conductance for a (contacted) slab geometry by numerically exact techniques based on kernel polynomial expansion and Green's function approaches. We show that bulk disorder refills the surface-state Dirac gap induced by a homogeneous magnetic field with states, whereas orbital (Peierls-phase) disorder preserves the gap feature. The former effect is more pronounced in weak TIs than in strong TIs. At moderate randomness, disorder-induced conducting channels appear in the surface layer, promoting diffusive metallicity. Random Zeeman fields rapidly destroy any conducting surface states. Imprinting quantum dots on a TI's surface, we demonstrate that carrier transport can be easily tuned by varying the gate voltage, even to the point where quasibound dot states may appear.
Random Time Identity Based Firewall In Mobile Ad hoc Networks
NASA Astrophysics Data System (ADS)
Suman, Patel, R. B.; Singh, Parvinder
2010-11-01
A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component in MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as the firewall after a fixed time, based on critical values of certain parameters such as power backup. It effectively balances power and resource utilization across the entire MANET because the responsibility of implementing the firewall is shared equally among all the nodes. At the same time it ensures improved security for MANETs against outside attacks, as an intruder will not be able to locate the entry point into the MANET due to the random selection of nodes for firewall implementation.
All optical mode controllable Er-doped random fiber laser with distributed Bragg gratings.
Zhang, W L; Ma, R; Tang, C H; Rao, Y J; Zeng, X P; Yang, Z J; Wang, Z N; Gong, Y; Wang, Y S
2015-07-01
An all-optical method to control the lasing modes of Er-doped random fiber lasers (RFLs) is proposed and demonstrated. In the RFL, an Er-doped fiber (EDF) recorded with randomly separated fiber Bragg gratings (FBGs) is used as the gain medium and randomly distributed reflectors, as well as the controllable element. By combining the random feedback of the FBG array with the Fresnel feedback of a cleaved fiber end, multi-mode coherent random lasing is obtained with a threshold of 14 mW and a power efficiency of 14.4%. Moreover, a laterally injected control light is used to induce local gain perturbation, providing additional gain for certain random resonance modes. As a result, active mode selection of the RFL is realized by changing the locations of the laser cavity that are exposed to the control light. PMID:26125397
In vitro selection of catalytic RNAs
NASA Technical Reports Server (NTRS)
Chapman, K. B.; Szostak, J. W.
1994-01-01
In vitro selection techniques are poised to allow a rapid expansion of the study of catalysis by RNA enzymes (ribozymes). This truly molecular version of genetics has already been applied to the study of the structures of known ribozymes and to the tailoring of their catalytic activity to meet specific requirements of substrate specificity or reaction conditions. During the past year, in vitro selection has been successfully used to isolate novel RNA catalysts from random sequence pools.
Modeling stereopsis via Markov random field.
Ming, Yansheng; Hu, Zhanyi
2010-08-01
Markov random field (MRF) and belief propagation have given birth to stereo vision algorithms with top performance. This article explores their biological plausibility. First, an MRF model guided by physiological and psychophysical facts was designed. Typically an MRF-based stereo vision algorithm employs a likelihood function that reflects the local similarity of two regions and a potential function that models the continuity constraint. In our model, the likelihood function is constructed on the basis of the disparity energy model because complex cells are considered as front-end disparity encoders in the visual pathway. Our likelihood function is also relevant to several psychological findings. The potential function in our model is constrained by the psychological finding that the strength of the cooperative interaction minimizing relative disparity decreases as the separation between stimuli increases. Our model is tested on three kinds of stereo images. In simulations on images with repetitive patterns, we demonstrate that our model could account for the human depth percepts that were previously explained by the second-order mechanism. In simulations on random dot stereograms and natural scene images, we demonstrate that false matches introduced by the disparity energy model can be reliably removed using our model. A comparison with the coarse-to-fine model shows that our model is able to compute the absolute disparity of small objects with larger relative disparity. We also relate our model to several physiological findings. The hypothesized neurons of the model are selective for absolute disparity and have facilitative extra receptive field. There are plenty of such neurons in the visual cortex. In conclusion, we think that stereopsis can be implemented by neural networks resembling MRF.
NASA Astrophysics Data System (ADS)
Chinh, Pham Duc
1998-11-01
The envelopes of the overall conductivities of effective-medium models of intergranularly random and completely random polycrystalline aggregates are compared with the available bounds on the polycrystals' properties. The geometrically realizable models cover the major parts of the property ranges permitted by the bounds; hence the estimates represent the behaviour of realistic random aggregates well, given the uncertainty in the shapes of the constituent crystals.
The HEART Pathway Randomized Trial
Mahler, Simon A.; Riley, Robert F.; Hiestand, Brian C.; Russell, Gregory B.; Hoekstra, James W.; Lefebvre, Cedric W.; Nicks, Bret A.; Cline, David M.; Askew, Kim L.; Elliott, Stephanie B.; Herrington, David M.; Burke, Gregory L.; Miller, Chadwick D.
2015-01-01
Background The HEART Pathway is a decision aid designed to identify emergency department patients with acute chest pain for early discharge. No randomized trials have compared the HEART Pathway with usual care. Methods and Results Adult emergency department patients with symptoms related to acute coronary syndrome without ST-elevation on ECG (n=282) were randomized to the HEART Pathway or usual care. In the HEART Pathway arm, emergency department providers used the HEART score, a validated decision aid, and troponin measures at 0 and 3 hours to identify patients for early discharge. Usual care was based on American College of Cardiology/American Heart Association guidelines. The primary outcome, objective cardiac testing (stress testing or angiography), and secondary outcomes, index length of stay, early discharge, and major adverse cardiac events (death, myocardial infarction, or coronary revascularization), were assessed at 30 days by phone interview and record review. Participants had a mean age of 53 years, 16% had previous myocardial infarction, and 6% (95% confidence interval, 3.6%–9.5%) had major adverse cardiac events within 30 days of randomization. Compared with usual care, use of the HEART Pathway decreased objective cardiac testing at 30 days by 12.1% (68.8% versus 56.7%; P=0.048) and length of stay by 12 hours (9.9 versus 21.9 hours; P=0.013) and increased early discharges by 21.3% (39.7% versus 18.4%; P<0.001). No patients identified for early discharge had major adverse cardiac events within 30 days. Conclusions The HEART Pathway reduces objective cardiac testing during 30 days, shortens length of stay, and increases early discharges. These important efficiency gains occurred without any patients identified for early discharge suffering MACE at 30 days. PMID:25737484
Random Matrix Theory and Econophysics
NASA Astrophysics Data System (ADS)
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system specific property, i.e., containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al.. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory
Quantum random walks without walking
Manouchehri, K.; Wang, J. B.
2009-12-15
Quantum random walks have received much interest due to their nonintuitive dynamics, which may hold the key to a new generation of quantum algorithms. What remains a major challenge is a physical realization that is experimentally viable and not limited to special connectivity criteria. We present a scheme for walking on arbitrarily complex graphs, which can be realized using a variety of quantum systems such as a Bose-Einstein condensate trapped inside an optical lattice. This scheme is particularly elegant since the walker is not required to physically step between the nodes; only flipping coins is sufficient.
Local leaders in random networks
NASA Astrophysics Data System (ADS)
Blondel, Vincent D.; Guillaume, Jean-Loup; Hendrickx, Julien M.; de Kerchove, Cristobald; Lambiotte, Renaud
2008-03-01
We consider local leaders in random uncorrelated networks, i.e., nodes whose degree is higher than or equal to the degree of all their neighbors. An analytical expression is found for the probability for a node of degree k to be a local leader. This quantity is shown to exhibit a transition from a situation where high-degree nodes are local leaders to a situation where they are not, when the tail of the degree distribution behaves like a power law ∼k^(−γ) with critical exponent γ_c = 3. Theoretical results are verified by computer simulations, and the importance of finite-size effects is discussed.
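For an uncorrelated network, a standard approximation for this probability is easy to compute directly: a randomly followed edge ends at a node of degree j with probability j·P(j)/⟨k⟩, and a degree-k node is a local leader iff all k of its neighbors have degree at most k. This sketch uses a toy degree distribution of my own choosing, not the paper's exact derivation:

```python
def local_leader_prob(P, k):
    """Probability that a degree-k node is a local leader (all
    neighbours have degree <= k) in an uncorrelated network with
    degree distribution P (dict: degree -> probability).  A randomly
    followed edge ends at a degree-j node with probability
    j*P(j)/<k>; independence across the k edges gives Q(k)**k."""
    mean_k = sum(j * p for j, p in P.items())
    Q = sum(j * p for j, p in P.items() if j <= k) / mean_k
    return Q ** k

# Toy degree distribution truncated at degree 5 (assumed for
# illustration only).
P = {1: 0.30, 2: 0.30, 3: 0.20, 4: 0.15, 5: 0.05}
for k in sorted(P):
    print(k, round(local_leader_prob(P, k), 4))
```

A node holding the maximum degree of the distribution is a local leader with certainty, which is a quick sanity check on the formula.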
Random bearings and their stability.
Mahmoodi Baram, Reza; Herrmann, Hans J
2005-11-25
Self-similar space-filling bearings have been proposed some time ago as models for the motion of tectonic plates and appearance of seismic gaps. These models have two features which, however, seem unrealistic, namely, high symmetry in the arrangement of the particles, and lack of a lower cutoff in the size of the particles. In this work, an algorithm for generating random bearings in both two and three dimensions is presented. Introducing a lower cutoff for the sizes of the particles, the instabilities of the bearing under an external force such as gravity, are studied. PMID:16384225
Generation of pseudo-random numbers
NASA Technical Reports Server (NTRS)
Howell, L. W.; Rheinfurth, M. H.
1982-01-01
Practical methods for generating acceptable random numbers from a variety of probability distributions which are frequently encountered in engineering applications are described. The speed, accuracy, and guarantee of statistical randomness of the various methods are discussed.
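One of the most widely used of these practical methods is inverse-transform sampling: feed a uniform variate through the inverse of the target cumulative distribution function. A minimal sketch for the exponential distribution (my choice of example, not necessarily one from the report):

```python
import math
import random

def sample_exponential(lam, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1 - U)/lam follows an Exponential(lam) distribution,
    because the exponential CDF 1 - exp(-lam*x) is easy to invert."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # should be close to 1/lam = 0.5
```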
Self-correcting random number generator
Humble, Travis S.; Pooser, Raphael C.
2016-09-06
A system and method for generating random numbers. The system may include a random number generator (RNG), such as a quantum random number generator (QRNG) configured to self-correct or adapt in order to substantially achieve randomness from the output of the RNG. By adapting, the RNG may generate a random number that may be considered random regardless of whether the random number itself is tested as such. As an example, the RNG may include components to monitor one or more characteristics of the RNG during operation, and may use the monitored characteristics as a basis for adapting, or self-correcting, to provide a random number according to one or more performance criteria.
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
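The excursion-based principle can be made concrete with a small Monte-Carlo sketch. This is an illustration of the general idea (compare the observed maximum excursion against excursions of sign-randomized walks), not a reimplementation of any of the named tests:

```python
import random

def max_excursion(series):
    """Largest absolute displacement from the starting value."""
    start = series[0]
    return max(abs(x - start) for x in series)

def random_walk_null_p(series, trials=2000, seed=1):
    """Fraction of unbiased random walks (the observed step sizes with
    randomized signs) whose maximum excursion reaches that of the
    data.  A small value suggests more net change than a random walk
    typically produces, i.e. a directional trend."""
    rng = random.Random(seed)
    steps = [b - a for a, b in zip(series, series[1:])]
    obs = max_excursion(series)
    hits = 0
    for _ in range(trials):
        walk, x = [0.0], 0.0
        for s in steps:
            x += rng.choice((-1, 1)) * abs(s)
            walk.append(x)
        if max_excursion(walk) >= obs:
            hits += 1
    return hits / trials

# A strong linear trend is essentially never matched by the null walks.
trend = [0.1 * t for t in range(60)]
print(random_walk_null_p(trend))
```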
Eliminating bias in randomized controlled trials: importance of allocation concealment and masking.
Viera, Anthony J; Bangdiwala, Shrikant I
2007-02-01
Randomization in randomized controlled trials involves more than generation of a random sequence by which to assign subjects. For randomization to be successfully implemented, the randomization sequence must be adequately protected (concealed) so that investigators, involved health care providers, and subjects are not aware of the upcoming assignment. The absence of adequate allocation concealment can lead to selection bias, one of the very problems that randomization was supposed to eliminate. Authors of reports of randomized trials should provide enough details on how allocation concealment was achieved so the reader can determine the likelihood of success. Fortunately, a plan of allocation concealment can always be incorporated into the design of a randomized trial. Certain methods minimize the risk of concealment failing more than others. Keeping knowledge of subjects' assignment after allocation from subjects, investigators/health care providers, or those assessing outcomes is referred to as masking (also known as blinding). The goal of masking is to prevent ascertainment bias. In contrast to allocation concealment, masking cannot always be incorporated into a randomized controlled trial. Both allocation concealment and masking add to the elimination of bias in randomized controlled trials.
SOME NOTES ON VALIDATING TEACHER SELECTION PROCEDURES.
ERIC Educational Resources Information Center
MEDLEY, DONALD M.
At present, the part of any teacher effectiveness criterion that can be predicted with a selection test is probably irrelevant to teacher competence. Testing the validity of predictors of teacher competence is impossible because it would require hiring a sizable random sample of all who apply for positions, without prior screening. Further,…
Stipčević, Mario
2016-03-01
In this work, a new type of elementary logic circuit, named random flip-flop (RFF), is proposed, experimentally realized, and studied. Unlike conventional Boolean logic circuits whose action is deterministic and highly reproducible, the action of a RFF is intentionally made maximally unpredictable and, in the proposed realization, derived from a fundamentally random process of emission and detection of light quanta. We demonstrate novel applications of RFF in randomness preserving frequency division, random frequency synthesis, and random number generation. Possible usages of these applications in the information and communication technology, cryptographic hardware, and testing equipment are discussed. PMID:27036825
Quantifying errors without random sampling
Phillips, Carl V; LaPole, Luwanna M
2003-01-01
Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
Classical randomness in quantum measurements
NASA Astrophysics Data System (ADS)
Mauro D'Ariano, Giacomo; Lo Presti, Paoloplacido; Perinotti, Paolo
2005-07-01
Like quantum states, quantum measurements can be 'mixed', corresponding to a random choice within an ensemble of measuring apparatuses. Such mixing is equivalent to a sort of hidden variable, which produces a noise of purely classical nature. It is then natural to ask which apparatuses are indecomposable, i.e. do not correspond to any random choice of apparatuses. This problem is interesting not only for foundations, but also for applications, since most optimization strategies give optimal apparatuses that are indecomposable. Mathematically the problem is posed by describing each measuring apparatus by a positive operator-valued measure (POVM), which gives the statistics of the outcomes for any input state. The POVMs form a convex set, and in this language the indecomposable apparatuses are represented by extremal points, the analogue of 'pure states' in the convex set of states. Differently from the case of states, however, indecomposable POVMs are not necessarily rank-one, e.g. von Neumann measurements. In this paper we give a complete classification of indecomposable apparatuses (for discrete spectrum), by providing different necessary and sufficient conditions for extremality of POVMs, along with a simple general algorithm for the decomposition of a POVM into extremals. As an interesting application, 'informationally complete' measurements are analysed in this respect. The convex set of POVMs is fully characterized by determining its border in terms of simple algebraic properties of the corresponding POVMs.
The wasteland of random supergravities
NASA Astrophysics Data System (ADS)
Marsh, David; McAllister, Liam; Wrase, Timm
2012-03-01
We show that in a general 𝒩 = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(−c N^p), with c, p being constants. For generic critical points we find p ≈ 1.5, while for approximately supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
Random sources for cusped beams.
Li, Jia; Wang, Fei; Korotkova, Olga
2016-08-01
We introduce two novel classes of partially coherent sources whose degrees of coherence are described by the rectangular Lorentz-correlated Schell-model (LSM) and rectangular fractional multi-Gaussian-correlated Schell-model (FMGSM) functions. Based on the generalized Collins formula, analytical expressions are derived for the spectral density distributions of these beams propagating through a stigmatic ABCD optical system. It is shown that beams belonging to both classes form the spectral density apex that is much higher and sharper than that generated by the Gaussian Schell-model (GSM) beam with a comparable coherence state. We experimentally generate these beams by using a nematic, transmissive spatial light modulator (SLM) that serves as a random phase screen controlled by a computer. The experimental data is consistent with theoretical predictions. Moreover, it is illustrated that the FMGSM beam generated in our experiments has a better focusing capacity than the GSM beam with the same coherence state. The applications that can potentially benefit from the use of novel beams range from material surface processing, to communications and sensing through random media. PMID:27505746
Randomized approximate nearest neighbors algorithm.
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-09-20
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in ℝ^d, the algorithm attempts to find k nearest neighbors for each of the x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·log d + k·(d + log k)·log N) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point x ∈ ℝ^d. The cost of each such query is proportional to T·(d·log d + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples.
Randomness in Sequence Evolution Increases over Time.
Wang, Guangyu; Sun, Shixiang; Zhang, Zhang
2016-01-01
The second law of thermodynamics states that entropy, as a measure of randomness in a system, increases over time. Although studies have investigated biological sequence randomness from different aspects, it remains unknown whether sequence randomness changes over time and whether any such change is consistent with the second law of thermodynamics. To capture the dynamics of randomness in molecular sequence evolution, here we detect sequence randomness based on a collection of eight statistical randomness tests and investigate the randomness variation of coding sequences with an application to Escherichia coli. Given that core/essential genes are more ancient than specific/non-essential genes, our results clearly show that core/essential genes are more random than specific/non-essential genes and accordingly indicate that sequence randomness indeed increases over time, consistent with the second law of thermodynamics. We further find that an increase in sequence randomness leads to increasing randomness of GC content and longer sequence length. Taken together, our study presents an important finding, for the first time, that sequence randomness increases over time, which may provide profound insights for unveiling the underlying mechanisms of molecular sequence evolution. PMID:27224236
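One of the simplest statistical proxies for sequence randomness of the kind discussed above is Shannon entropy per symbol; the paper's actual battery of eight tests is more elaborate, so this is only an illustrative sketch:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol: a simple statistical proxy
    for sequence randomness, maximal when all symbols appear equally
    often."""
    n = len(seq)
    counts = Counter(seq)
    return 0.0 - sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("AAAAAAAA"))          # → 0.0, a constant sequence
print(shannon_entropy("ACGTACGTACGTACGT"))  # → 2.0, uniform over 4 bases
```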
Randomness, Its Meanings and Educational Implications.
ERIC Educational Resources Information Center
Batanero, Carmen; Green, David R.; Serrano, Luis Romero
1998-01-01
Presents an analysis of the different meanings associated with randomness throughout its historical evolution as well as a summary of research concerning the subjective perception of randomness by children and adolescents. Some teaching suggestions are included to help students gradually understand the characteristics of random phenomena. Contains…
Cluster randomization: a trap for the unwary.
Underwood, M; Barnett, A; Hajioff, S
1998-01-01
Controlled trials that randomize by practice can provide robust evidence to inform patient care. However, compared with randomizing by each individual patient, this approach may have substantial implications for sample size calculations and the interpretation of results. An increased awareness of these effects will improve the quality of research based on randomization by practice. PMID:9624757
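The sample-size implication mentioned above is usually quantified with the design effect, DE = 1 + (m − 1)·ICC, where m is the average cluster size and ICC the intra-cluster correlation coefficient. The numbers below are illustrative assumptions, not figures from the article:

```python
def design_effect(cluster_size, icc):
    """Inflation in required sample size when randomizing by cluster
    (e.g. by practice) instead of by individual:
    DE = 1 + (m - 1) * ICC, with m the average cluster size and ICC
    the intra-cluster correlation coefficient."""
    return 1.0 + (cluster_size - 1) * icc

# 20 patients per practice with a modest ICC of 0.05 almost doubles
# the sample size needed relative to individual randomization.
n_individual = 400
de = design_effect(20, 0.05)
print(de, round(n_individual * de))
```

Even a small ICC is amplified by cluster size, which is exactly the trap for the unwary that the title warns about.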
Source-Independent Quantum Random Number Generation
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts: a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretically provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bits. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bit/s.
Parabolic Anderson Model in a Dynamic Random Environment: Random Conductances
NASA Astrophysics Data System (ADS)
Erhard, D.; den Hollander, F.; Maillard, G.
2016-06-01
The parabolic Anderson model is defined as the partial differential equation ∂u(x, t)/∂t = κΔu(x, t) + ξ(x, t)u(x, t), x ∈ ℤ^d, t ≥ 0, where κ ∈ [0, ∞) is the diffusion constant, Δ is the discrete Laplacian, and ξ is a dynamic random environment that drives the equation. The initial condition u(x, 0) = u_0(x), x ∈ ℤ^d, is typically taken to be non-negative and bounded. The solution of the parabolic Anderson equation describes the evolution of a field of particles performing independent simple random walks with binary branching: particles jump at rate 2dκ, split into two at rate ξ ∨ 0, and die at rate (-ξ) ∨ 0. In earlier work we looked at the Lyapunov exponents λ_p(κ) = lim_{t→∞} (1/t) log E([u(0,t)]^p)^{1/p}, p ∈ ℕ, and λ_0(κ) = lim_{t→∞} (1/t) log u(0,t). For the former we derived quantitative results on the κ-dependence for four choices of ξ: space-time white noise, independent simple random walks, the exclusion process and the voter model. For the latter we obtained qualitative results under certain space-time mixing conditions on ξ. In the present paper we investigate what happens when κΔ is replaced by Δ𝓚, where 𝓚 = {𝓚(x, y) : x, y ∈ ℤ^d, x ∼ y} is a collection of random conductances between neighbouring sites replacing the constant conductances κ in the homogeneous model. We show that the associated annealed Lyapunov exponents λ_p(𝓚), p ∈ ℕ, are given by the formula λ_p(𝓚) = sup{λ_p(κ) : κ ∈ Supp(𝓚)}, where, for a fixed realisation of 𝓚, Supp(𝓚) is the set of values taken by the 𝓚-field. We also show that for the associated quenched Lyapunov exponent λ_0(𝓚) this formula only provides a lower bound, and we conjecture that an upper bound holds when Supp(𝓚) is replaced by its convex hull. Our proof is valid for three classes of reversible ξ, and for all 𝓚
In silico selection of RNA aptamers
Chushak, Yaroslav; Stone, Morley O.
2009-01-01
In vitro selection of RNA aptamers that bind to a specific ligand usually begins with a random pool of RNA sequences. We propose a computational approach for designing a starting pool of RNA sequences for the selection of RNA aptamers for specific analyte binding. Our approach consists of three steps: (i) selection of RNA sequences based on their secondary structure, (ii) generating a library of three-dimensional (3D) structures of RNA molecules and (iii) high-throughput virtual screening of this library to select aptamers with binding affinity to a desired small molecule. We developed a set of criteria that allows one to select a sequence with potential binding affinity from a pool of random sequences and developed a protocol for RNA 3D structure prediction. As verification, we tested the performance of in silico selection on a set of six known aptamer–ligand complexes. The structures of the native sequences for the ligands in the testing set were among the top 5% of the selected structures. The proposed approach reduces the RNA sequences search space by four to five orders of magnitude—significantly accelerating the experimental screening and selection of high-affinity aptamers. PMID:19465396
Functional methods for waves in random media
NASA Technical Reports Server (NTRS)
Chow, P. L.
1981-01-01
Some basic ideas in functional methods for waves in random media are illustrated through a simple random differential equation. These methods are then generalized to solve certain random parabolic equations via an exponential representation given by the Feynman-Kac formula. It is shown that these functional methods are applicable to a number of problems in random wave propagation. They include the forward-scattering approximation in Gaussian white-noise media; the solution of the optical beam propagation problem by a phase-integral method; the high-frequency scattering by bounded random media; and a derivation of approximate moment equations from the functional integral representation.
Zhao, Wenle; Hill, Michael D; Palesch, Yuko
2015-12-01
In many clinical trials, baseline covariates could affect the primary outcome. Commonly used strategies to balance baseline covariates include stratified constrained randomization and minimization. Stratification is limited to a few categorical covariates, and minimization lacks the randomness of treatment allocation; both apply only to categorical covariates. As a result, serious imbalances could occur in important baseline covariates not included in the randomization algorithm. Furthermore, randomness of treatment allocation could be significantly compromised because of the high proportion of deterministic assignments associated with stratified block randomization and minimization, potentially resulting in selection bias. Serious baseline covariate imbalances and selection biases often contribute to controversial interpretation of the trial results. The National Institute of Neurological Disorders and Stroke recombinant tissue plasminogen activator Stroke Trial and the Captopril Prevention Project are two examples. In this article, we propose a new randomization strategy, termed minimal sufficient balance randomization, which both prevents serious imbalances in all important baseline covariates, including categorical and continuous types, and preserves the randomness of treatment allocation. Computer simulations are conducted using the data from the National Institute of Neurological Disorders and Stroke recombinant tissue plasminogen activator Stroke Trial. Serious imbalances in four continuous and one categorical covariate are prevented with a small cost in treatment allocation randomness. A scenario of simultaneously balancing 11 baseline covariates is explored with similar promising results. The proposed minimal sufficient balance randomization algorithm can be easily implemented in computerized central randomization systems for large multicenter trials.
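A minimal sketch of the general idea: covariate-adaptive allocation that stays random unless an imbalance becomes serious. The imbalance metric, threshold, and biased-coin probability below are illustrative assumptions, not the article's actual algorithm:

```python
import random

# Covariate-adaptive allocation in the spirit of "minimal sufficient
# balance": assignment stays a fair coin flip unless adding the next
# patient would leave a serious imbalance in a continuous covariate,
# in which case a biased (not deterministic) coin favors the arm that
# reduces imbalance. All tuning values here are illustrative.

def imbalance(arm_a, arm_b):
    """Absolute difference in covariate means between arms (0 if an arm is empty)."""
    if not arm_a or not arm_b:
        return 0.0
    return abs(sum(arm_a) / len(arm_a) - sum(arm_b) / len(arm_b))

def assign(next_value, arm_a, arm_b, threshold=5.0, biased_p=0.8, rng=random.random):
    """Return 'A' or 'B' for a patient with the given continuous covariate value."""
    imb_if_a = imbalance(arm_a + [next_value], arm_b)
    imb_if_b = imbalance(arm_a, arm_b + [next_value])
    if max(imb_if_a, imb_if_b) < threshold or imb_if_a == imb_if_b:
        return 'A' if rng() < 0.5 else 'B'          # pure randomization
    favored = 'A' if imb_if_a < imb_if_b else 'B'
    other = 'B' if favored == 'A' else 'A'
    return favored if rng() < biased_p else other   # biased coin, never deterministic
```

Keeping the biased coin strictly below probability 1 is what preserves allocation randomness, in contrast to deterministic minimization.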
Instructive selection and immunological theory.
Lederberg, Joshua
2002-07-01
The turning point of modern immunological theory was the advent of the clonal selection theory (Burnet, Talmage, 1957). A useful heuristic in the classification of theoretical models was the contrast of 'instructive' with 'selective' models of the acquisition of information by biological systems. The neo-Darwinian synthesis of the 1940s had consolidated biologists' model of evolution based on prior random variation and natural selection, viz. differential fecundity. While evolution in the large was by then pretty well settled, controversy remained about examples of cellular adaptation to chemical challenges, like induced drug-resistance, enzyme formation and the antibody response. While instructive theories have been on the decline, some clear-cut examples can be found of molecular imprinting in the abiotic world, leading, e.g., to the production of specific sorbents. Template-driven assembly, as in DNA synthesis, has remained a paradigm of instructive specification. Nevertheless, the classification may break down with more microscopic scrutiny of the processes of molecular fit of substrates with enzymes, or of monomers to an elongating polymer chain, as the reactants often traverse a state space from which activated components are appropriately selected. The same process may be 'instructive' from a holistic perspective and 'selective' from an atomic one.
Certifying Unpredictable Randomness from Quantum Nonlocality
NASA Astrophysics Data System (ADS)
Bierhorst, Peter
2015-03-01
A device-independent quantum randomness protocol takes an initial random seed as input and then expands it into a longer random string. It has been proven that if the initial random seed is trusted to be unpredictable, then the longer output string can also be certified to be unpredictable by an experimental violation of Bell's inequality. It has furthermore been argued that the initial random seed may not need to be truly unpredictable, but only uncorrelated to specific parts of the Bell experiment. In this work, we demonstrate rigorously that this is indeed true, under assumptions related to "no superdeterminism/no conspiracy" concepts along with the no-signaling assumption. So if we assume that superluminal signaling is impossible, then a loophole-free test of Bell's inequality would be able to generate provably unpredictable randomness from an input source of (potentially predictable) classical randomness.
High-speed random access laser tuning
Thompson, D.C.; Busch, G.E.; Hewitt, C.J.; Remelius, D.K.; Shimada, T.; Strauss, C.E.; Wilson, C.W.; Zaugg, T.J.
1999-04-01
We have developed a technique for laser tuning at rates of 100 kHz or more using a pair of acousto-optic modulators. In addition to all-electronic wavelength control, the same modulators also can provide electronically variable Q-switching, cavity length and power stabilization, chirp and linewidth control, and variable output coupling, all at rates far beyond what is possible with conventional mechanically tuned components. Tuning rates of 70 kHz have been demonstrated on a radio-frequency-pumped CO2 laser, with random access to over 50 laser lines spanning a 17% range in wavelength and with wavelength discrimination better than 1 part in 1000. A compact tuner and Q-switch has been deployed in a 5–10-kHz pulsed lidar system. The modulators each operate at a fixed Bragg angle, with the acoustic frequency determining the selected wavelength. This arrangement doubles the wavelength resolution without introducing an undesirable frequency shift. © 1999 Optical Society of America
Model ecosystems with random nonlinear interspecies interactions.
Santos, Danielle O C; Fontanari, José F
2004-12-01
The principle of competitive exclusion in ecology establishes that two species living together cannot occupy the same ecological niche. Here we present a model ecosystem in which the species are described by a series of phenotypic characters and the strength of the competition between two species is given by a nondecreasing (modulating) function of the number of common characters. Using analytical tools of statistical mechanics we find that the ecosystem diversity, defined as the fraction of species that coexist at equilibrium, decreases as the complexity (i.e., number of characters) of the species increases, regardless of the modulating function. By considering both selective and random elimination of the links in the community web, we show that ecosystems composed of simple species are more robust than those composed of complex species. In addition, we show that the puzzling result that there exist either rich or poor ecosystems for a linear modulating function is not typical of communities in which the interspecies interactions are determined by a complementarity rule.
NASA Technical Reports Server (NTRS)
Kester, DO; Bontekoe, Tj. Romke
1994-01-01
In order to make the best high resolution images of IRAS data it is necessary to incorporate any knowledge about the instrument into a model: the IRAS model. This is necessary since every remaining systematic effect will be amplified by any high resolution technique into spurious artifacts in the images. The search for random noise is in fact the never-ending quest for better-quality results, which can only be obtained with better models. The Dutch high-resolution effort has resulted in HIRAS, which drives the MEMSYS5 algorithm and is specifically designed for IRAS image construction. A detailed description of HIRAS with many results is in preparation. In this paper we emphasize many of the instrumental effects incorporated in the IRAS model, including our improved 100 micron IRAS response functions.
Flow Through Randomly Curved Manifolds
Mendoza, M.; Succi, S.; Herrmann, H. J.
2013-01-01
We present a computational study of the transport properties of campylotic (intrinsically curved) media. It is found that the relation between the flow through a campylotic medium, consisting of randomly located curvature perturbations, and the average Ricci scalar of the system exhibits two distinct functional expressions, depending on whether the typical spatial extent of the curvature perturbation lies above or below the critical value maximizing the overall scalar of curvature. Furthermore, the flow through such systems as a function of the number of curvature perturbations is found to present a sublinear behavior for large concentrations, due to the interference between curvature perturbations leading to an overall less curved space. We have also characterized the flux through such media as a function of the local Reynolds number and the scale of interaction between impurities. For the purpose of this study, we have also developed and validated a new lattice Boltzmann model. PMID:24173367
Random walks for image segmentation.
Grady, Leo
2006-11-01
A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs.
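The analytic computation described here reduces to solving a sparse linear system built from the graph Laplacian: with seeds split off, the first-arrival probabilities solve L_U x = -B^T m. A toy sketch on a 4-node path graph with unit edge weights (an assumption for illustration, not the full image pipeline):

```python
import numpy as np

# Random-walker idea on a toy graph: the probability that a walker
# started at each unlabeled node first reaches the seed carrying a
# given label solves a discrete Dirichlet problem L_U x = -B^T m,
# where L is the graph Laplacian partitioned into seed/non-seed blocks.

edges = [(0, 1), (1, 2), (2, 3)]          # a path graph 0-1-2-3, unit weights
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

seeds = [0, 3]                            # prelabeled "pixels"
m = np.array([1.0, 0.0])                  # node 0 carries the label, node 3 does not
unlabeled = [1, 2]

L_U = L[np.ix_(unlabeled, unlabeled)]     # Laplacian block for unlabeled nodes
B_T = L[np.ix_(unlabeled, seeds)]         # coupling block to the seeds
x = np.linalg.solve(L_U, -B_T @ m)        # first-arrival probabilities

print(x)  # on a chain with unit weights these are linear in distance: [2/3, 1/3]
```

Each unlabeled node is then assigned the label with the greatest probability, exactly as the abstract describes; on images the same system is solved once per label over the pixel adjacency graph.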
Random Telegraph Noise in Microstructures
Kogan, S.
1998-10-01
The theory of random current switchings in conductors with S-type current-voltage characteristic is presented. In the range of bistability, the mean time spent by the system in the low-current state before a transition to the high-current state occurs, τ̄_l, decreases with voltage, and that for the high-current state, τ̄_h, grows with voltage; both variations are exponential-like. τ̄_l = τ̄_h at a definite voltage in the bistability range. These results are in full accordance with experiments on microstructures. Because of the growth of both times with the size of the conductor, such noise is observable just in microstructures. © 1998 The American Physical Society
Structure of random bidisperse foam.
Reinelt, Douglas A.; van Swol, Frank B.; Kraynik, Andrew Michael
2005-02-01
The Surface Evolver was used to compute the equilibrium microstructure of random soap foams with bidisperse cell-size distributions and to evaluate topological and geometric properties of the foams and individual cells. The simulations agree with the experimental data of Matzke and Nestler for the probability ρ(F) of finding cells with F faces and its dependence on the fraction of large cells. The simulations also agree with the theory for isotropic Plateau polyhedra (IPP), which describes the F-dependence of cell geometric properties, such as surface area, edge length, and mean curvature (diffusive growth rate); this is consistent with results for polydisperse foams. Cell surface areas are about 10% greater than spheres of equal volume, which leads to a simple but accurate relation for the surface free energy density of foams. The Aboav-Weaire law is not valid for bidisperse foams.
Ergodic theory, randomness, and "chaos".
Ornstein, D S
1989-01-13
Ergodic theory is the theory of the long-term statistical behavior of dynamical systems. The baker's transformation is an object of ergodic theory that provides a paradigm for the possibility of deterministic chaos. It can now be shown that this connection is more than an analogy and that at some level of abstraction a large number of systems governed by Newton's laws are the same as the baker's transformation. Going to this level of abstraction helps to organize the possible kinds of random behavior. The theory also gives new concrete results. For example, one can show that the same process could be produced by a mechanism governed by Newton's laws or by a mechanism governed by coin tossing. It also gives a statistical analog of structural stability.
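The baker's transformation mentioned above is easy to iterate directly. This sketch records which vertical half of the unit square the orbit visits, producing the symbolic "coin flips" that make the deterministic map statistically indistinguishable from coin tossing:

```python
# The baker's map on the unit square: stretch horizontally by 2,
# cut, and stack. Deterministic, yet its symbolic dynamics (which
# half x falls in) reproduces the binary expansion of x, i.e. a
# coin-tossing process for typical starting points.

def baker(x, y):
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, (y + 1) / 2

def symbol_sequence(x, y, steps):
    """0/1 record of which vertical half the orbit visits: the 'coin flips'."""
    bits = []
    for _ in range(steps):
        bits.append(0 if x < 0.5 else 1)
        x, y = baker(x, y)
    return bits

# The bit stream is the binary expansion of x, so a generic x yields
# a fair-coin-looking sequence from a purely deterministic rule.
print(symbol_sequence(0.3, 0.7, 8))
```

This is the concrete face of the abstract claim: the same statistical process can come from Newton-like determinism or from coin tossing.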
Random vibration of compliant wall
NASA Technical Reports Server (NTRS)
Yang, J.-N.; Heller, R. A.
1976-01-01
The paper is concerned with the realistic case of two-dimensional random motion of a membrane with bending stiffness supported on a viscoelastic spring substrate and on an elastic base plate under both subsonic and supersonic boundary layer turbulence. The cross-power spectral density of surface displacements is solved in terms of design variables of the compliant wall - such as the dimensions and material properties of the membrane (Mylar), substrate (PVC foam), and panel (aluminum) - so that a sensitivity analysis can be made to examine the influence of each design variable on the surface response statistics. Three numerical examples typical of compliant wall design are worked out and their response statistics in relation to wave drag and roughness drag are assessed. The results can serve as a guideline for experimental investigation of the drag reduction concept through the use of a compliant wall.
Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Wilson, Mark; Shih, Ching-Lin
2006-01-01
This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random…
Lasso adjustments of treatment effect estimates in randomized experiments
Bloniarz, Adam; Liu, Hanzhong; Zhang, Cun-Hui; Sekhon, Jasjeet S.; Yu, Bin
2016-01-01
We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman–Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS. PMID:27382153
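A minimal numpy sketch of the contrast the paper draws, with plain OLS standing in for the Lasso+OLS variant (covariates here are few, so no selection step is needed; the data-generating model is invented for illustration):

```python
import numpy as np

# Unadjusted difference-of-means estimator versus a covariate-adjusted
# estimator in a simulated randomized experiment. Under randomization
# both are consistent for the treatment effect; adjustment removes the
# variance contributed by prognostic covariates. With many covariates,
# the paper's Lasso (or Lasso-then-OLS) would replace plain OLS here.

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=(n, 3))               # baseline covariates
t = rng.integers(0, 2, size=n)            # randomized treatment indicator
tau = 1.0                                 # true treatment effect (assumed)
y = tau * t + x @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

diff_means = y[t == 1].mean() - y[t == 0].mean()

# Regression adjustment: y ~ 1 + t + centered covariates; the
# coefficient on t estimates the treatment effect.
xc = x - x.mean(axis=0)
design = np.column_stack([np.ones(n), t, xc])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
adjusted = beta[1]

print(round(diff_means, 3), round(adjusted, 3))
```

Both estimates hover around the true effect; over repeated simulations the adjusted one has visibly smaller spread, which is the variance reduction the abstract describes.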
Weight distributions for turbo codes using random and nonrandom permutations
NASA Technical Reports Server (NTRS)
Dolinar, S.; Divsalar, D.
1995-01-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
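A common concrete form of such a "semirandom" permutation is the spread (S-random) interleaver: candidates are drawn at random but rejected if they land within S of the images of the last S indices, which breaks up the troublesome weight-2 input sequences. A sketch under that assumption (the article does not specify this exact construction):

```python
import random

# Spread (S-random) interleaver: a random permutation constrained so
# that indices within s of each other map to positions more than s
# apart. Built greedily from a shuffled candidate list, retrying from
# scratch if the greedy pass dead-ends.

def s_random_interleaver(n, s, max_tries=2000, seed=1):
    rng = random.Random(seed)
    for _ in range(max_tries):
        remaining = list(range(n))
        rng.shuffle(remaining)
        perm = []
        ok = True
        for _ in range(n):
            for idx, cand in enumerate(remaining):
                # cand must differ by more than s from the last s outputs
                if all(abs(cand - p) > s for p in perm[-s:]):
                    perm.append(cand)
                    remaining.pop(idx)
                    break
            else:
                ok = False   # dead end: retry with a fresh shuffle
                break
        if ok:
            return perm
    raise RuntimeError("no S-random permutation found; reduce s")

perm = s_random_interleaver(32, 3)
```

By construction, any two inputs at most 3 apart are sent more than 3 apart, so a weight-2 self-terminating input cannot stay compact through the interleaver; choosing S near sqrt(N/2) is a common rule of thumb.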
Compressive sensing optical coherence tomography using randomly accessible lasers
NASA Astrophysics Data System (ADS)
Harfouche, Mark; Satyan, Naresh; Vasilyev, Arseny; Yariv, Amnon
2014-05-01
We propose and demonstrate a novel compressive sensing swept source optical coherence tomography (SSOCT) system that enables high speed images to be taken while maintaining the high resolution offered by a large bandwidth sweep. Conventional SSOCT systems sweep the optical frequency of a laser ω(t) to determine the depth of the reflectors at a given lateral location. A scatterer located at delay τ appears as a sinusoid cos(ω(t)τ) at the photodetector. The finite optical chirp rate and the speed of analog-to-digital and digital-to-analog converters limit the acquisition rate of an axial scan. The proposed acquisition modality enables much faster image acquisition rates by interrogating the beat signal at randomly selected optical frequencies while preserving resolution and depth of field. The system utilizes a randomly accessible laser, a modulated grating Y-branch laser, to sample the interference pattern from a scene at randomly selected optical frequencies over an optical bandwidth of 5 THz, corresponding to a resolution of 30 μm in air. The depth profile is then reconstructed using an l1 minimization algorithm with a LASSO constraint. Signal-dependent noise sources, shot noise and phase noise, are analyzed and taken into consideration during the recovery. Redundant dictionaries are used to improve the reconstruction of the depth profile. A compression by a factor of 10 for sparse targets up to a depth of 15 mm in noisy environments is shown.
Evaluating Random Forests for Survival Analysis using Prediction Error Curves.
Mogensen, Ulla B; Ishwaran, Hemant; Gerds, Thomas A
2012-09-01
Prediction error curves are increasingly used to assess and compare predictions in survival analysis. This article surveys the R package pec which provides a set of functions for efficient computation of prediction error curves. The software implements inverse probability of censoring weights to deal with right censored data and several variants of cross-validation to deal with the apparent error problem. In principle, all kinds of prediction models can be assessed, and the package readily supports most traditional regression modeling strategies, like Cox regression or additive hazard regression, as well as state of the art machine learning methods such as random forests, a nonparametric method which provides promising alternatives to traditional strategies in low and high-dimensional settings. We show how the functionality of pec can be extended to yet unsupported prediction models. As an example, we implement support for random forest prediction models based on the R-packages randomSurvivalForest and party. Using data of the Copenhagen Stroke Study we use pec to compare random forests to a Cox regression model derived from stepwise variable selection. Reproducible results on the user level are given for publicly available data from the German breast cancer study group.
Solving the accuracy-diversity dilemma via directed random walks.
Liu, Jian-Guo; Shi, Kerui; Guo, Qiang
2012-01-01
Random walks have been successfully used to measure user or object similarities in collaborative filtering (CF) recommender systems, which is of high accuracy but low diversity. A key challenge of a CF system is that the reliably accurate results are obtained with the help of peers' recommendation, but the most useful individual recommendations are hard to be found among diverse niche objects. In this paper we investigate the direction effect of the random walk on user similarity measurements and find that the user similarity, calculated by directed random walks, is inversely related to the initial node's degree. Since the ratio of small-degree users to large-degree users is very large in real data sets, the large-degree users' selections are recommended extensively by traditional CF algorithms. By tuning the user similarity direction from neighbors to the target user, we introduce a new algorithm specifically to address the challenge of diversity of CF and show how it can be used to solve the accuracy-diversity dilemma. Without relying on any context-specific information, we are able to obtain accurate and diverse recommendations, which outperform the state-of-the-art CF methods. This work suggests that the random-walk direction is an important factor to improve the personalized recommendation performance.
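A toy sketch of the direction effect on a small user-object bipartite graph: the two-step walk probability from user i to user j is normalized by different node degrees depending on the starting point, so the similarity is asymmetric. The adjacency matrix below is invented for illustration:

```python
import numpy as np

# Two-step random-walk user similarity on a user-object bipartite
# graph: a walker goes user -> object -> user. Because each hop is
# normalized by the degree of the node it leaves, s(i, j) != s(j, i)
# in general, which is the direction effect the paper exploits.

A = np.array([[1, 1, 1, 0],   # user 0 selected objects 0, 1, 2 (degree 3)
              [1, 0, 0, 0],   # user 1 selected object 0 (degree 1)
              [0, 1, 0, 1]])  # user 2 selected objects 1, 3 (degree 2)

user_deg = A.sum(axis=1, keepdims=True)
obj_deg = A.sum(axis=0, keepdims=True)

P_uo = A / user_deg           # user -> object transition probabilities
P_ou = (A / obj_deg).T        # object -> user transition probabilities
S = P_uo @ P_ou               # S[i, j]: prob. a walker from user i lands on user j

print(S[0, 1], S[1, 0])       # asymmetric: depends on which user starts the walk
```

Here S[0, 1] = 1/6 while S[1, 0] = 1/2: the low-degree user 1 "sees" the high-degree user 0 much more strongly than vice versa, which is why flipping the walk direction changes who gets recommended.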
Natural selection and social preferences.
Weibull, Jörgen W; Salomonsson, Marcus
2006-03-01
A large number of individuals are randomly matched into groups, where each group plays a finite symmetric game. Individuals breed true. The expected number of surviving offspring depends on own material payoff, but may also, due to cooperative breeding and/or reproductive competition, depend on the material payoffs to other group members. The induced population dynamic is equivalent to the replicator dynamic for a game with payoffs derived from those in the original game. We apply this selection dynamic to a number of examples, including prisoners' dilemma games with and without a punishment option, coordination games, and hawk-dove games. For each of these, we compare the outcomes with those obtained under the standard replicator dynamic. By way of a revealed-preference argument, our selection dynamic can explain certain "altruistic" and "spiteful" behaviors that are consistent with individuals having social preferences.
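The replicator dynamic the authors build on can be sketched for a standard 2x2 prisoner's dilemma (the payoff values below are illustrative, not from the paper): strategies whose payoff exceeds the population average grow in frequency.

```python
# Discrete-time (Euler) replicator dynamic for a symmetric 2x2 game.
# payoff[i][j]: payoff to strategy i against strategy j (0 = C, 1 = D).
# Illustrative prisoner's dilemma payoffs: T=5 > R=3 > P=1 > S=0.
payoff = [[3.0, 0.0],
          [5.0, 1.0]]

def step(x, dt=0.1):
    """One Euler step for the cooperator share x: dx/dt = x (f_C - f_bar)."""
    f_c = x * payoff[0][0] + (1 - x) * payoff[0][1]   # fitness of C
    f_d = x * payoff[1][0] + (1 - x) * payoff[1][1]   # fitness of D
    f_bar = x * f_c + (1 - x) * f_d                   # population average fitness
    return x + dt * x * (f_c - f_bar)

x = 0.9
for _ in range(200):
    x = step(x)
print(x)   # under the standard dynamic, cooperation dies out
```

This is the baseline the paper compares against: with defection strictly dominant, f_C < f_D everywhere, so cooperators vanish, and it takes the modified, group-structured payoffs to sustain "altruistic" behavior.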
Effects of zinc supplementation on subscales of anorexia in children: A randomized controlled trial
Khademian, Majid; Farhangpajouh, Neda; Shahsanaee, Armindokht; Bahreynian, Maryam; Mirshamsi, Mehran; Kelishadi, Roya
2014-01-01
Objectives: This study aims to assess the effects of zinc supplementation on improving the appetite and its subscales in children. Methods: This study was conducted in 2013 in Isfahan, Iran. It had two phases. At the first step, after validation of the Child Eating Behaviour Questionnaire (CEBQ), it was completed for 300 preschool children, who were randomly selected. The second phase was conducted as a randomized controlled trial. Eighty of these children were randomly selected, and were randomly assigned to two groups of equal number receiving zinc (10 mg/day) or placebo for 12 weeks. Results: Overall 77 children completed the trial (39 in the case and 38 in the control group). The results showed that zinc supplementation can improve calorie intake in children by affecting some CEBQ subscales such as Emotional Overeating and Food Responsiveness. Conclusion: Zinc supplementation had a positive impact in promoting calorie intake and some subscales of anorexia. PMID:25674110
NASA Astrophysics Data System (ADS)
Zhou, Yu-Qian; Gao, Fei; Li, Dan-Dan; Li, Xin-Hui; Wen, Qiao-Yan
2016-09-01
We have proved that new randomness can be certified by partially free sources using the 2→1 quantum random access code (QRAC) in the framework of semi-device-independent (SDI) protocols [Y.-Q. Zhou, H.-W. Li, Y.-K. Wang, D.-D. Li, F. Gao, and Q.-Y. Wen, Phys. Rev. A 92, 022331 (2015), 10.1103/PhysRevA.92.022331]. To improve the effectiveness of the randomness generation, here we propose SDI randomness expansion using the 3→1 QRAC and obtain the corresponding classical and quantum bounds of the two-dimensional quantum witness. Moreover, we get the condition which should be satisfied by the partially free sources to successfully certify new randomness, and the analytic relationship between the certified randomness and the two-dimensional quantum witness violation.
Marcus, Sue M; Stuart, Elizabeth A; Wang, Pei; Shadish, William R; Steiner, Peter M
2012-06-01
Although randomized studies have high internal validity, generalizability of the estimated causal effect from randomized clinical trials to real-world clinical or educational practice may be limited. We consider the implication of randomized assignment to treatment, as compared with choice of preferred treatment as it occurs in real-world conditions. Compliance, engagement, or motivation may be better with a preferred treatment, and this can complicate the generalizability of results from randomized trials. The doubly randomized preference trial (DRPT) is a hybrid randomized and nonrandomized design that allows for estimation of the causal effect of randomization versus treatment preference. In the DRPT, individuals are first randomized to either randomized assignment or choice assignment. Those in the randomized assignment group are then randomized to treatment or control, and those in the choice group receive their preference of treatment versus control. Using the potential outcomes framework, we apply the algebra of conditional independence to show how the DRPT can be used to derive an unbiased estimate of the causal effect of randomization versus preference for each of the treatment and comparison conditions. Also, we show how these results can be implemented using full matching on the propensity score. The methodology is illustrated with a DRPT of introductory psychology students who were randomized to randomized assignment or preference of mathematics versus vocabulary training. We found a small to moderate benefit of preference versus randomization with respect to the mathematics outcome for those who received mathematics training.
True random number generator based on discretized encoding of the time interval between photons.
Li, Shen; Wang, Long; Wu, Ling-An; Ma, Hai-Qiang; Zhai, Guang-Jie
2013-01-01
We propose an approach to generate true random number sequences based on the discretized encoding of the time interval between photons. The method is simple and efficient, and can produce a highly random sequence several times longer than that of other methods based on threshold or parity selection, without the need for hashing. A proof-of-principle experiment has been performed, showing that the system could be easily integrated and applied to quantum cryptography and other fields. PMID:23456008
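The core encoding idea, turning the time interval between photon detections into several bits per event, can be sketched in simulation. The exponential inter-arrival model, unit detection rate, and 4-bit encoding depth below are assumptions for illustration, not the paper's experimental parameters:

```python
import math
import random

random.seed(1)
M = 4  # bits extracted per photon interval (assumed encoding depth)

def interval_to_bits(t, rate=1.0, m=M):
    """Discretize an exponential inter-arrival time t into m bits.

    The CDF value u = 1 - exp(-rate*t) is uniform on [0,1) for an
    exponential source, so its first m binary digits give m unbiased bits,
    i.e. a discretized encoding of the time interval."""
    u = 1.0 - math.exp(-rate * t)
    k = min(int(u * (1 << m)), (1 << m) - 1)  # equiprobable bin index in [0, 2^m)
    return [(k >> i) & 1 for i in reversed(range(m))]

# Simulated photon inter-arrival times (exponential, unit rate).
bits = []
for _ in range(5000):
    bits.extend(interval_to_bits(random.expovariate(1.0)))

ones_fraction = sum(bits) / len(bits)
```

Because each interval yields several bits, the output sequence is several times longer than one bit per event, which mirrors the paper's advantage over threshold or parity selection.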
Strenge, Hans; Rogge, Carolin
2010-04-01
The effects of different instructions on verbal random number generation were examined in 40 healthy students who attempted to generate random sequences of the digits 1 to 6. Two groups of 20 received different instructions with alternative numerical representations. The Symbolic group (Arabic digits) was instructed to randomize while continuously using the analogy of selecting and replacing numbered balls from a hat, whereas the Nonsymbolic group (arrays of dots) was instructed to imagine repeatedly throwing a die. Participants asked for self-reports on their strategies reported spontaneously occurring visuospatial imagination of a mental number line (42%), or imagining throwing a die (23%). Individual number representation was not affected by the initial instruction. There were no differences in randomization performance by group. Comprehensive understanding of the nature of the randomization task requires considering individual differences in construction of mental models. PMID:20499555
Doing better by getting worse: posthypnotic amnesia improves random number generation.
Terhune, Devin Blair; Brugger, Peter
2011-01-01
Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible and low dissociative and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.
Matter-wave analog of an optical random laser
Plodzien, Marcin; Sacha, Krzysztof
2011-08-15
The accumulation of atoms in the lowest energy level of a trap and the subsequent outcoupling of these atoms is a realization of a matter-wave analog of a conventional optical laser. Optical random lasers require materials that provide optical gain, but, in contrast to conventional lasers, the modes are determined by multiple scattering and not a cavity. We show that a Bose-Einstein condensate can be loaded into a spatially correlated disorder potential prepared in such a way that the Anderson localization phenomenon operates as a bandpass filter. A multiple scattering process selects atoms with certain momenta and determines the laser modes, which constitutes a matter-wave analog of an optical random laser.
Non-local MRI denoising using random sampling.
Hu, Jinrong; Zhou, Jiliu; Wu, Xi
2016-09-01
In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. In contrast to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, the structure tensor, which encapsulates high-order information, is introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrate that the proposed SNLM method strikes a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM removes noise as effectively as full NLM, while the running time is reduced to 1/20 that of NLM. PMID:27114338
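The random-sampling idea can be sketched on a 2D toy image rather than a 3D MRI volume: for each pixel, similarity weights are computed over a random subset of the search window instead of the full window. The patch/search sizes, filtering parameter h, and sampling ratio below are illustrative assumptions, and the structure-tensor-guided sampling of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def snlm_denoise(img, patch=1, search=5, h=0.5, ratio=0.2):
    """Toy random-sampling non-local means on a 2D array.

    For each pixel, average the centers of randomly sampled candidate
    patches in the search window, weighted by patch similarity."""
    pad = patch + search
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    n_samples = max(1, int(ratio * (2 * search + 1) ** 2))  # sampled subset size
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            # Random offsets within the search window (the "S" in SNLM).
            offs = rng.integers(-search, search + 1, size=(n_samples, 2))
            wsum, acc = 0.0, 0.0
            for di, dj in offs:
                cand = padded[ci + di - patch:ci + di + patch + 1,
                              cj + dj - patch:cj + dj + patch + 1]
                w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                wsum += w
                acc += w * padded[ci + di, cj + dj]
            out[i, j] = acc / wsum
    return out

noisy = np.ones((16, 16)) + rng.normal(0, 0.2, size=(16, 16))
denoised = snlm_denoise(noisy)
```

On this flat test image the weighted averaging over the random subset reduces the noise standard deviation, while touching only a fraction of the window per pixel.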
Structure damage detection based on random forest recursive feature elimination
NASA Astrophysics Data System (ADS)
Zhou, Qifeng; Zhou, Hao; Zhou, Qingqing; Yang, Fan; Luo, Linkai
2014-05-01
Feature extraction is a key preliminary step in structural damage detection. In this paper, a structural damage detection method based on wavelet packet decomposition (WPD) and random forest recursive feature elimination (RF-RFE) is proposed. In order to obtain the most effective feature subset and improve identification accuracy, a two-stage feature selection method is adopted after WPD. First, the damage features are ranked according to the original random forest variable importance analysis. Second, RF-RFE is used to eliminate the least important feature and reorder the feature list at each iteration, producing a new feature importance sequence. Finally, the k-nearest neighbor (KNN) algorithm, as a benchmark classifier, is used to evaluate the extracted feature subset. A four-storey steel shear building model is chosen as an example for method verification. The experimental results show that the smaller feature subset obtained by the proposed method achieves higher identification accuracy and reduces detection time.
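The recursive-elimination loop can be sketched in pure NumPy. A simple class-separation score stands in for random-forest variable importance here, and the synthetic data, dimensions, and score are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def importance(X, y):
    """Stand-in for random-forest variable importance: absolute difference of
    class-conditional feature means, scaled by overall feature std."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return np.abs(m0 - m1) / (X.std(axis=0) + 1e-12)

def rfe(X, y, keep):
    """Recursive feature elimination: drop the least important feature and
    re-rank the remaining features at each iteration (the RF-RFE pattern)."""
    active = list(range(X.shape[1]))
    while len(active) > keep:
        imp = importance(X[:, active], y)
        active.pop(int(np.argmin(imp)))   # eliminate the weakest feature
    return active

# Synthetic data: features 0 and 1 carry the class signal, the rest are noise.
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 1.5 * y

selected = rfe(X, y, keep=2)
```

The re-ranking at every iteration is what distinguishes RFE from a one-shot ranking: a feature's importance can change once correlated competitors are removed.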
Random geometric graph description of connectedness percolation in rod systems
NASA Astrophysics Data System (ADS)
Chatterjee, Avik P.; Grimaldi, Claudio
2015-09-01
The problem of continuum percolation in dispersions of rods is reformulated in terms of weighted random geometric graphs. Nodes (or sites or vertices) in the graph represent spatial locations occupied by the centers of the rods. The probability that an edge (or link) connects any randomly selected pair of nodes depends upon the rod volume fraction as well as the distribution over their sizes and shapes, and also upon quantities that characterize their state of dispersion (such as the orientational distribution function). We employ the observation that contributions from closed loops of connected rods are negligible in the limit of large aspect ratios to obtain percolation thresholds that are fully equivalent to those calculated within the second-virial approximation of the connectedness Ornstein-Zernike equation. Our formulation can account for effects due to interactions between the rods, and many-body features can be partially addressed by suitable choices for the edge probabilities.
Wills, C; Miller, C
1976-02-01
It is shown, through theory and computer simulations of outbreeding Mendelian populations, that there may be conditions under which a balance is struck between two factors. The first is the advantage of random assortment, which will, when multilocus selection is for intermediate equilibrium values, lead to higher average heterozygosity than when linkage is introduced. There is some indication that random assortment is also advantageous when selection is toward a uniform distribution of equilibrium values. The second factor is the advantage of linkage between loci having positive epistatic interactions. When multilocus selection is for a bimodal distribution of equilibrium values, an early advantage of random assortment is replaced by a later disadvantage. Linkage disequilibrium, which in finite populations is increased only by random or selective sampling, may hinder the movement of alleles to their selective equilibria, thus leading to the advantage of random assortment. Some consequences of this approach to the structure of natural populations are discussed.
SERF: in vitro selection of random RNA fragments to identify protein binding sites within large RNAs.
Stelzl, U; Nierhaus, K H
2001-11-01
In vitro selection experiments have various goals depending on the composition of the initial pool and the selection method applied. We developed an in vitro selection variant (SERF, selection of random RNA fragments) that is useful for the identification of short RNA fragments originating from large RNAs that bind specifically to a protein. A pool of randomly fragmented RNA is constructed from a large RNA, which is the natural binding partner for a protein. Such a pool contains all the potential binding sites and is therefore used as starting material for affinity selection with the purified protein to find its natural target. Here we provide a detailed experimental protocol of the method. SERF has been developed for ribosomal systems and is a general approach providing a basis for functional and structural characterization of RNA-protein interactions in large ribonucleoprotein particles.
Quantifying consistent individual differences in habitat selection.
Leclerc, Martin; Vander Wal, Eric; Zedrosser, Andreas; Swenson, Jon E; Kindberg, Jonas; Pelletier, Fanie
2016-03-01
Habitat selection is a fundamental behaviour that links individuals to the resources required for survival and reproduction. Although natural selection acts on an individual's phenotype, research on habitat selection often pools inter-individual patterns to provide inferences on the population scale. Here, we expanded a traditional approach of quantifying habitat selection at the individual level to explore the potential for consistent individual differences of habitat selection. We used random coefficients in resource selection functions (RSFs) and repeatability estimates to test for variability in habitat selection. We applied our method to a detailed dataset of GPS relocations of brown bears (Ursus arctos) taken over a period of 6 years, and assessed whether they displayed repeatable individual differences in habitat selection toward two habitat types: bogs and recent timber-harvest cut blocks. In our analyses, we controlled for the availability of habitat, i.e. the functional response in habitat selection. Repeatability estimates of habitat selection toward bogs and cut blocks were 0.304 and 0.420, respectively. Therefore, 30.4 and 42.0 % of the population-scale habitat selection variability for bogs and cut blocks, respectively, was due to differences among individuals, suggesting that consistent individual variation in habitat selection exists in brown bears. Using simulations, we posit that repeatability values of habitat selection are not related to the value and significance of β estimates in RSFs. Although individual differences in habitat selection could be the results of non-exclusive factors, our results illustrate the evolutionary potential of habitat selection.
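The repeatability estimates reported above are ratios of among-individual variance to total variance. A balanced-design ANOVA sketch of that computation is given below; the simulated selection coefficients and variance components are assumptions, and the paper's actual estimates come from random coefficients in RSFs rather than this simple stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

def repeatability(x):
    """Balanced one-way ANOVA repeatability for x of shape (individuals, repeats).

    R = sigma2_among / (sigma2_among + sigma2_within), with variance
    components recovered from the among- and within-group mean squares."""
    k, n = x.shape
    grand = x.mean()
    msa = n * ((x.mean(axis=1) - grand) ** 2).sum() / (k - 1)       # among-individual MS
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))  # within MS
    s2_among = (msa - msw) / n
    return s2_among / (s2_among + msw)

# Simulated "selection coefficients": 30 individuals, 20 repeated measures each,
# with among-individual sd = 1 and within-individual sd = 1 (true R = 0.5).
ind_means = rng.normal(0, 1.0, size=30)
obs = ind_means[:, None] + rng.normal(0, 1.0, size=(30, 20))
R = repeatability(obs)
```

With equal among- and within-individual variances the true repeatability is 0.5, so the estimate should land near it; values like the paper's 0.304 and 0.420 indicate that a substantial share of habitat-selection variability sits among individuals.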
Randomized Controlled Trials of Add-On Antidepressants in Schizophrenia
Joffe, Grigori; Stenberg, Jan-Henry
2015-01-01
Background: Despite adequate treatment with antipsychotics, a substantial number of patients with schizophrenia demonstrate only suboptimal clinical outcomes. To overcome this challenge, various psychopharmacological combination strategies have been used, including antidepressants added to antipsychotics. Methods: To analyze the efficacy of add-on antidepressants for the treatment of negative, positive, cognitive, depressive, and antipsychotic-induced extrapyramidal symptoms in schizophrenia, published randomized controlled trials assessing the efficacy of adjunctive antidepressants in schizophrenia were reviewed using the following parameters: baseline clinical characteristics and number of patients, their ongoing antipsychotic treatment, dosage of the add-on antidepressants, duration of the trial, efficacy measures, and outcomes. Results: There were 36 randomized controlled trials reported in 41 journal publications (n=1582). The antidepressants used were the selective serotonin reuptake inhibitors, duloxetine, imipramine, mianserin, mirtazapine, nefazodone, reboxetine, trazodone, and bupropion. Mirtazapine and mianserin showed somewhat consistent efficacy for negative symptoms, and both seemed to enhance neurocognition. Trazodone and nefazodone appeared to improve antipsychotic-induced extrapyramidal symptoms. Imipramine and duloxetine tended to improve depressive symptoms. No clear evidence supporting the efficacy of selective serotonin reuptake inhibitors on any clinical domain of schizophrenia was found. Add-on antidepressants did not worsen psychosis. Conclusions: Despite a substantial number of randomized controlled trials, the overall efficacy of add-on antidepressants in schizophrenia remains uncertain, mainly due to methodological issues. Some differences in efficacy on several schizophrenia domains seem, however, to exist and to vary by antidepressant subgroup, plausibly due to differences in mechanisms of action. Antidepressants may not worsen psychosis.
The MCNP5 Random number generator
Brown, F. B.; Nagaya, Y.
2002-01-01
MCNP and other Monte Carlo particle transport codes use random number generators to produce random variates from a uniform distribution on the interval (0,1). These random variates are then used in subsequent sampling from probability distributions to simulate the physical behavior of particles during the transport process. This paper describes the new random number generator developed for MCNP Version 5. The new generator will optionally preserve the exact random sequence of previous versions and is entirely conformant to the Fortran-90 standard, hence completely portable. In addition, skip-ahead algorithms have been implemented to efficiently initialize the generator for new histories, a capability that greatly simplifies parallel algorithms. Further, the precision of the generator has been increased, extending the period by a factor of 10^5. Finally, the new generator has been subjected to 3 different sets of rigorous and extensive statistical tests to verify that it produces a sufficiently random sequence.
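The skip-ahead capability rests on the fact that k steps of a linear congruential generator compose into a single affine map s → G·s + C, computable in O(log k) by repeated squaring. A sketch of that algorithm (the 63-bit multiplier below is used purely as an illustrative placeholder, not necessarily MCNP5's documented parameters):

```python
MULT = 2806196910506780709   # illustrative 63-bit LCG multiplier (placeholder)
ADD = 1                      # illustrative increment
MOD = 1 << 63

def lcg_next(s):
    """One step of the LCG: s' = (g*s + c) mod 2^63."""
    return (MULT * s + ADD) % MOD

def lcg_skip(s, k):
    """Jump ahead k steps in O(log k).

    Maintain the k-step affine map s -> G*s + C, composing in the one-step
    map (g, c) for each set bit of k and squaring (g, c) at each level:
    (g, c) o (g, c) gives g' = g*g and c' = g*c + c."""
    g, c = MULT, ADD
    G, C = 1, 0              # identity map
    while k > 0:
        if k & 1:
            G = (G * g) % MOD
            C = (C * g + c) % MOD
        c = (c * g + c) % MOD   # square the current map: c' = g*c + c
        g = (g * g) % MOD
        k >>= 1
    return (G * s + C) % MOD

# Reference: step the generator 10 times one step at a time.
seq10 = 1
for _ in range(10):
    seq10 = lcg_next(seq10)
```

This is what lets each particle history start at an arbitrary point in the sequence without generating all intermediate values, the property the paper exploits for parallel initialization.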
Dynamic computing random access memory
NASA Astrophysics Data System (ADS)
Traversa, F. L.; Bonani, F.; Pershin, Y. V.; Di Ventra, M.
2014-07-01
The present von Neumann computing paradigm involves a significant amount of information transfer between a central processing unit and memory, with concomitant limitations in the actual execution speed. However, it has been recently argued that a different form of computation, dubbed memcomputing (Di Ventra and Pershin 2013 Nat. Phys. 9 200-2) and inspired by the operation of our brain, can resolve the intrinsic limitations of present day architectures by allowing for computing and storing of information on the same physical platform. Here we show a simple and practical realization of memcomputing that utilizes easy-to-build memcapacitive systems. We name this architecture dynamic computing random access memory (DCRAM). We show that DCRAM provides massively-parallel and polymorphic digital logic, namely it allows for different logic operations with the same architecture, by varying only the control signals. In addition, by taking into account realistic parameters, its energy expenditures can be as low as a few fJ per operation. DCRAM is fully compatible with CMOS technology, can be realized with current fabrication facilities, and therefore can really serve as an alternative to the present computing technology.
Migration in asymmetric, random environments
NASA Astrophysics Data System (ADS)
Deem, Michael; Wang, Dong
Migration is a key mechanism for expansion of communities. As a population migrates, it experiences a changing environment. In heterogeneous environments, rapid adaption is key to the evolutionary success of the population. In the case of human migration, environmental heterogeneity is naturally asymmetric in the North-South and East-West directions. We here consider migration in random, asymmetric, modularly correlated environments. Knowledge about the environment determines the fitness of each individual. We find that the speed of migration is proportional to the inverse of environmental change, and in particular we find that North-South migration rates are lower than East-West migration rates. Fast communication within the population of pieces of knowledge between individuals, similar to horizontal gene transfer in genetic systems, can help to spread beneficial knowledge among individuals. We show that increased modularity of the relation between knowledge and fitness enhances the rate of evolution. We investigate the relation between optimal information exchange rate and modularity of the dependence of fitness on knowledge. These results for the dependence of migration rate on heterogeneity, asymmetry, and modularity are consistent with existing archaeological facts.
Aggregated Recommendation through Random Forests
2014-01-01
Aggregated recommendation refers to the process of suggesting one kind of items to a group of users. Compared to user-oriented or item-oriented approaches, it is more general and, therefore, more appropriate for cold-start recommendation. In this paper, we propose a random forest approach to create aggregated recommender systems. The approach is used to predict the rating of a group of users to a kind of items. In the preprocessing stage, we merge user, item, and rating information to construct an aggregated decision table, where rating information serves as the decision attribute. We also model the data conversion process corresponding to the new-user, new-item, and both-new problems. In the training stage, a forest is built for the aggregated training set, where each leaf is assigned a distribution of discrete ratings. In the testing stage, we present four prediction approaches to compute evaluation values based on the distribution of each tree. Experimental results on the well-known MovieLens dataset show that the aggregated approach maintains an acceptable level of accuracy. PMID:25180204
Organization of growing random networks
Krapivsky, P. L.; Redner, S.
2001-06-01
The organizational development of growing random networks is investigated. These growing networks are built by adding nodes successively, and linking each to an earlier node of degree k with an attachment probability A_k. When A_k grows more slowly than linearly with k, the number of nodes with k links, N_k(t), decays faster than a power law in k, while for A_k growing faster than linearly in k, a single node emerges which connects to nearly all other nodes. When A_k is asymptotically linear, N_k(t) ~ t k^(-ν), with ν dependent on details of the attachment probability, but in the range 2 < ν < ∞. The combined age and degree distribution of nodes shows that old nodes typically have a large degree. There is also a significant correlation in the degrees of neighboring nodes, so that nodes of similar degree are more likely to be connected. The size distributions of the in and out components of the network with respect to a given node, namely its "descendants" and "ancestors", are also determined. The in component exhibits a robust s^(-2) power-law tail, where s is the component size. The out component has a typical size of order ln t, and it provides basic insights into the genealogy of the network.
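The asymptotically linear kernel A_k = k can be simulated directly to see the growth process and check, for instance, that old nodes typically acquire large degree. The network size and random seed below are arbitrary choices for illustration:

```python
import random

random.seed(0)

def grow_network(n):
    """Grow a network node by node with attachment probability A_k = k.

    Each new node links to an existing node chosen with probability
    proportional to its degree; the 'targets' list holds each node once
    per unit of degree, so a uniform draw from it is a degree-weighted draw."""
    degree = [1, 1]      # seed: a single link between nodes 0 and 1
    targets = [0, 1]
    for new in range(2, n):
        t = random.choice(targets)   # degree-proportional selection
        degree.append(1)
        degree[t] += 1
        targets.extend([t, new])     # keep the multiset in sync with degrees
    return degree

deg = grow_network(5000)
```

Each added node contributes exactly one link, so the degrees always sum to twice the edge count, and the earliest nodes accumulate far higher degree than late arrivals, as the combined age-degree distribution in the paper predicts.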
Hierarchy in directed random networks
NASA Astrophysics Data System (ADS)
Mones, Enys
2013-02-01
In recent years, the theory and application of complex networks have been developing quickly in a remarkable way, due to the increasing amount of data from real systems and the fruitful application of powerful methods used in statistical physics. Many important characteristics of social or biological systems can be described by the study of their underlying structure of interactions. Hierarchy is one of these features that can be formulated in the language of networks. In this paper we present some (qualitative) analytic results on the hierarchical properties of random network models with zero correlations and also investigate, mainly numerically, the effects of different types of correlations. The behavior of the hierarchy is different in the absence and the presence of giant components. We show that the hierarchical structure can be drastically different if there are one-point correlations in the network. We also show numerical results suggesting that the hierarchy does not change monotonically with the correlations and there is an optimal level of nonzero correlations maximizing the level of hierarchy.
Fisher Transformations for Correlations Corrected for Selection and Missing Data.
ERIC Educational Resources Information Center
Mendoza, Jorge L.
1993-01-01
A Fisher Z transformation is developed for the corrected correlation under conditions when the criterion data are missing because of selection on the predictor and when the criterion is missing at random, not because of selection. The two Z transformations were evaluated in a computer simulation and found to be accurate. (SLD)
40 CFR 204.57-2 - Test compressor sample selection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Test compressor sample selection. (a) Compressors comprising the batch sample which are required to be tested pursuant to a test request in accordance with this subpart will be randomly selected from a batch... will be achieved by sequentially numbering all of the compressors in the batch and then using a...
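The procedure the regulation describes, sequentially numbering all units in the batch and then drawing the sample at random, can be sketched with a software random draw standing in for a random number table. The function name, batch size, and sample size are illustrative:

```python
import random

def select_batch_sample(batch_size, sample_size, seed=None):
    """Simple random sampling without replacement from a sequentially
    numbered batch: units are numbered 1..batch_size, then sample_size
    of them are drawn at random (a stand-in for a random number table)."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, batch_size + 1), sample_size))

picked = select_batch_sample(200, 10, seed=42)
```

Sampling without replacement guarantees distinct unit numbers, and seeding makes the selection auditable after the fact.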
40 CFR 204.57-2 - Test compressor sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Test compressor sample selection. (a) Compressors comprising the batch sample which are required to be tested pursuant to a test request in accordance with this subpart will be randomly selected from a batch... will be achieved by sequentially numbering all of the compressors in the batch and then using a...
40 CFR 204.57-2 - Test compressor sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Test compressor sample selection. (a) Compressors comprising the batch sample which are required to be tested pursuant to a test request in accordance with this subpart will be randomly selected from a batch... will be achieved by sequentially numbering all of the compressors in the batch and then using a...
40 CFR 204.57-2 - Test compressor sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Test compressor sample selection. (a) Compressors comprising the batch sample which are required to be tested pursuant to a test request in accordance with this subpart will be randomly selected from a batch... will be achieved by sequentially numbering all of the compressors in the batch and then using a...
40 CFR 204.57-2 - Test compressor sample selection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Test compressor sample selection. (a) Compressors comprising the batch sample which are required to be tested pursuant to a test request in accordance with this subpart will be randomly selected from a batch... will be achieved by sequentially numbering all of the compressors in the batch and then using a...
40 CFR 205.57-2 - Test vehicle sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... sample selection. (a) Vehicles comprising the batch sample which are required to be tested pursuant to a... request from a batch of vehicles of the category or configuration specified in the test request. If the test request specifies that the vehicles comprising the batch sample must be selected randomly,...
Selective Exposure and Retention of Political Advertising: A Regional Comparison.
ERIC Educational Resources Information Center
Surlin, Stuart H.; Gordon, Thomas F.
The results presented in this article are but a portion of the information gathered in a larger survey examining the relative roles of "selective exposure" to and "selective retention" of political advertising during the 1972 presidential election. Random samples in two metropolitan areas in different regions of the country (Atlanta, Ga., n=281;…
Comment on "Bateman in nature: predation on offspring reduces the potential for sexual selection".
Bergeron, P; Martin, A M; Garant, D; Pelletier, F
2013-05-01
Byers and Dunn (Reports, 9 November 2012, p. 802) claimed that predation on offspring reduced the potential for sexual selection in pronghorn. We argue that the potential for sexual selection is not affected by random offspring mortality when relative reproductive success is considered and increases when measured with the opportunity for selection, a metric that describes the potential for selection. PMID:23641093
In vitro selection of optimal DNA substrates for T4 RNA ligase
NASA Technical Reports Server (NTRS)
Harada, Kazuo; Orgel, Leslie E.
1993-01-01
We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 RNA ligase. We find that the ensemble of selected sequences ligated about 10 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, the majority of the selected sequences approximated a well-defined consensus sequence.
All-optical fast random number generator.
Li, Pu; Wang, Yun-Cai; Zhang, Jian-Zhong
2010-09-13
We propose a scheme for an all-optical random number generator (RNG), which consists of an ultra-wide bandwidth (UWB) chaotic laser, an all-optical sampler, and an all-optical comparator. Free from the bandwidth limitations of electronic devices, it can generate 10 Gbit/s random numbers in our simulation. The high-speed bit sequences can pass standard statistical tests for randomness after an all-optical exclusive-or (XOR) operation.
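The role of the XOR stage can be illustrated classically: XORing pairs of independent biased bits reduces the bias (for bias-from-half e, the XOR of two bits has bias 2e²). The bias level and sequence length below are assumptions for illustration, not a model of the optical hardware:

```python
import random

random.seed(7)

def xor_whiten(bits):
    """Pairwise XOR post-processing: combine adjacent raw bits to reduce
    bias, halving the sequence length (a classical stand-in for the
    all-optical XOR stage)."""
    return [bits[i] ^ bits[i + 1] for i in range(0, len(bits) - 1, 2)]

# Biased raw source: P(1) = 0.6, i.e. bias 0.1 from a fair coin.
raw = [1 if random.random() < 0.6 else 0 for _ in range(20000)]
out = xor_whiten(raw)

bias_raw = abs(sum(raw) / len(raw) - 0.5)
bias_out = abs(sum(out) / len(out) - 0.5)
```

For P(1) = 0.6 the XORed stream has P(1) = 2·0.6·0.4 = 0.48, so the bias drops from about 0.1 to about 0.02 at the cost of half the bit rate.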
Nonstationary interference and scattering from random media
Nazikian, R.
1991-12-01
For the small-angle scattering of coherent plane waves from inhomogeneous random media, the three-dimensional mean-square distribution of random fluctuations may be recovered from the interferometric detection of the nonstationary modulational structure of the scattered field. Modulational properties of coherent waves scattered from random media are related to nonlocal correlations in the double-sideband structure of the Fourier transform of the scattering potential. Such correlations may be expressed in terms of a suitably generalized spectral coherence function for analytic fields.
Private randomness expansion with untrusted devices
NASA Astrophysics Data System (ADS)
Colbeck, Roger; Kent, Adrian
2011-03-01
Randomness is an important resource for many applications, from gambling to secure communication. However, guaranteeing that the output from a candidate random source could not have been predicted by an outside party is a challenging task, and many supposedly random sources used today provide no such guarantee. Quantum solutions to this problem exist, for example a device which internally sends a photon through a beamsplitter and observes on which side it emerges, but, presently, such solutions require the user to trust the internal workings of the device. Here, we seek to go beyond this limitation by asking whether randomness can be generated using untrusted devices—even ones created by an adversarial agent—while providing a guarantee that no outside party (including the agent) can predict it. Since this is easily seen to be impossible unless the user has an initially private random string, the task we investigate here is private randomness expansion. We introduce a protocol for private randomness expansion with untrusted devices which is designed to take as input an initially private random string and produce as output a longer private random string. We point out that private randomness expansion protocols are generally vulnerable to attacks that can render the initial string partially insecure, even though that string is used only inside a secure laboratory; our protocol is designed to remove this previously unconsidered vulnerability by privacy amplification. We also discuss extensions of our protocol designed to generate an arbitrarily long random string from a finite initially private random string. The security of these protocols against the most general attacks is left as an open question.
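Privacy amplification, which the protocol uses to remove the vulnerability of the initial string, is commonly implemented with two-universal hashing. A minimal sketch using a binary Toeplitz matrix over GF(2) follows; the Toeplitz construction is a standard choice and an assumption here, not necessarily the authors' exact extractor:

```python
def toeplitz_hash(bits, first_col, first_row):
    """Compress a bit vector by multiplying it with a binary Toeplitz
    matrix over GF(2). Entry T[i][j] = first_col[i-j] if i >= j, else
    first_row[j-i]; output length = len(first_col), input length =
    len(first_row). The matrix itself is the (short) seed."""
    m, n = len(first_col), len(first_row)
    assert len(bits) == n
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            t = first_col[i - j] if i >= j else first_row[j - i]
            acc ^= t & bits[j]
        out.append(acc)
    return out

# Toy sizes: compress 3 partially secret bits into 1 nearly uniform bit.
hashed = toeplitz_hash([1, 1, 1], first_col=[1], first_row=[1, 0, 1])
```

Choosing the output length shorter than the input's certified min-entropy is what makes the result close to uniform from the adversary's viewpoint.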
Some physical applications of random hierarchical matrices
Avetisov, V. A.; Bikulov, A. Kh.; Vasilyev, O. A.; Nechaev, S. K.; Chertovich, A. V.
2009-09-15
The investigation of spectral properties of random block-hierarchical matrices as applied to dynamic and structural characteristics of complex hierarchical systems with disorder is proposed for the first time. Peculiarities of dynamics on random ultrametric energy landscapes are discussed and the statistical properties of scale-free and polyscale (depending on the topological characteristics under investigation) random hierarchical networks (graphs) obtained by multiple mapping are considered.
Inherent randomness of evolving populations.
Harper, Marc
2014-03-01
The entropy rates of the Wright-Fisher process, the Moran process, and generalizations are computed and used to compare these processes and their dependence on standard evolutionary parameters. Entropy rates are measures of variation that depend on both short-run and long-run behavior and allow the relationships between mutation, selection, and population size to be examined. Bounds for the entropy rate are given for the Moran process (independent of population size) and for the Wright-Fisher process (bounded for fixed population size). A generational Moran process is also presented for comparison to the Wright-Fisher process. The results include both analytic derivations and computational extensions.
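For a finite-state ergodic Markov chain, the entropy rate is H = -Σ_i π_i Σ_j p_ij log2 p_ij, where π is the stationary distribution. A small numerical sketch of this standard formula (illustrative, not the paper's code):

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits per step) of an ergodic Markov chain with
    row-stochastic transition matrix P."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    # 0 * log(0) is taken as 0 for zero transition probabilities.
    mask = P > 0
    logP = np.zeros_like(P)
    logP[mask] = np.log2(P[mask])
    return float(-np.sum(pi[:, None] * P * logP))

# Sanity check: a two-state chain with flip probability 0.5 is a fair
# coin each step, so its entropy rate is 1 bit per step.
P = np.array([[0.5, 0.5], [0.5, 0.5]])
```

The evolutionary processes in the paper are Markov chains on population states, so the same formula applies with their (much larger) transition matrices.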
Random packing of spheres in Menger sponge.
Cieśla, Michał; Barbasz, Jakub
2013-06-01
Random packing of spheres inside fractal collectors of dimension 2 < d < 3 is studied numerically using the Random Sequential Adsorption (RSA) algorithm. The paper focuses mainly on measuring the random packing saturation limit. Additionally, scaling properties of density autocorrelations in the obtained packings are analyzed, and the RSA kinetics coefficients are measured. The results allow a phenomenological relation between random packing saturation density and collector dimension to be tested. The simulations, together with previously obtained results, confirm that the known dimensional relations are generally obeyed by systems of non-integer dimension, at least for d < 3.
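The RSA algorithm itself is simple to state: propose a random position, keep the particle only if it overlaps nothing already placed, and repeat. A minimal two-dimensional sketch with equal disks in a flat rectangle (the paper treats spheres in a fractal Menger-sponge collector, which this toy version does not reproduce):

```python
import random

def rsa_disks(width, height, radius, max_attempts=10000, seed=0):
    """Random Sequential Adsorption of equal disks in a rectangle:
    drop centers uniformly at random; accept a disk only if it overlaps
    no previously placed disk; stop after max_attempts trials."""
    rng = random.Random(seed)
    placed = []
    min_sq = (2 * radius) ** 2  # centers must be at least one diameter apart
    for _ in range(max_attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_sq for px, py in placed):
            placed.append((x, y))
    return placed

disks = rsa_disks(10.0, 10.0, 1.0, max_attempts=2000, seed=1)
```

As attempts accumulate, acceptances become rare and the density approaches the saturation (jamming) limit that the paper measures for fractal collectors.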
The Theory of Random Laser Systems
Xunya Jiang
2002-06-27
Studies of random laser systems are a new direction with promising potential applications and theoretical interest. The research is based on the theories of localization and laser physics. So far, the research shows that there are random lasing modes inside the systems that are quite different from those of common laser systems. From the properties of the random lasing modes, one can understand the phenomena observed in experiments, such as multi-peak and anisotropic spectra, lasing-mode number saturation, mode competition, and dynamic processes. To summarize, this dissertation has contributed the following to the study of random laser systems: (1) by comparing the Lamb theory with the Letokhov theory, general formulas for the threshold length or gain of random laser systems were obtained; (2) the vital weakness of previous time-independent methods in random laser research was pointed out; (3) a new model combining the FDTD method and semi-classical laser theory was constructed, whose solutions explained the experimentally observed multi-peak and anisotropic emission spectra and predicted the saturation of the number of lasing modes and the length of localized lasing modes; (4) theoretical (Lamb theory) and numerical (FDTD and transfer-matrix) studies of the origin of localized lasing modes in random laser systems were carried out; and (5) the use of random lasing modes was proposed as a new path to study wave localization in random systems, with a prediction of the lasing-threshold discontinuity at the mobility edge.
Random walks with similar transition probabilities
NASA Astrophysics Data System (ADS)
Schiefermayr, Klaus
2003-04-01
We consider random walks on the nonnegative integers with a possible absorbing state at -1. A random walk X~ is called α-similar to a random walk X if there exist constants C_ij such that the corresponding n-step transition probabilities satisfy P~_ij(n) = α^n C_ij P_ij(n) for all i, j ≥ 0. We give necessary and sufficient conditions for the α-similarity of two random walks, both in terms of the parameters and in terms of the corresponding spectral measures, which appear in the spectral representation of the n-step transition probabilities developed by Karlin and McGregor.
Analysis and experiment of random ball test
NASA Astrophysics Data System (ADS)
Lu, Liming; Wu, Fan; Hou, Xi; Zhang, Can
2012-10-01
Robert E. Parks of the National Institute of Standards and Technology (NIST), USA, first reported the Random Ball Test (RBT), which is used to measure the absolute error of the reference surface of an interferometer. The basic procedure is as follows: first, place the random ball at the confocal position of the interferometer system; then measure the ball's surface and record the result; rotate the ball to another position while ensuring it remains at the confocal position; measure and record the surface again in the new position; repeat this enough times. The mean of the measurement results is the absolute error of the interferometer's reference surface. Since 1998, other scholars have continued Parks's research and created a new variant of the RBT in which the ball is supported by high-pressure airflow, suspended in the air and rotating about its center. Because the ball rotates during the measurement, this technique is called the Dynamic Random Ball Test (DRBT). This article mainly reports an experimental study of the DRBT.
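The averaging principle behind the RBT can be illustrated numerically: each measurement is the fixed reference-surface error plus the ball's own figure error in a random orientation; since the ball contributions are zero-mean over random rotations, the average converges to the reference error at a rate of roughly 1/sqrt(N). A toy sketch with made-up numbers (not DRBT data):

```python
import numpy as np

def random_ball_test(reference_error, ball_error_draws):
    """Average many measurements, each modeled as the reference error
    plus one random draw of the ball's (zero-mean) figure error."""
    measurements = [reference_error + ball for ball in ball_error_draws]
    return np.mean(measurements, axis=0)

rng = np.random.default_rng(42)
ref = np.array([5.0, -3.0, 1.0])                      # toy reference-surface error samples
balls = [rng.normal(0, 1, 3) for _ in range(2000)]    # zero-mean ball error per rotation
estimate = random_ball_test(ref, balls)
# estimate approaches ref as the number of rotations grows
```

The real test averages full interferograms rather than three numbers, but the statistics are the same: residual ball contamination shrinks as 1/sqrt(N).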
Raman mode random lasing in ZnS-β-carotene random gain media
NASA Astrophysics Data System (ADS)
Bingi, Jayachandra; Warrier, Anita R.; Vijayan, C.
2013-06-01
Raman-mode random lasing is demonstrated in a ZnS-β-carotene random gain medium at room temperature. A self-assembled random medium is prepared from ZnS sub-micron spheres synthesized by the homogeneous precipitation method. β-Carotene extracted from pale green leaves is embedded in this random medium. The emission band of the ZnS random medium (on excitation at 488 nm) overlaps considerably with that of β-carotene, which functions as the gain medium. Here, the random medium works as a cavity, leading to Raman-mode lasing at 517 nm and 527 nm triggered by stimulated resonance Raman scattering.
Social Selection and Religiously Selective Faith Schools
ERIC Educational Resources Information Center
Pettinger, Paul
2014-01-01
This article reviews recent research looking at the socio-economic profile of pupils at faith schools and the contribution religiously selective admission arrangements make. It finds that selection by faith leads to greater social segregation and is open to manipulation. It urges that such selection should end, making the state-funded school…
Using histograms to introduce randomization in the generation of ensembles of decision trees
Kamath, Chandrika; Cantu-Paz, Erick; Littau, David
2005-02-22
A system for decision tree ensembles that includes a module to read the data, a module to create a histogram, a module to evaluate a potential split according to some criterion using the histogram, a module to select a split point randomly in an interval around the best split, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method includes the steps of reading the data, creating a histogram, evaluating a potential split according to some criterion using the histogram, selecting a split point randomly in an interval around the best split, splitting the data, and combining multiple decision trees in ensembles.
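The core randomization idea can be sketched as follows; the impurity criterion (Gini), the bin count, and the choice of interval around the best edge are illustrative assumptions, not the patented method's exact specification:

```python
import random
import numpy as np

def random_histogram_split(values, labels, n_bins=16, seed=0):
    """Evaluate candidate splits only at histogram bin edges, find the
    edge minimizing weighted Gini impurity, then return a split point
    drawn uniformly from the interval around that best edge."""
    rng = random.Random(seed)
    edges = np.histogram_bin_edges(values, bins=n_bins)

    def gini(y):
        if len(y) == 0:
            return 0.0
        p = np.mean(y)          # fraction of class 1 (binary labels)
        return 2 * p * (1 - p)

    def score(t):
        left, right = labels[values <= t], labels[values > t]
        return (len(left) * gini(left) + len(right) * gini(right)) / len(labels)

    interior = edges[1:-1]      # interior edges are the candidate splits
    best = min(range(len(interior)), key=lambda i: score(interior[i]))
    lo, hi = edges[best], edges[best + 2]   # neighboring edges bracket the best one
    return rng.uniform(lo, hi)

# Two well-separated clusters: the best edge lies between them, and the
# returned split is a random point near that edge.
values = np.array([0, 1, 2, 3, 10, 11, 12, 13], dtype=float)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
split = random_histogram_split(values, labels)
```

Jittering the split point is what decorrelates the trees of the ensemble, while the histogram keeps each split evaluation cheap.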
Physical randomness sources for loophole-free Bell tests
NASA Astrophysics Data System (ADS)
Mitchell, Morgan W.
2016-05-01
We describe the strategy and physics used to select unpredictable measurement settings in the loophole-free Bell tests reported in [Hensen et al., Nature 2015; Giustina et al., PRL 2015; and Shalm et al., PRL 2015]. We demonstrate direct measurements of laser phase diffusion, a process driven by spontaneous emission; rigorous bounds on the effect of other, less-trusted contributions; and exponential predictability reduction by randomness extraction. As required for the cited experiments, we show that the six-sigma bound on the predictability of the basis choices is below 0.001%. C. Abellan et al., PRL 2015.
Aperture determination of RHIC92 from randomly generated initial coordinates
Dell, G.F.
1992-12-31
Results obtained by tracking 100 particles for 1,000 turns, with initial coordinates selected randomly subject to the requirement that the total emittance be constant, are compared to results from 1,000-turn and 10^6-turn runs in which the initial coordinates satisfy ε_x(i) = ε_y(i) and X_i' = Y_i' = 0. For studies of ten distributions of magnetic field errors, the 100-particle runs give apertures equivalent to those from the 10^6-turn runs, with a considerably narrower aperture distribution, yet require only one tenth the computer time.
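One simple way to draw initial coordinates at fixed total emittance is to split the emittance randomly between the two transverse planes and assign each plane a uniform betatron phase. This is an illustrative sketch only (the abstract does not specify the actual RHIC sampling scheme), with unit beta functions assumed:

```python
import math
import random

def random_initial_coordinates(total_emittance, n_particles, seed=0):
    """Draw (eps_x, eps_y) pairs with eps_x + eps_y = total_emittance,
    then convert each plane's emittance and a uniform phase into
    normalized (x, x', y, y') coordinates (unit beta functions)."""
    rng = random.Random(seed)
    coords = []
    for _ in range(n_particles):
        ex = rng.uniform(0.0, total_emittance)
        ey = total_emittance - ex            # enforce the constraint exactly
        phx, phy = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
        x, xp = math.sqrt(ex) * math.cos(phx), -math.sqrt(ex) * math.sin(phx)
        y, yp = math.sqrt(ey) * math.cos(phy), -math.sqrt(ey) * math.sin(phy)
        coords.append((x, xp, y, yp))
    return coords

orbits = random_initial_coordinates(2.0, 100, seed=1)
```

By construction x^2 + x'^2 + y^2 + y'^2 equals the total emittance for every particle, so the ensemble sweeps the coupling between planes while holding the invariant fixed, as in the 100-particle study.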
Godler, David E; Inaba, Yoshimi; Shi, Elva Z; Skinner, Cindy; Bui, Quang M; Francis, David; Amor, David J; Hopper, John L; Loesch, Danuta Z; Hagerman, Randi J; Schwartz, Charles E; Slater, Howard R
2013-04-15
Methylation of the fragile X-related epigenetic element 2 (FREE2), located on the exon 1/intron 1 boundary of the FMR1 gene, is related to FMRP expression and cognitive impairment in full mutation (FM; CGG > 200) individuals. We examined the relationship between age, the size of the FMR1 CGG expansion, and the methylation output ratio (MOR) at 12 CpG sites proximal to the exon 1/intron 1 boundary using FREE2 MALDI-TOF MS. The patient cohort included 119 males and 368 females: 121 healthy controls (CGG < 40), 176 premutation (CGG 55-170) and 190 FM (CGG 213-2000) individuals. For all CpG units examined, FM males showed a significantly elevated MOR compared with that in hypermethylated FM females. In FM males the MOR for most CpG units was significantly positively correlated with both age and CGG size (P < 0.05). In FM females the skewing towards the unmethylated state was significant for half of the units between birth and puberty (P < 0.05). The methylation status of intron 1 CpG10-12, which was most significantly related to cognitive impairment in our earlier study, did not change significantly with age in FM females. These results challenge the concept of fragile X syndrome (FXS)-related methylation being static over time and suggest that, owing to the preference for the unmethylated state in FM females, X-inactivation at this locus is not random. The findings also highlight that the prognostic value of FXS methylation testing is not uniform across CpG sites and thus may need to be evaluated on a site-by-site basis.
Realization of high performance random laser diodes
NASA Astrophysics Data System (ADS)
Yu, S. F.
2011-03-01
For the past four decades, extensive studies have concentrated on understanding the physics of random lasing in scattering media with optical gain. Although lasing modes can be excited in mirrorless scattering media, high scattering loss, multi-directional emission, and multi-mode oscillation have prevented their use as practical laser cavities. Furthermore, because high optical gain is difficult to achieve under electrical excitation, electrically excited random lasing has seldom been reported. Hence, mirrorless random cavities have never been used to realize lasers for practical applications (CD, DVD, pico-projectors, etc.), and studies of random lasing remain limited to scientific research. Recently, the difficulty of achieving 'battery-driven' random laser diodes has been overcome by using nano-structured ZnO as the random medium together with careful design of the heterojunctions. This led to the first demonstration of room-temperature, electrically pumped random lasing under continuous-wave and pulsed operation. In this presentation, we propose to realize an array of quasi-one-dimensional ZnO random laser diodes. We show that if the array can be manipulated so that each individual random laser couples laterally to, and locks with a particular phase relationship to, its adjacent neighbor, the array can achieve coherent addition of random modes. Hence the output power can be multiplied, and only one lasing mode will be supported owing to the repulsion characteristics of random modes. This work was supported by HK PolyU grant no. 1-ZV6X.
ERIC Educational Resources Information Center
Gilbert, Harry B.; Lang, Gerhard
This report of a two-day conference on teacher selection methods, attended by 45 experts in the field, contains 13 position papers dealing with (1) personnel selection in non-teaching fields, (2) problems in teacher selection, recruitment, and validation of selection procedures, and (3) needed research in teacher selection. Also conference…
A random interacting network model for complex networks.
Goswami, Bedartha; Shekatkar, Snehal M; Rheinwalt, Aljoscha; Ambika, G; Kurths, Jürgen
2015-01-01
We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between the selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function, and node linkage is based on a linkage probability defined on the linkage scores of the nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility of testing multiple hypotheses regarding the mechanisms underlying network interactions, and it can incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can potentially be adapted to various real-world complex systems. PMID:26657032
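The two RAIN steps can be sketched in toy form. The specific fitness function (degree-proportional selection) and linkage score (closeness of normalized degrees) below are illustrative assumptions, not the authors' exact definitions:

```python
import random

def rain_link(net_a_degrees, net_b_degrees, n_links, seed=0):
    """Toy sketch of the two RAIN steps: (i) pick one node from each
    network with probability proportional to degree (preferential
    selection), then (ii) accept the inter-network link with probability
    given by the similarity of the nodes' normalized degrees
    (degree-assortative placement)."""
    rng = random.Random(seed)
    nodes_a, nodes_b = list(net_a_degrees), list(net_b_degrees)
    max_a, max_b = max(net_a_degrees.values()), max(net_b_degrees.values())
    links = set()
    while len(links) < n_links:
        a = rng.choices(nodes_a, weights=[net_a_degrees[v] for v in nodes_a])[0]
        b = rng.choices(nodes_b, weights=[net_b_degrees[v] for v in nodes_b])[0]
        similarity = 1.0 - abs(net_a_degrees[a] / max_a - net_b_degrees[b] / max_b)
        if rng.random() < similarity:   # assortative linkage probability
            links.add((a, b))
    return links

# Hypothetical mini-networks standing in for the USA and Schengen ATNs.
deg_usa = {"ATL": 3, "JFK": 2, "LAX": 1}
deg_schengen = {"CDG": 3, "FRA": 2, "AMS": 1}
links = rain_link(deg_usa, deg_schengen, n_links=2)
```

Because high-degree nodes are both selected more often and matched to similarly ranked partners, hubs in one network tend to connect to hubs in the other, reproducing the assortative inter-network structure the paper reports.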