Science.gov

Sample records for age-stratified random sample

  1. Randomization and sampling issues

    USGS Publications Warehouse

    Geissler, P.H.

    1996-01-01

    The need for randomly selected routes, and other sampling issues, have been debated by the Amphibian electronic discussion group. Many excellent comments have been made, pro and con, but we have not yet reached consensus. This paper brings those comments together and attempts a synthesis. I hope that the resulting discussion will bring us closer to a consensus.

  2. Work analysis by random sampling.

    PubMed Central

    Divilbiss, J L; Self, P C

    1978-01-01

    Random sampling of work activities using an electronic random alarm mechanism provided a simple and effective way to determine how time was divided between various activities. At each random alarm the subject simply recorded the time and the activity. Analysis of the data led to reassignment of staff functions and also resulted in additional support for certain critical activities. PMID:626793
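
    The mechanics of this approach are easy to reproduce. The sketch below is a minimal illustration assuming a hypothetical three-activity schedule (not the study's library data): activities observed at uniformly random alarm times are tallied, and the tallied fractions estimate the true time fractions.

      import random
      from collections import Counter

      # Hypothetical schedule: true fraction of the day spent on each activity.
      TRUE_FRACTIONS = {"reference desk": 0.5, "cataloging": 0.3, "administration": 0.2}

      def activity_at(t):
          """Map a time t in [0, 1) to an activity under the assumed schedule."""
          cum = 0.0
          for name, frac in TRUE_FRACTIONS.items():
              cum += frac
              if t < cum:
                  return name
          return name                      # guard for round-off near t = 1.0

      random.seed(1)
      alarms = [random.random() for _ in range(200)]   # 200 random alarm times
      tally = Counter(activity_at(t) for t in alarms)
      for name, count in tally.most_common():
          print(f"{name}: estimated fraction {count / len(alarms):.2f}")

    With 200 alarms the estimated fractions typically land within a few percentage points of the true schedule, which is the precision trade-off the method accepts in exchange for its simplicity.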

  3. Quantifying errors without random sampling

    PubMed Central

    Phillips, Carl V; LaPole, Luwanna M

    2003-01-01

    Background All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. Discussion We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Summary Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research. PMID:12892568
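
    The Monte Carlo approach the authors describe is straightforward with modern tooling. The sketch below is a minimal illustration with invented inputs (a reported case count with counting error and an uncertain underreporting multiplier; the specific numbers are not from the paper): each error source gets a distribution, and percentiles of the combined simulation quantify the overall uncertainty.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      # Hypothetical inputs (illustrative only, not the paper's values).
      reported = rng.normal(50_000, 5_000, n)                 # counting/reporting error
      multiplier = rng.lognormal(mean=np.log(38), sigma=0.25, size=n)  # true cases per reported case
      incidence = reported * multiplier
      lo, mid, hi = np.percentile(incidence, [2.5, 50, 97.5])
      print(f"annual incidence ~ {mid:.3g} (95% uncertainty interval {lo:.3g} to {hi:.3g})")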

  4. Parametric models for samples of random functions

    SciTech Connect

    Grigoriu, M.

    2015-09-15

    A new class of parametric models, referred to as sample parametric models, is developed for random elements that match samples rather than the first two moments and/or other global properties of these elements. The models can be used to characterize, e.g., material properties at small scale in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to efficiently solve stochastic equations.

  5. Reduction of display artifacts by random sampling

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.

    1983-01-01

    The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.

  6. Sampled-Data Consensus Over Random Networks

    NASA Astrophysics Data System (ADS)

    Wu, Junfeng; Meng, Ziyang; Yang, Tao; Shi, Guodong; Johansson, Karl Henrik

    2016-09-01

    This paper considers the consensus problem for a network of nodes with random interactions and sampled-data control actions. We first show that consensus in expectation, in mean square, and almost surely are equivalent for a general random network model when the inter-sampling interval and network size satisfy a simple relation. The three types of consensus are shown to be simultaneously achieved over an independent or a Markovian random network defined on an underlying graph with a directed spanning tree. For both independent and Markovian random network models, necessary and sufficient conditions for mean-square consensus are derived in terms of the spectral radius of the corresponding state transition matrix. These conditions are then interpreted as the existence of a critical value of the inter-sampling interval, below which global mean-square consensus is achieved and above which the system diverges in the mean-square sense for some initial states. Finally, we establish an upper bound on the inter-sampling interval below which almost sure consensus is reached, and a lower bound on the inter-sampling interval above which almost sure divergence is reached. Some numerical simulations are given to validate the theoretical results, and some discussion of the critical value of the inter-sampling interval for mean-square consensus is provided.

  7. Acceptance sampling using judgmental and randomly selected samples

    SciTech Connect

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.

  8. Contact allergy to topical medicaments becomes more common with advancing age: an age-stratified study.

    PubMed

    Green, Carl M; Holden, Catherine R; Gawkrodger, David J

    2007-04-01

    Eczema is common in elderly people, who often use topical medicaments. Previous studies in elderly people have noted allergic positive patch tests in between 43% and 64% of those tested. We set out to assess whether medicament contact allergies are more common in elderly patients. We undertook a retrospective age-stratified study of all patients patch tested at the Royal Hallamshire Hospital, Sheffield, between January 1994 and July 2005. We confirmed that contact allergy to topical medicaments is more common in those aged more than 70 years than in the younger age groups. There was no sex difference. The commonest problematic allergen types found in medicaments were fragrances and preservatives. The most frequent individual allergens were fragrance mix, Myroxylon pereirae, lanolins, local anaesthetic agents, neomycin and gentamicin, and tixocortol pivalate. The pattern of medicament contact allergens was similar to that of the younger age groups, except that multiple allergic positives were more frequent and sensitivities to local anaesthetics and Myroxylon pereirae were proportionally more common. Elderly patients were more likely to have multiple contact allergies than younger ones. Care needs to be taken when prescribing topical medicaments to elderly patients with eczema, especially preparations that contain perfumes, lanolins, or local anaesthetics.

  9. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
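
    A minimal sketch of the sampling scheme as described (the Gaussian falloff and all parameter values are assumptions for illustration, not the authors' exact protocol): each measurement centers on a uniformly chosen pixel, and nearby pixels are included with a probability that decays with distance from the center.

      import numpy as np

      def localized_random_mask(shape, n_centers, radius, rng):
          """Pick random center pixels; include nearby pixels with probability
          decaying in distance (Gaussian falloff assumed for illustration)."""
          mask = np.zeros(shape, dtype=bool)
          rows, cols = np.indices(shape)
          for _ in range(n_centers):
              ci, cj = rng.integers(shape[0]), rng.integers(shape[1])
              d2 = (rows - ci) ** 2 + (cols - cj) ** 2
              prob = np.exp(-d2 / (2 * radius ** 2))     # inclusion probability
              mask |= rng.random(shape) < prob
          return mask

      rng = np.random.default_rng(42)
      mask = localized_random_mask((64, 64), n_centers=30, radius=2.0, rng=rng)
      print(f"measuring {mask.mean():.1%} of pixels")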

  10. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    NASA Astrophysics Data System (ADS)

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-08-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.

  11. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling.

    PubMed

    Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464

  12. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling.

    PubMed

    Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-08-24

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.

  13. Code System to Generate Latin Hypercube and Random Samples.

    1999-02-25

    Version: 00 LHS was written for the generation of multivariate samples, either completely at random or by a constrained randomization termed Latin hypercube sampling (LHS). The generation of these samples is based on user-specified parameters which dictate the characteristics of the generated samples, such as type of sample (LHS or random), sample size, number of samples desired, correlation structure on input variables, and type of distribution specified on each variable. The following distributions are built into the program: normal, lognormal, uniform, loguniform, triangular, and beta. In addition, the samples from the uniform and loguniform distributions may be modified by changing the frequency of the sampling within subintervals, and a subroutine which can be modified by the user to generate samples from other distributions (including empirical data) is provided.

  14. Code System to Generate Latin Hypercube and Random Samples.

    SciTech Connect

    IMAN, RONALD L.

    1999-02-25

    Version: 00 LHS was written for the generation of multivariate samples, either completely at random or by a constrained randomization termed Latin hypercube sampling (LHS). The generation of these samples is based on user-specified parameters which dictate the characteristics of the generated samples, such as type of sample (LHS or random), sample size, number of samples desired, correlation structure on input variables, and type of distribution specified on each variable. The following distributions are built into the program: normal, lognormal, uniform, loguniform, triangular, and beta. In addition, the samples from the uniform and loguniform distributions may be modified by changing the frequency of the sampling within subintervals, and a subroutine which can be modified by the user to generate samples from other distributions (including empirical data) is provided.
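
    A modern equivalent of the core functionality is available in SciPy (scipy.stats.qmc, SciPy >= 1.7); the sketch below is not the original Fortran code system, only an illustration of the same idea. Each dimension is stratified into n equal-probability intervals, and user-specified marginal distributions are imposed on the unit-cube sample via inverse CDFs or rescaling.

      import numpy as np
      from scipy.stats import norm, qmc

      sampler = qmc.LatinHypercube(d=2, seed=7)
      u = sampler.random(n=10)              # 10 points, each dimension stratified
      # Impose marginal distributions on the unit-cube sample:
      x_normal = norm.ppf(u[:, 0], loc=0.0, scale=1.0)                   # standard normal marginal
      x_uniform = qmc.scale(u[:, [1]], l_bounds=[2.0], u_bounds=[5.0])   # uniform on [2, 5]
      print(np.column_stack([x_normal, x_uniform.ravel()]))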

  15. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  16. A random spatial sampling method in a rural developing nation

    PubMed Central

    2014-01-01

    Background Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. Methods We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. Results This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Conclusions Random spatial sampling methodology can be used to survey a random sample of the population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available. PMID:24716473

  17. Adaptive importance sampling of random walks on continuous state spaces

    SciTech Connect

    Baggerly, K.; Cox, D.; Picard, R.

    1998-11-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  18. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  19. Investigation of spectral analysis techniques for randomly sampled velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, Dave

    1993-01-01

    It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scale information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, on how high in frequency an accurate estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of the turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates. The pre-filtering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable

  20. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while giving competitive denoising results. Moreover, a structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338
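
    A minimal single-pixel 2D sketch of the random-subset idea (all parameters, the uniform subset sampling, and the border assumption are simplifications; the paper's structure-tensor-guided sampling and full 3D implementation are omitted):

      import numpy as np

      def snlm_pixel(img, i, j, patch=1, search=7, ratio=0.05, h=0.1, rng=None):
          """Estimate one denoised pixel by averaging over a random subset of
          the search window instead of every location (the core SNLM idea).
          Assumes (i, j) lies far enough from the image border."""
          rng = rng or np.random.default_rng()
          half = search // 2
          p0 = img[i - patch:i + patch + 1, j - patch:j + patch + 1]
          window = [(a, b) for a in range(i - half, i + half + 1)
                    for b in range(j - half, j + half + 1)]
          k = max(1, int(ratio * len(window)))          # sampled subset size
          num = den = 0.0
          for t in rng.choice(len(window), size=k, replace=False):
              a, b = window[t]
              q = img[a - patch:a + patch + 1, b - patch:b + patch + 1]
              w = np.exp(-np.mean((p0 - q) ** 2) / h ** 2)   # similarity weight
              num += w * img[a, b]
              den += w
          return num / den

      rng = np.random.default_rng(0)
      noisy = rng.random((32, 32))
      print(snlm_pixel(noisy, 16, 16, rng=rng))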

  1. Selecting a Sample for Your Experiment: A Non-Random Stratified Sampling Approach

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2012-01-01

    The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…

  2. On random sample size, ignorability, ancillarity, completeness, separability, and degeneracy: sequential trials, random sample sizes, and missing data.

    PubMed

    Molenberghs, Geert; Kenward, Michael G; Aerts, Marc; Verbeke, Geert; Tsiatis, Anastasios A; Davidian, Marie; Rizopoulos, Dimitris

    2014-02-01

    The vast majority of settings for which frequentist statistical properties are derived assume a fixed, a priori known sample size. Familiar properties then follow, such as, for example, the consistency, asymptotic normality, and efficiency of the sample average for the mean parameter, under a wide range of conditions. We are concerned here with the alternative situation in which the sample size is itself a random variable which may depend on the data being collected. Further, the rule governing this may be deterministic or probabilistic. There are many important practical examples of such settings, including missing data, sequential trials, and informative cluster size. It is well known that special issues can arise when evaluating the properties of statistical procedures under such sampling schemes, and much has been written about specific areas (Grambsch P. Sequential sampling based on the observed Fisher information to guarantee the accuracy of the maximum likelihood estimator. Ann Stat 1983; 11: 68-77; Barndorff-Nielsen O and Cox DR. The effect of sampling rules on likelihood statistics. Int Stat Rev 1984; 52: 309-326). Our aim is to place these various related examples into a single framework derived from the joint modeling of the outcomes and sampling process and so derive generic results that in turn provide insight, and in some cases practical consequences, for different settings. It is shown that, even in the simplest case of estimating a mean, some of the results appear counterintuitive. In many examples, the sample average may exhibit small sample bias and, even when it is unbiased, may not be optimal. Indeed, there may be no minimum variance unbiased estimator for the mean. Such results follow directly from key attributes such as non-ancillarity of the sample size and incompleteness of the minimal sufficient statistic of the sample size and sample sum. Although our results have direct and obvious implications for estimation following group sequential

  3. The Ross classification for heart failure in children after 25 years: a review and an age-stratified revision.

    PubMed

    Ross, Robert D

    2012-12-01

    Accurate grading of the presence and severity of heart failure (HF) signs and symptoms in infants and children remains challenging. It has been 25 years since the Ross classification was first used for this purpose. Since then, several modifications of the system have been used and others proposed. New evidence has shown that in addition to signs and symptoms, data from echocardiography, exercise testing, and biomarkers such as N-terminal pro-brain natriuretic peptide (NT-proBNP) all are useful in stratifying outcomes for children with HF. It also is apparent that grading of signs and symptoms in children is dependent on age because infants manifest HF differently than toddlers and older children. This review culminates in a proposed new age-based Ross classification for HF in children that incorporates the most useful data from the last two decades. Testing of this new system will be important to determine whether an age-stratified scoring system can unify the way communication of HF severity and research on HF in children is performed in the future.

  4. A New GP Recombination Method Using Random Tree Sampling

    NASA Astrophysics Data System (ADS)

    Tanji, Makoto; Iba, Hitoshi

    We propose a new program evolution method named PORTS (Program Optimization by Random Tree Sampling), which is motivated by the idea of preservation and control of tree fragments in GP (Genetic Programming). We assume that, to recombine genetic materials efficiently, tree fragments of any size should be preserved into the next generation. PORTS samples tree fragments and concatenates them by traversing and transitioning between promising trees instead of using subtree crossover and mutation. Because the size of a fragment preserved during a generation update follows a geometric distribution, merits of the method are that it is relatively easy to predict the behavior of tree fragments over time and to control the sampling size by changing a single parameter. From experimental results on the RoyalTree, Symbolic Regression, and 6-Multiplexer problems, we observed that the performance of PORTS is competitive with Simple GP. Furthermore, the average node size of optimal solutions obtained by PORTS was smaller than that of Simple GP.

  5. Randomized Sampling for Large Data Applications of SVM

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A

    2012-01-01

    A trend in machine learning is the application of existing algorithms to ever-larger datasets. Support Vector Machines (SVM) have been shown to be very effective, but have been difficult to scale to large-data problems. Some approaches have sought to scale SVM training by approximating and parallelizing the underlying quadratic optimization problem. This paper pursues a different approach. Our algorithm, which we call Sampled SVM, uses an existing SVM training algorithm to create a new SVM training algorithm. It uses randomized data sampling to better extend SVMs to large data applications. Experiments on several datasets show that our method is faster than and comparably accurate to both the original SVM algorithm it is based on and the Cascade SVM, the leading data organization approach for SVMs in the literature. Further, we show that our approach is more amenable to parallelization than Cascade SVM.
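
    A minimal sketch of the underlying intuition, using a one-shot random subsample with scikit-learn (the actual Sampled SVM algorithm composes an existing SVM trainer iteratively; this shows only the naive random-subsampling baseline it builds on, with illustrative sizes):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
      rng = np.random.default_rng(0)
      idx = rng.choice(len(X), size=2_000, replace=False)   # random data sample
      clf = SVC(kernel="rbf").fit(X[idx], y[idx])           # train on the subset only
      print(f"accuracy on the full set: {clf.score(X, y):.3f}")

    Training cost for kernel SVMs grows superlinearly in the number of points, so a 10x reduction in training data can cut training time far more than 10x, often with a modest accuracy penalty.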

  6. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS

    PubMed Central

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2015-01-01

    The growing availability of network data and growing scientific interest in distributed systems have led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses. PMID:26166910

  7. An environmental sampling model for combining judgment and randomly placed samples

    SciTech Connect

    Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.

    2007-08-23

    In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.

  8. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter. It can also model the stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  9. A comparison of methods for representing sparsely sampled random quantities.

    SciTech Connect

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative, i.e., that it only minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

  10. Phase Transitions in Sampling Algorithms and the Underlying Random Structures

    NASA Astrophysics Data System (ADS)

    Randall, Dana

    Sampling algorithms based on Markov chains arise in many areas of computing, engineering and science. The idea is to perform a random walk among the elements of a large state space so that samples chosen from the stationary distribution are useful for the application. In order to get reliable results, we require the chain to be rapidly mixing, or quickly converging to equilibrium. For example, to sample independent sets in a given graph G, the so-called hard-core lattice gas model, we can start at any independent set and repeatedly add or remove a single vertex (if allowed). By defining the transition probabilities of these moves appropriately, we can ensure that the chain will converge to a useful distribution over the state space Ω. For instance, the Gibbs (or Boltzmann) distribution, parameterized by Λ > 0, is defined so that π(I) = Λ^{|I|} / Z, where Z = Σ_{J ∈ Ω} Λ^{|J|} is the normalizing constant known as the partition function. An interesting phenomenon occurs as Λ is varied. For small values of Λ, local Markov chains converge quickly to stationarity, while for large values, they are prohibitively slow. To see why, imagine the underlying graph G is a region of the Cartesian lattice. Large independent sets will dominate the stationary distribution π when Λ is sufficiently large, and yet it will take a very long time to move from an independent set lying mostly on the odd sublattice to one that is mostly even. This phenomenon is well known in the statistical physics community, and is characterized as a phase transition in the underlying model.
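
    A minimal sketch of the local chain described here, as heat-bath (Glauber) dynamics for the hard-core model on an n x n grid; the slow mixing for large Λ discussed above shows up directly as this loop failing to cross between mostly-odd and mostly-even configurations.

      import random

      def sample_independent_set(n, lam, steps, rng):
          """Heat-bath dynamics for the hard-core model on an n x n grid;
          the stationary distribution satisfies pi(I) proportional to lam**|I|."""
          occupied = set()
          def neighbors(v):
              i, j = v
              return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < n and 0 <= j + dj < n]
          for _ in range(steps):
              v = (rng.randrange(n), rng.randrange(n))
              if rng.random() < lam / (1 + lam):
                  # try to occupy v; allowed only if no neighbor is occupied
                  if all(u not in occupied for u in neighbors(v)):
                      occupied.add(v)
              else:
                  occupied.discard(v)       # vacate v
          return occupied

      rng = random.Random(0)
      ind_set = sample_independent_set(20, lam=0.8, steps=200_000, rng=rng)
      print(f"sampled independent set of size {len(ind_set)}")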

  11. Central limit theorem for variable size simple random sampling from a finite population

    SciTech Connect

    Wright, T.

    1986-02-01

    This paper introduces a sampling plan for finite populations herein called "variable size simple random sampling" and compares properties of estimators based on it with results from the usual fixed size simple random sampling without replacement. Necessary and sufficient conditions (in the spirit of Hajek) for the limiting distribution of the sample total (or sample mean) to be normal are given. 19 refs.

  12. USAC: a universal framework for random sample consensus.

    PubMed

    Raguram, Rahul; Chum, Ondrej; Pollefeys, Marc; Matas, Jirí; Frahm, Jan-Michael

    2013-08-01

    A computational problem that arises frequently in computer vision is that of estimating the parameters of a model from data that have been contaminated by noise and outliers. More generally, any practical system that seeks to estimate quantities from noisy data measurements must have at its core some means of dealing with data contamination. The random sample consensus (RANSAC) algorithm is one of the most popular tools for robust estimation. Recent years have seen an explosion of activity in this area, leading to the development of a number of techniques that improve upon the efficiency and robustness of the basic RANSAC algorithm. In this paper, we present a comprehensive overview of recent research in RANSAC-based robust estimation by analyzing and comparing various approaches that have been explored over the years. We provide a common context for this analysis by introducing a new framework for robust estimation, which we call Universal RANSAC (USAC). USAC extends the simple hypothesize-and-verify structure of standard RANSAC to incorporate a number of important practical and computational considerations. In addition, we provide a general-purpose C++ software library that implements the USAC framework by leveraging state-of-the-art algorithms for the various modules. This implementation thus addresses many of the limitations of standard RANSAC within a single unified package. We benchmark the performance of the algorithm on a large collection of estimation problems. The implementation we provide can be used by researchers either as a stand-alone tool for robust estimation or as a benchmark for evaluating new techniques.
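
    A minimal sketch of the basic hypothesize-and-verify loop that RANSAC formalizes, here for robust 2D line fitting (this is standard RANSAC, not the USAC framework or its C++ library; tolerance, iteration count, and data are illustrative):

      import numpy as np

      def ransac_line(pts, iters=500, tol=0.05, rng=None):
          """Draw a minimal 2-point sample, fit y = a*x + b, count inliers;
          keep the hypothesis with the most inliers."""
          rng = rng or np.random.default_rng()
          best, best_inliers = None, 0
          for _ in range(iters):
              (x1, y1), (x2, y2) = pts[rng.choice(len(pts), size=2, replace=False)]
              if x1 == x2:
                  continue                  # degenerate minimal sample
              a = (y2 - y1) / (x2 - x1)
              b = y1 - a * x1
              inliers = int(np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol))
              if inliers > best_inliers:
                  best, best_inliers = (a, b), inliers
          return best, best_inliers

      rng = np.random.default_rng(3)
      x = rng.uniform(0, 1, 200)
      pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.01, 200)])
      pts[:40, 1] = rng.uniform(0, 3, 40)   # 20% gross outliers
      print(ransac_line(pts, rng=rng))      # expect (a, b) near (2, 1)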

  13. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…

  14. Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry

    ERIC Educational Resources Information Center

    Stier, Sam

    2010-01-01

    Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of that population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…

  15. Stratified random sampling plan for an irrigation customer telephone survey

    SciTech Connect

    Johnston, J.W.; Davis, L.J.

    1986-05-01

    This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.

  16. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  17. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    PubMed

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.

  18. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    PubMed

    Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian

    2016-01-01

    Randomized response is a research method to get accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas was assessed in surveys of premarital sex and cheating on exams at Soochow University. The reliability of the survey methods and formulas for sensitive question surveys was found to be high.
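
    For the simplest case (Warner's randomized response model under simple random sampling, not the stratified cluster formulas developed in the paper), the estimator can be written down directly: if each respondent answers the sensitive question with probability p and its negation otherwise, and λ is the observed proportion of "yes" answers, then the prevalence estimate is π̂ = (λ − (1 − p)) / (2p − 1). A sketch with illustrative numbers:

      def warner_estimate(n_yes, n, p):
          """Warner model: each respondent answers the sensitive question with
          probability p and its negation with probability 1 - p."""
          lam = n_yes / n                               # observed 'yes' proportion
          pi_hat = (lam - (1 - p)) / (2 * p - 1)        # prevalence estimate
          var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
          return pi_hat, var ** 0.5

      pi_hat, se = warner_estimate(n_yes=380, n=1000, p=0.7)
      print(f"estimated prevalence: {pi_hat:.3f} (SE {se:.3f})")

    The denominator (2p − 1)² shows the privacy-precision trade-off: p near 0.5 gives respondents more cover but inflates the variance of the estimate.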

  19. Prevalence and Severity of College Student Bereavement Examined in a Randomly Selected Sample

    ERIC Educational Resources Information Center

    Balk, David E.; Walker, Andrea C.; Baker, Ardith

    2010-01-01

    The authors used stratified random sampling to assess the prevalence and severity of bereavement in college undergraduates, providing an advance over findings that emerge from convenience sampling methods or from anecdotal observations. Prior research using convenience sampling indicated that 22% to 30% of college students are within 12 months of…

  20. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or...

  1. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or...

  2. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  3. A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn

    2006-01-01

    A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…

  4. Conflict-cost based random sampling design for parallel MRI with low rank constraints

    NASA Astrophysics Data System (ADS)

    Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie

    2015-05-01

    In compressed sensing MRI, it is very important to design the sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high density region, and the sampling therefore becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.

  5. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    SciTech Connect

    Žerovnik, G.; Trkov, A.; Kodeli, I.A.; Capote, R.; Smith, D.L.

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
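
    A minimal sketch of the transformation-of-correlation-coefficients idea for two correlated lognormal parameters (the means, relative uncertainties, and correlation below are illustrative assumptions; the exact lognormal moment relations are used to recover the parameters of the underlying normal distribution):

      import numpy as np

      mean = np.array([1.0, 2.0])       # illustrative target means
      s = np.array([0.3, 0.5])          # relative standard deviations
      rho = 0.6                         # target lognormal correlation

      # Exact lognormal relations give the underlying normal parameters.
      sig2 = np.log(1 + s ** 2)
      mu = np.log(mean) - sig2 / 2
      # Transformation of correlation coefficients: rho_N follows from
      # Cov(X1, X2) = m1*m2*(exp(rho_N*s1_N*s2_N) - 1) = m1*m2*rho*s1*s2.
      rho_n = np.log(1 + rho * s[0] * s[1]) / np.sqrt(sig2[0] * sig2[1])
      cov = np.array([[sig2[0], rho_n * np.sqrt(sig2[0] * sig2[1])],
                      [rho_n * np.sqrt(sig2[0] * sig2[1]), sig2[1]]])

      rng = np.random.default_rng(0)
      x = np.exp(rng.multivariate_normal(mu, cov, size=100_000))  # positive by construction
      print(x.mean(axis=0), np.corrcoef(x, rowvar=False)[0, 1])   # ~ [1, 2] and ~ 0.6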

  6. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    PubMed

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach. PMID:21406351

  7. Random Sampling of Quantum States: a Survey of Methods. And Some Issues Regarding the Overparametrized Method

    NASA Astrophysics Data System (ADS)

    Maziero, Jonas

    2015-12-01

    The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state to get RQS from random DPDs and random unitary matrices. In the sequel, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices, in a simple way that is friendly for numerical implementation. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Subsequently, an excessively fast concentration of measure in the quantum state space that appears in this parametrization is noted.
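
    The Ginibre technique mentioned above is only a few lines in practice: draw a general complex matrix with i.i.d. complex normal entries and normalize it into a unit-trace positive matrix. A minimal sketch (dimension and seed are illustrative):

      import numpy as np

      def random_density_matrix(d, rng):
          """Ginibre construction: G with i.i.d. complex normal entries gives
          rho = G G^dagger / tr(G G^dagger), a valid random density matrix."""
          g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
          m = g @ g.conj().T                # positive semidefinite
          return m / np.trace(m)            # unit trace

      rng = np.random.default_rng(0)
      rho = random_density_matrix(3, rng)
      print(np.trace(rho).real, np.linalg.eigvalsh(rho))  # trace 1, eigenvalues >= 0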

  8. Plasma Carotenoids, Tocopherols, and Retinol in the Age-Stratified (35–74 Years) General Population: A Cross-Sectional Study in Six European Countries

    PubMed Central

    Stuetz, Wolfgang; Weber, Daniela; Dollé, Martijn E. T.; Jansen, Eugène; Grubeck-Loebenstein, Beatrix; Fiegl, Simone; Toussaint, Olivier; Bernhardt, Juergen; Gonos, Efstathios S.; Franceschi, Claudio; Sikora, Ewa; Moreno-Villanueva, María; Breusing, Nicolle; Grune, Tilman; Bürkle, Alexander

    2016-01-01

    Blood micronutrient status may change with age. We analyzed plasma carotenoids, α-/γ-tocopherol, and retinol and their associations with age, demographic characteristics, and dietary habits (assessed by a short food frequency questionnaire) in a cross-sectional study of 2118 women and men (age-stratified from 35 to 74 years) of the general population from six European countries. Higher age was associated with lower lycopene and α-/β-carotene and higher β-cryptoxanthin, lutein, zeaxanthin, α-/γ-tocopherol, and retinol levels. Significant correlations with age were observed for lycopene (r = −0.248), α-tocopherol (r = 0.208), α-carotene (r = −0.112), and β-cryptoxanthin (r = 0.125; all p < 0.001). Age was inversely associated with lycopene (−6.5% per five-year age increase) and this association remained in the multiple regression model with the significant predictors (covariables) being country, season, cholesterol, gender, smoking status, body mass index (BMI (kg/m2)), and dietary habits. The positive association of α-tocopherol with age remained when all covariates including cholesterol and use of vitamin supplements were included (1.7% vs. 2.4% per five-year age increase). The association of higher β-cryptoxanthin with higher age was no longer statistically significant after adjustment for fruit consumption, whereas the inverse association of α-carotene with age remained in the fully adjusted multivariable model (−4.8% vs. −3.8% per five-year age increase). We conclude from our study that age is an independent predictor of plasma lycopene, α-tocopherol, and α-carotene. PMID:27706032

  9. Output-only modal identification by compressed sensing: Non-uniform low-rate random sampling

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2015-05-01

    Modal identification or testing of structures consists of two phases, namely, data acquisition and data analysis. Some structures, such as aircraft, high-speed machines, and plate-like civil structures, have active modes in the high-frequency range when subjected to high-speed or broadband excitation in their operational conditions. In the data acquisition stage, the Shannon-Nyquist sampling theorem indicates that capturing the high-frequency modes (signals) requires uniform high-rate sampling, resulting in sensing many samples, which potentially imposes burdens on the data transfer (especially on wireless platforms) and the data analysis stage. This paper explores a newly emerging, alternative signal sampling and analysis technique, compressed sensing, and investigates the feasibility of a new method for output-only modal identification of structures in a non-uniform low-rate random sensing framework based on a combination of compressed sensing (CS) and blind source separation (BSS). Specifically, in the data acquisition stage, CS sensors take a few non-uniform low-rate random measurements of the structural response signals, which turn out to be sufficient to capture the underlying mode information. Then, in the data analysis stage, the proposed method uses the BSS technique complexity pursuit (CP), recently explored by the authors, to directly decouple the non-uniform low-rate random samples of the structural responses, simultaneously yielding the mode shape matrix as well as the non-uniform low-rate random samples of the modal responses. Finally, CS with ℓ1-minimization recovers the uniform high-rate modal response from the CP-decoupled non-uniform low-rate random samples of the modal response, thereby enabling estimation of the frequency and damping ratio. Because CS sensors are currently laboratory prototypes and not yet commercially available, their functionality (randomly sensing a few non-uniform samples) is simulated in this study, which is performed on the

  10. Random Sampling Process Leads to Overestimation of β-Diversity of Microbial Communities

    PubMed Central

    Zhou, Jizhong; Jiang, Yi-Huei; Deng, Ye; Shi, Zhou; Zhou, Benjamin Yamin; Xue, Kai; Wu, Liyou; He, Zhili; Yang, Yunfeng

    2013-01-01

    The site-to-site variability in species composition, known as β-diversity, is crucial to understanding spatiotemporal patterns of species diversity and the mechanisms controlling community composition and structure. However, quantifying β-diversity in microbial ecology using sequencing-based technologies is a great challenge because of sequencing errors, bias, poor reproducibility, and poor quantification. Herein, based on general sampling theory, a mathematical framework is first developed for simulating the effects of random sampling processes on quantifying β-diversity when the community size is known or unknown. Also, using an analogous ball example under Poisson sampling with limited sampling efforts, the developed mathematical framework can exactly predict the low reproducibility among technical replicate samples from the same community with a given species abundance distribution, which provides explicit evidence that random sampling processes are the main factor causing high percentages of technical variation. In addition, the predicted values under Poisson random sampling were highly consistent with the observed low percentages of operational taxonomic unit (OTU) overlap (<30% and <20% for two and three tags, respectively, based on both Jaccard and Bray-Curtis dissimilarity indices), further supporting the hypothesis that the poor reproducibility among technical replicates is due to the artifacts associated with random sampling processes. Finally, a mathematical framework was developed for predicting the sampling effort needed to achieve a desired overlap among replicate samples. Our modeling simulations predict that several orders of magnitude more sequencing effort is needed to achieve desired high technical reproducibility. These results suggest that great caution needs to be taken in quantifying and interpreting β-diversity for microbial community analysis using next-generation sequencing technologies. PMID:23760464
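
    A small simulation in the spirit of the framework above, assuming a lognormal species abundance distribution (illustrative): two technical replicates are drawn from the same community by Poisson random sampling, and their OTU overlap is computed.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Community with an assumed lognormal species abundance distribution
    S = 5000                                   # species in the community
    abund = rng.lognormal(mean=0.0, sigma=2.0, size=S)
    p = abund / abund.sum()                    # relative abundances

    def sample_otus(n_reads):
        """Poisson random sampling: each OTU's read count is Poisson."""
        counts = rng.poisson(p * n_reads)
        return counts > 0                      # which OTUs were detected

    def jaccard_overlap(a, b):
        return (a & b).sum() / (a | b).sum()

    # Two technical replicates from the SAME community still overlap poorly
    rep1, rep2 = sample_otus(20000), sample_otus(20000)
    print("OTU overlap between replicates:", round(jaccard_overlap(rep1, rep2), 3))
    ```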

  11. A correction factor for the impact of cluster randomized sampling and its applications.

    PubMed

    Cousineau, Denis; Laurencelle, Louis

    2016-03-01

    Cluster randomized sampling is one method for sampling a population. It requires recruiting subgroups of participants from the population of interest (e.g., whole classes from schools) instead of individuals solicited independently. Here, we demonstrate how clusters affect the standard error of the mean. The presence of clusters influences two quantities, the variance of the means and the expected variance. Ignoring clustering produces spurious statistical significance and reduces statistical power when effect sizes are moderate to large. We therefore propose a correction factor. It can be used to estimate standard errors and confidence intervals of the mean under cluster randomized sampling. This correction factor is easy to integrate into regular tests of means and effect sizes. It can also be used to determine the sample size needed to reach a prespecified power. Finally, this approach is an easy-to-use alternative to linear mixed modeling and hierarchical linear modeling when there are only two levels and no covariates. PMID:26651985
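
    The paper derives its own correction factor; as a rough sketch of the idea, the snippet below inflates the naive standard error of the mean by the standard design effect DEFF = 1 + (m - 1)·ICC, which is not the authors' factor but illustrates why ignoring clusters overstates precision.

    ```python
    import math

    def corrected_sem(sd, n_total, cluster_size, icc):
        """Standard error of the mean inflated by the design effect
        DEFF = 1 + (m - 1) * ICC for clusters of size m (illustrative)."""
        deff = 1.0 + (cluster_size - 1) * icc
        return sd / math.sqrt(n_total) * math.sqrt(deff)

    # 10 classes of 20 students, ICC = 0.10: the naive SEM is too small
    sd, n, m, icc = 15.0, 200, 20, 0.10
    print("naive SEM:    ", sd / math.sqrt(n))              # ~1.06
    print("corrected SEM:", corrected_sem(sd, n, m, icc))   # ~1.81
    ```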

  12. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    EPA Science Inventory

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  13. Recidivism among Child Sexual Abusers: Initial Results of a 13-Year Longitudinal Random Sample

    ERIC Educational Resources Information Center

    Patrick, Steven; Marsh, Robert

    2009-01-01

    In the initial analysis of data from a random sample of all those charged with child sexual abuse in Idaho over a 13-year period, only one predictive variable was found that related to recidivism of those convicted. Variables such as ethnicity, relationship, gender, and age differences did not show a significant or even large association with…

  14. A Model for Predicting Behavioural Sleep Problems in a Random Sample of Australian Pre-Schoolers

    ERIC Educational Resources Information Center

    Hall, Wendy A.; Zubrick, Stephen R.; Silburn, Sven R.; Parsons, Deborah E.; Kurinczuk, Jennifer J.

    2007-01-01

    Behavioural sleep problems (childhood insomnias) can cause distress for both parents and children. This paper reports a model describing predictors of high sleep problem scores in a representative population-based random sample survey of non-Aboriginal singleton children born in 1995 and 1996 (1085 girls and 1129 boys) in Western Australia.…

  15. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    ERIC Educational Resources Information Center

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
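
    A sketch of the kind of randomization-based activity the paper describes (the details here are assumed, not the authors' materials): group labels are repeatedly shuffled and the ANOVA F statistic recomputed, building the null sampling distribution against which the observed F is compared.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def f_statistic(groups):
        """One-way ANOVA F from a list of 1-D arrays."""
        all_y = np.concatenate(groups)
        grand = all_y.mean()
        k, n = len(groups), len(all_y)
        ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    # Three illustrative class sections
    groups = [rng.normal(70, 10, 25), rng.normal(74, 10, 25), rng.normal(78, 10, 25)]
    f_obs = f_statistic(groups)

    # Randomization: shuffle group labels to build the null distribution of F
    pooled = np.concatenate(groups)
    split_at = np.cumsum([len(g) for g in groups])[:-1]
    f_null = []
    for _ in range(5000):
        rng.shuffle(pooled)
        f_null.append(f_statistic(np.split(pooled, split_at)))
    p_value = np.mean(np.array(f_null) >= f_obs)
    print(f"F = {f_obs:.2f}, randomization p = {p_value:.4f}")
    ```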

  16. A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan

    ERIC Educational Resources Information Center

    Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu

    2012-01-01

    To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…

  17. Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2011-01-01

    Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…

  18. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    ERIC Educational Resources Information Center

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
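
    For background, a minimal sketch of the classic Warner randomized-response design (the paper's beta-binomial and Dirichlet-multinomial models are not implemented here); the randomization probability p and the prevalence are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    p = 0.7          # probability the randomizer selects the sensitive question
    pi_true = 0.15   # true prevalence of the sensitive attribute
    n = 2000

    # Each respondent answers the sensitive question w.p. p, its negation otherwise
    asks_sensitive = rng.random(n) < p
    has_attribute = rng.random(n) < pi_true
    says_yes = np.where(asks_sensitive, has_attribute, ~has_attribute)

    lam = says_yes.mean()                  # observed "yes" rate
    pi_hat = (lam + p - 1) / (2 * p - 1)   # Warner (1965) unbiased estimator
    print(f"estimated prevalence: {pi_hat:.3f} (true {pi_true})")
    ```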

  19. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster.

    PubMed

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness of the samples between the two methods was assessed. The method presented here was superior to the traditional method. Only 14% of the samples had a standard deviation higher than expected, as compared with 58% in the traditional method. To reduce bias in the estimation of the variance and the mean of a trait and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  20. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    NASA Astrophysics Data System (ADS)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauging using the reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate a non-equidistantly spaced sensing pulse train in the time domain during dynamic strain measurement. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive, non-uniform, randomly distributed 2 kHz optical sensing pulse train from a scanning frequency that varies triangularly and periodically about a 500 Hz mean.

  1. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    SciTech Connect

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H; Barber, Samuel K; Bouet, Nathalie; McKinney, Wayne R; Takacs, Peter Z; Voronov, Dmitriy L

    2010-09-17

    Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
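
    A toy sketch of the 1-D binary pseudo-random sequences underlying BPR gratings (the paper's BPRML samples are multilayer structures, which are not generated here): a maximal-length LFSR sequence, whose power spectrum is flat away from DC, which is what makes such patterns useful for MTF calibration.

    ```python
    import numpy as np

    def lfsr_sequence(taps=(7, 6), nbits=7):
        """Maximal-length binary pseudo-random sequence from an LFSR.
        taps=(7, 6) corresponds to the primitive polynomial x^7 + x^6 + 1."""
        state = [1] * nbits
        out = []
        for _ in range(2 ** nbits - 1):
            out.append(state[-1])
            feedback = 0
            for t in taps:
                feedback ^= state[t - 1]
            state = [feedback] + state[:-1]
        return np.array(out)

    seq = lfsr_sequence()
    print("length:", len(seq), "ones:", seq.sum())    # 127 bits, 64 ones

    # The power spectrum of the +/-1 mapped sequence is flat apart from DC
    spectrum = np.abs(np.fft.rfft(2.0 * seq - 1.0)) ** 2
    print("spectral flatness (max/min, excluding DC):",
          spectrum[1:].max() / spectrum[1:].min())
    ```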

  2. Characterization of Electron Microscopes with Binary Pseudo-random Multilayer Test Samples

    SciTech Connect

    V Yashchuk; R Conley; E Anderson; S Barber; N Bouet; W McKinney; P Takacs; D Voronov

    2011-12-31

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  3. On analysis-based two-step interpolation methods for randomly sampled seismic data

    NASA Astrophysics Data System (ADS)

    Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao

    2013-02-01

    Interpolating the missing traces of regularly or irregularly sampled seismic records is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform-domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms: the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm, derived from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm, respectively. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
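
    A minimal numpy sketch of the plain POCS half of the comparison above (not the accelerated FPOCS/FISTA variants, and with an illustrative percentile-based threshold): Fourier-domain thresholding alternated with reinsertion of the randomly sampled data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic "trace": a sparse superposition of two Fourier components
    N = 256
    t = np.arange(N)
    signal = np.sin(2 * np.pi * 13 * t / N) + 0.5 * np.sin(2 * np.pi * 31 * t / N)

    # Randomly sampled record: 60% of samples missing
    mask = rng.random(N) < 0.4
    observed = signal * mask

    # POCS: alternate Fourier-domain thresholding and data reinsertion
    x = observed.copy()
    for _ in range(100):
        X = np.fft.fft(x)
        thresh = np.percentile(np.abs(X), 95)   # keep the largest coefficients
        X[np.abs(X) < thresh] = 0.0
        x = np.real(np.fft.ifft(X))
        x[mask] = signal[mask]                  # project onto the data constraint
    print("relative error:", np.linalg.norm(x - signal) / np.linalg.norm(signal))
    ```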

  4. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
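
    The software replaces the motorized stage, but the sampling rule itself is simple; a sketch of systematic random site placement, with the extent and interval values purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def systematic_random_sites(extent, interval):
        """1-D systematic random sampling: a random start in [0, interval),
        then equidistant sites across the structure."""
        start = rng.uniform(0.0, interval)
        return np.arange(start, extent, interval)

    # 2-D grid of sampling sites over a 1000 x 800 um section (illustrative)
    xs = systematic_random_sites(1000.0, 150.0)
    ys = systematic_random_sites(800.0, 150.0)
    sites = [(x, y) for x in xs for y in ys]
    print(len(sites), "sampling sites, first:", sites[0])
    ```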

  5. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν8 and νW) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν2 and ν3) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with

  7. Risk factors for cutaneous malignant melanoma among aircrews and a random sample of the population

    PubMed Central

    Rafnsson, V; Hrafnkelsson, J; Tulinius, H; Sigurgeirsson, B; Hjaltalin, O

    2003-01-01

    Aims: To evaluate whether a difference in the prevalence of risk factors for malignant melanoma in a random sample of the population and among pilots and cabin attendants could explain the increased incidence of malignant melanoma which had been found in previous studies of aircrews. Methods: A questionnaire was used to collect information on hair colour, eye colour, freckles, number of naevi, family history of skin cancer and naevi, skin type, history of sunburn, sunbed use, sunscreen use, and number of sunny vacations. Results: The 239 pilots were all males and there were 856 female cabin attendants, who were compared with 454 males and 1464 females of the same age drawn randomly from the general population. The difference in constitutional and behavioural risk factors for malignant melanoma between the aircrews and the population sample was not substantial. The aircrews had more often used sunscreen and had taken more sunny vacations than the other men and women. The predictive values for use of sunscreen were 0.88 for pilots and 0.85 for cabin attendants, and the predictive values for sunny vacations were 1.36 and 1.34, respectively. Conclusion: There was no substantial difference between the aircrews and the random sample of the population with respect to prevalence of risk factors for malignant melanoma. Thus it is unlikely that the increased incidence of malignant melanoma found in previous studies of pilots and cabin attendants can be solely explained by excessive sun exposure. PMID:14573711

  8. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    SciTech Connect

    Yashchuk, V.V.; Conley, R.; Anderson, E.H.; Barber, S.K.; Bouet, N.; McKinney, W.R.; Takacs, P.Z. and Voronov, D.L.

    2010-12-08

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  9. A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi

    2015-01-01

    In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on a 3D Cartesian sampling with random switching of the frequency encoding direction with other k-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of k-space. These strategies lower the coherence of the acquisition, reducing aliasing artifacts and achieving better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS due to the limited sampling of high-frequency locations. We demonstrated by means of simulations that the proposed acquisition strategies outperformed the standard Compressive Sensing acquisition, resulting in better quality of the reconstructed images and a greater achievable acceleration.

  10. Power and Sample Size for Randomized Phase III Survival Trials under the Weibull Model

    PubMed Central

    Wu, Jianrong

    2015-01-01

    Two parametric tests are proposed for designing randomized two-arm phase III survival trials under the Weibull model. The properties of the two parametric tests are compared with the non-parametric log-rank test through simulation studies. Power and sample size formulas of the two parametric tests are derived. The impact on sample size under mis-specification of the Weibull shape parameter is also investigated. The study can be designed by planning the study duration and handling nonuniform entry and loss to follow-up under the Weibull model using either the proposed parametric tests or the well-known non-parametric log-rank test. PMID:24895942
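
    The paper's Weibull-specific formulas are not reproduced here; as a point of reference for its log-rank comparator, a sketch of the standard Schoenfeld events approximation (all numbers illustrative):

    ```python
    import math
    from scipy.stats import norm

    def logrank_events(hazard_ratio, alpha=0.05, power=0.8, alloc=0.5):
        """Schoenfeld approximation: events needed for a two-arm survival
        trial analysed with the log-rank test."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

    events = logrank_events(hazard_ratio=0.7)
    print(f"required events: {events:.0f}")          # about 247
    # Rough total sample size if 60% of patients are expected to have an event
    print("patients needed:", round(events / 0.6))
    ```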

  11. Randomly dividing homologous samples leads to overinflated accuracies for emotion recognition.

    PubMed

    Liu, Shuang; Zhang, Di; Xu, Minpeng; Qi, Hongzhi; He, Feng; Zhao, Xin; Zhou, Peng; Zhang, Lixin; Ming, Dong

    2015-04-01

    Numerous studies measure brain emotional status by analyzing EEGs recorded under emotional stimuli. However, they often randomly divide the homologous samples into training and testing groups, an approach known as randomly dividing homologous samples (RDHS), without considering the impact of the non-emotional information shared among them, which inflates the recognition accuracy. This work proposes a modified method, integrating homologous samples (IHS), where the homologous samples are used either to build the classifier or to be tested, but not both. The results showed that the classification accuracy was much lower for the IHS than for the RDHS. Furthermore, a positive correlation was found between the accuracy and the overlapping rate of the homologous samples. These findings imply that overinflated accuracies did exist in previous studies that employed the RDHS method for emotion recognition. Moreover, this study performed a feature selection for the IHS condition based on support vector machine-recursive feature elimination, after which the average accuracies were greatly improved, to 85.71% and 77.18% in the picture-induced and video-induced tasks, respectively.

  12. Sample size determination for testing equality in a cluster randomized trial with noncompliance.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2011-01-01

    For administrative convenience or cost efficiency, we may often employ a cluster randomized trial (CRT), in which randomized units are clusters of patients rather than individual patients. Furthermore, because of ethical reasons or patient's decision, it is not uncommon to encounter data in which there are patients not complying with their assigned treatments. Thus, the development of a sample size calculation procedure for a CRT with noncompliance is important and useful in practice. Under the exclusion restriction model, we have developed an asymptotic test procedure using a tanh⁻¹(x) transformation for testing equality between two treatments among compliers for a CRT with noncompliance. We have further derived a sample size formula accounting for both noncompliance and the intraclass correlation for a desired power 1 - β at a nominal α level. We have employed Monte Carlo simulation to evaluate the finite-sample performance of the proposed test procedure with respect to type I error and the accuracy of the derived sample size calculation formula with respect to power in a variety of situations. Finally, we use the data taken from a CRT studying vitamin A supplementation to reduce mortality among preschool children to illustrate the use of sample size calculation proposed here. PMID:21191850

  13. Allowable sampling period for consensus control of multiple general linear dynamical agents in random networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ya; Tian, Yu-Ping

    2010-11-01

    This article studies the consensus problem for a group of sampled-data general linear dynamical agents over random communication networks. Dynamic output feedback protocols are applied to solve the consensus problem. When the sampling period is sufficiently small, it is shown that as long as the mean topology has globally reachable nodes, mean square consensus can be achieved by selecting protocol parameters so that n - 1 specified subsystems are simultaneously stabilised. However, when the sampling period is comparatively large, it is revealed that, unlike for low-order integrator multi-agent systems, the consensus problem may be unsolvable. By using hybrid dynamical system theory, an allowable upper bound on the sampling period is further proposed. Two approaches to designing protocols are also provided. Simulations are given to illustrate the validity of the proposed approaches.

  14. Improving Ambulatory Saliva-Sampling Compliance in Pregnant Women: A Randomized Controlled Study

    PubMed Central

    Moeller, Julian; Lieb, Roselind; Meyer, Andrea H.; Loetscher, Katharina Quack; Krastel, Bettina; Meinlschmidt, Gunther

    2014-01-01

    Objective Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate in pregnant women strategies to improve ambulatory saliva-sampling compliance, and the association between sampling noncompliance and saliva cortisol estimates. Methods We instructed 64 pregnant women to collect eight scheduled saliva samples on two consecutive days each. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. Results Self-reported compliance with a saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%), F(1,60)  = 17.64, p<0.001, but not the reminder intervention (reminders: 68%, without reminders: 72%), F(1,60)  = 0.78, p = 0.379. Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p<0.001. Altered cortisol levels were observed in less objective compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. Conclusions The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest

  15. The effect of sampling rate on observed statistics in a correlated random walk

    PubMed Central

    Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.

    2013-01-01

    Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
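
    A small simulation in the spirit of the study above, assuming a fixed-step correlated random walk with Gaussian turning angles (illustrative parameters): subsampling the same path at longer intervals shrinks the apparent speed.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Correlated random walk: fixed step length, Gaussian turning angles
    n_steps, step_len, turn_sd = 100000, 1.0, 0.3
    headings = np.cumsum(rng.normal(0.0, turn_sd, n_steps))
    steps = np.column_stack([np.cos(headings), np.sin(headings)]) * step_len
    path = np.cumsum(steps, axis=0)

    def apparent_speed(path, interval):
        """Mean apparent speed when the walk is observed every `interval` steps."""
        obs = path[::interval]
        disp = np.linalg.norm(np.diff(obs, axis=0), axis=1)
        return disp.mean() / interval

    for interval in (1, 5, 20, 100):
        print(f"sampling interval {interval:4d}: apparent speed "
              f"{apparent_speed(path, interval):.3f}")
    ```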

  16. Symmetry and random sampling of symmetry independent configurations for the simulation of disordered solids.

    PubMed

    D'Arco, Philippe; Mustapha, Sami; Ferrabone, Matteo; Noël, Yves; De La Pierre, Marco; Dovesi, Roberto

    2013-09-01

    A symmetry-adapted algorithm producing uniformly at random the set of symmetry-independent configurations (SICs) in disordered crystalline systems or solid solutions is presented here. Starting from Pólya's formula, the role of the conjugacy classes of the symmetry group in uniform random sampling is shown. SICs can be obtained for all possible compositions or for a chosen one, and symmetry constraints can be applied. The approach yields the multiplicity of the SICs and allows us to carry out configurational statistics in the reduced space of the SICs. The present low-memory-demanding implementation is briefly sketched. The probability of finding a given SIC or a subset of SICs is discussed as a function of the number of draws, and a precise estimate of this probability is given. The method is illustrated by application to a binary series of carbonates and to the binary spinel solid solution Mg(Al,Fe)2O4.
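
    A toy illustration of the Pólya/Burnside counting that the algorithm builds on, reduced to two-colour configurations of a ring of sites under cyclic symmetry (a far simpler symmetry group than a crystal's), checked by brute force:

    ```python
    from itertools import product
    from math import gcd

    def burnside_necklaces(n, colors=2):
        """Number of symmetry-independent colourings of n sites on a ring
        under the cyclic group C_n (Burnside / Polya counting)."""
        return sum(colors ** gcd(k, n) for k in range(n)) // n

    def brute_force(n, colors=2):
        seen = set()
        for config in product(range(colors), repeat=n):
            # canonical representative: lexicographically smallest rotation
            seen.add(min(config[i:] + config[:i] for i in range(n)))
        return len(seen)

    n = 6
    print(burnside_necklaces(n), brute_force(n))   # both print 14
    ```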

  17. Spatially coherent colour image reconstruction from a trichromatic mosaic with random arrangement of chromatic samples.

    PubMed

    Alleysson, David

    2010-09-01

    Recent high-resolution imaging of the human retina confirms that the trichromatic cone mosaic follows a random arrangement. Moreover, both the arrangement and the proportions of the cones differ widely from individual to individual. These findings provide new insights into our understanding of colour vision, as most previous vision models ignored the mosaic sampling. Here, we propose a cone mosaic sampling simulation applied to colour images. From the simulation, we can infer the processing needed for retrieving spatial and chromatic information from the mosaic without spatial ambiguity. In particular, the focus is on the ability of the visual system to reconstruct coherent spatial information from a plurality of local neighbourhoods. We show that normalized linear processing allows the recovery of achromatic and chromatic information from a mosaic of trichromatic samples arranged randomly. Also, low-frequency components of achromatic information can serve to coarsely estimate orientation, which in turn improves the interpolation of chromatic information. An implication for the visual system is the possibility that, in the cortex, the low-frequency achromatic spatial information of the magnocellular pathway helps separate chromatic information from the mixed achromatic/chromatic information carried by the parvocellular pathway. PMID:20883332

  18. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, David

    1992-01-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the spectral estimates can be extended up to about five times the mean sampling rate.
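
    A minimal sketch of the correlation-based slotting technique mentioned above, in Python rather than the paper's FORTRAN, with an illustrative simulated LV record: products of all sample pairs are accumulated into lag bins of width dt.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated individual-realization LV record: random arrival times,
    # noisy 5 Hz sinusoidal velocity signal (all parameters illustrative)
    T, mean_rate = 20.0, 100.0
    t = np.sort(rng.uniform(0.0, T, int(mean_rate * T)))
    u = np.sin(2 * np.pi * 5.0 * t) + 0.2 * rng.standard_normal(t.size)
    u -= u.mean()

    def slotted_autocorrelation(t, u, dt, n_lags):
        """Correlation-based slotting: lag products of all sample pairs are
        accumulated into bins of width dt and averaged per bin."""
        num = np.zeros(n_lags)
        cnt = np.zeros(n_lags)
        for i in range(t.size):
            k = ((t[i:] - t[i]) / dt).astype(int)
            keep = k < n_lags
            np.add.at(num, k[keep], u[i] * u[i:][keep])
            np.add.at(cnt, k[keep], 1)
        return num / np.maximum(cnt, 1)

    acf = slotted_autocorrelation(t, u, dt=0.002, n_lags=200)
    print("zero-lag value (~signal variance):", round(acf[0], 3))
    ```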

  19. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    NASA Astrophysics Data System (ADS)

    Sree, David

    1992-09-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the spectral estimates can be extended up to about five times the mean sampling rate.

  20. Random sampling of skewed distributions implies Taylor's power law of fluctuation scaling.

    PubMed

    Cohen, Joel E; Xu, Meng

    2015-06-23

    Taylor's law (TL), a widely verified quantitative pattern in ecology and other sciences, describes the variance in a species' population density (or other nonnegative quantity) as a power-law function of the mean density (or other nonnegative quantity): approximately, variance = a(mean)^b, a > 0. Multiple mechanisms have been proposed to explain and interpret TL. Here, we show analytically that observations randomly sampled in blocks from any skewed frequency distribution with four finite moments give rise to TL. We do not claim this is the only way TL arises. We give approximate formulae for the TL parameters and their uncertainty. In computer simulations and an empirical example using basal area densities of red oak trees from Black Rock Forest, our formulae agree with the estimates obtained by least-squares regression. Our results show that the correlated sampling variation of the mean and variance of skewed distributions is statistically sufficient to explain TL under random sampling, without the intervention of any biological or behavioral mechanisms. This finding connects TL with the underlying distribution of population density (or other nonnegative quantity) and provides a baseline against which more complex mechanisms of TL can be compared. PMID:25852144
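
    A quick simulation of the paper's claim, under an assumed lognormal distribution (illustrative): block-sampled means and variances line up approximately on a power law in log-log space.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Skewed "population densities": lognormal observations sampled in blocks
    obs = rng.lognormal(mean=1.0, sigma=1.0, size=100000)
    blocks = obs.reshape(1000, 100)                 # random blocks of 100 counts

    means = blocks.mean(axis=1)
    variances = blocks.var(axis=1, ddof=1)

    # Taylor's law: log(variance) vs log(mean) should be close to linear
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    print(f"TL exponent b = {b:.2f}, a = {np.exp(log_a):.2f}")
    ```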

  1. Characteristics of a random sample of emergency food program users in New York: I. Food pantries.

    PubMed

    Clancy, K L; Bowering, J; Poppendieck, J

    1991-07-01

    Food pantry users throughout New York State were studied, and many demographic differences were found between New York City and Upstate New York respondents. Seven percent of households had no income, and median income as a percent of the poverty level was 59 percent. Slightly more than 40 percent were spending over 60 percent of their incomes on housing. The data from this survey, the first in New York State to employ a random sampling design, demonstrate a sizable gap between household needs and available resources.

  2. On the number of crossings of a strip by sample paths of a random walk

    SciTech Connect

    Lotov, V I; Orlova, N G

    2003-06-30

    Exact expressions are obtained for the distribution of the total number of crossings of a strip by sample paths of a random walk whose jumps have a two-sided geometric distribution. The distribution of the number of crossings during a finite time interval is found in explicit form for walks with jumps taking the values ±1. A limit theorem is proved for the joint distribution of the number of crossings of an expanding strip on a finite (increasing) time interval and the position of the walk at the end of this interval, and the corresponding limit distribution is found.

  3. Convergence Properties of Crystal Structure Prediction by Quasi-Random Sampling

    PubMed Central

    2015-01-01

    Generating sets of trial structures that sample the configurational space of crystal packing possibilities is an essential step in the process of ab initio crystal structure prediction (CSP). One effective methodology for performing such a search relies on low-discrepancy, quasi-random sampling, and our implementation of such a search for molecular crystals is described in this paper. Herein we restrict ourselves to rigid organic molecules and, by considering their geometric properties, build trial crystal packings as starting points for local lattice energy minimization. We also describe a method to match instances of the same structure, which we use to measure the convergence of our packing search toward completeness. The use of these tools is demonstrated for a set of molecules with diverse characteristics, representative of areas of application where CSP has been applied. An important finding is that the lowest energy crystal structures are typically located early and frequently during a quasi-random search of phase space. It is usually the complete sampling of higher energy structures that requires extended sampling. We show how the procedure can first be refined, through targeting the volume of the generated crystal structures, and then extended across a range of space groups to make a full CSP search and locate experimentally observed polymorphs as well as lists of hypothetical polymorphs. As the described method has also been created to lie at the base of more involved approaches to CSP, which are being developed within the Global Lattice Energy Explorer (Glee) software, a few of these extensions are briefly discussed. PMID:26716361

  4. Insulation workers in Belfast. 1. Comparison of a random sample with a control population1

    PubMed Central

    Wallace, William F. M.; Langlands, Jean H. M.

    1971-01-01

    Wallace, W. F. M., and Langlands, J. H. M. (1971). Brit. J. industr. Med., 28, 211-216. Insulation workers in Belfast. 1. Comparison of a random sample with a control population. A sample of 50 men was chosen at random from the population of asbestos insulators in Belfast and matched with a control series of men of similar occupational group with respect to age, height, and smoking habit. Significantly more of the insulators complained of cough and sputum and had basal rales on examination. Clubbing was assessed by means of measurements of the hyponychial angle of both index fingers. These angles were significantly greater in the group of insulators. Twenty-one insulators had X-rays which showed pleural calcification with or without pulmonary fibrosis; one control X-ray showed pulmonary fibrosis. The insulators had no evidence of airways obstruction but static lung volume was reduced and their arterial oxygen tension was lower than that of the controls and their alveolar-arterial oxygen gradient was greater. PMID:5557841

  5. Random sample community-based health surveys: does the effort to reach participants matter?

    PubMed Central

    Messiah, Antoine; Castro, Grettel; Rodríguez de la Vega, Pura; Acuna, Juan M

    2014-01-01

    Objectives Conducting health surveys with community-based random samples is essential to capture an otherwise unreachable population, but these surveys can be biased if the effort to reach participants is insufficient. This study determines the desirable amount of effort to minimise such bias. Design A household-based health survey with random sampling and face-to-face interviews. Up to 11 visits, organised by canvassing rounds, were made to obtain an interview. Setting Single-family homes in an underserved and understudied population in North Miami-Dade County, Florida, USA. Participants Of a probabilistic sample of 2200 household addresses, 30 corresponded to empty lots, 74 were abandoned houses, 625 households declined to participate and 265 could not be reached and interviewed within 11 attempts. Analyses were performed on the 1206 remaining households. Primary outcome Each household was asked if any of their members had been told by a doctor that they had high blood pressure, heart disease including heart attack, cancer, diabetes, anxiety/depression, obesity or asthma. Responses to these questions were analysed by the number of visit attempts needed to obtain the interview. Results Return per visit fell below 10% after four attempts, below 5% after six attempts and below 2% after eight attempts. As the effort increased, household size decreased, while household income and the percentage of interviewees active and employed increased; the proportion of each of the seven health conditions decreased, four of them significantly: heart disease from 20.4% to 9.2%, high blood pressure from 63.5% to 58.1%, anxiety/depression from 24.4% to 9.2% and obesity from 21.8% to 12.6%. Beyond the fifth attempt, however, cumulative percentages varied by less than 1% and precision varied by less than 0.1%. Conclusions In spite of the early and steep drop, sustaining at least five attempts to reach participants is necessary to reduce selection bias. PMID:25510887

  6. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...

  7. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...

  8. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...

  9. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...

  10. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each...
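
    Across these editions the rule is the same; a trivial sketch of the two-random-number selection it prescribes (grid size illustrative, and this is not the regulatory procedure itself):

    ```python
    import random

    random.seed(42)

    def select_grid_point(n_rows, n_cols):
        """Pick one sampling position on a two-dimensional square grid by
        drawing two random numbers, one per axis (illustrative of the rule)."""
        return random.randint(1, n_rows), random.randint(1, n_cols)

    # e.g., a 10 x 10 grid laid over the area to be sampled
    print(select_grid_point(10, 10))
    ```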

  11. Mendelian randomization studies for a continuous exposure under case-control sampling.

    PubMed

    Dai, James Y; Zhang, Xinyi Cindy

    2015-03-15

    In this article, we assess the impact of case-control sampling on Mendelian randomization analyses with a dichotomous disease outcome and a continuous exposure. The 2-stage instrumental variables (2SIV) method uses the prediction of the exposure given genotypes in the logistic regression for the outcome and provides a valid test and an approximation of the causal effect. Under case-control sampling, however, the first stage of the 2SIV procedure becomes a secondary trait association, which requires proper adjustment for the biased sampling. Through theoretical development and simulations, we compare the naïve estimator, the inverse probability weighted estimator, and the maximum likelihood estimator for the first-stage association and, more importantly, the resulting 2SIV estimates of the causal effect. We also include in our comparison the causal odds ratio estimate derived from structural mean models by double-logistic regression. Our results suggest that the naïve estimator is substantially biased under the alternative, yet it remains unbiased under the null hypothesis of no causal effect; the maximum likelihood estimator yields smaller variance and mean squared error than other estimators; and the structural mean models estimator delivers the smallest bias, though generally incurring a larger variance and sometimes having issues in algorithm stability and convergence.

  12. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the estimated transition probabilities and steady states can differ widely from the real values if one uses the standard deterministic approach on noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability. PMID:25257023

  13. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.

  14. Notes on interval estimation of the generalized odds ratio under stratified random sampling.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2013-05-01

    It is not rare to encounter the patient response on the ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH) and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL can generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use the data taken from a study of smoking status and breathing test among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.

  15. Object motion tracking in the NDE laboratory by random sample iterative closest point

    NASA Astrophysics Data System (ADS)

    Radkowski, Rafael; Wehr, David; Gregory, Elizabeth; Holland, Stephen D.

    2016-02-01

    We present a computationally efficient technique for real-time motion tracking in the NDE laboratory. Our goal is to track object shapes in a flash thermography test stand to determine the position and orientation of the specimen, which makes it possible to register thermography data to a 3D part model. Object shapes can be different specimens and fixtures. Specimens can be manually aligned at any test stand; the position and orientation of every a priori known shape can be computed and forwarded to the data management software. Our technique relies on the random sample consensus (RANSAC) approach to the iterative closest point (ICP) problem for identifying object shapes and is thus robust in different situations. The paper introduces the computational techniques and experiments along with the results.
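
    A compact sketch of the random-sample-consensus pose fit at the core of such a method (the full nearest-neighbour ICP loop is omitted, correspondences are assumed given, and all data are synthetic): minimal sets of correspondences are drawn at random, a rigid transform is fitted by the Kabsch/SVD construction, and the pose with the most inliers wins.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def kabsch(P, Q):
        """Best-fit rotation R and translation t with R @ P + t ~= Q."""
        cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
        H = (P - cp) @ (Q - cq).T
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cq - R @ cp

    def ransac_align(P, Q, iters=200, tol=0.02):
        """RANSAC: fit poses on random minimal correspondence sets,
        keep the pose with the largest inlier count."""
        best, best_inliers = None, -1
        n = P.shape[1]
        for _ in range(iters):
            idx = rng.choice(n, size=3, replace=False)
            R, t = kabsch(P[:, idx], Q[:, idx])
            err = np.linalg.norm(R @ P + t - Q, axis=0)
            inliers = int((err < tol).sum())
            if inliers > best_inliers:
                best, best_inliers = (R, t), inliers
        return best, best_inliers

    # Synthetic specimen point cloud and its moved copy with 20% outliers
    P = rng.random((3, 500))
    R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    Q = R_true @ P + np.array([[0.3], [0.1], [0.0]])
    Q[:, :100] += rng.random((3, 100))          # corrupted correspondences
    (R, t), inliers = ransac_align(P, Q)
    print("inliers:", inliers, "rotation error:", np.abs(R - R_true).max())
    ```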

  16. Random sampling of the Green’s Functions for reversible reactions with an intermediate state

    SciTech Connect

    Plante, Ianik; Devroye, Luc; Cucinotta, Francis A.

    2013-06-01

    Exact random variate generators were developed to sample Green’s functions used in Brownian Dynamics (BD) algorithms for the simulation of chemical systems. These algorithms, which use less than a kilobyte of memory, provide a useful alternative to the table look-up method that has been used in similar work. The cases studied with this approach are (1) diffusion-influenced reactions; (2) reversible diffusion-influenced reactions and (3) reactions with an intermediate state such as enzymatic catalysis. The results are validated by comparison with those obtained by the Independent Reaction Times (IRT) method. This work is part of our effort to develop models for understanding the role of radiation chemistry in radiation effects on the human body, and it may eventually be included in event-based models of space radiation risk.

  17. Growth by random walker sampling and scaling of the dielectric breakdown model

    NASA Astrophysics Data System (ADS)

    Somfai, Ellák; Goold, Nicholas R.; Ball, Robin C.; Devita, Jason P.; Sander, Leonard M.

    2004-11-01

    Random walkers absorbing on a boundary sample the harmonic measure linearly and independently: we discuss how the recurrence times between impacts enable nonlinear moments of the measure to be estimated. From this we derive a technique to simulate dielectric breakdown model growth, which is governed nonlinearly by the harmonic measure. For diffusion-limited aggregation, recurrence times are shown to be accurate and effective in probing the multifractal growth measure in its active region. For the dielectric breakdown model our technique grows large clusters efficiently and we are led to significantly revise earlier exponent estimates. Previous results by two conformal mapping techniques were less converged than expected, and in particular a recent theoretical suggestion of superuniversality is firmly refuted.

  18. Control Capacity and A Random Sampling Method in Exploring Controllability of Complex Networks

    PubMed Central

    Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDSs suggests that nodes do not participate in control equally, prompting us to quantify their participation. Here we introduce control capacity, which quantifies the likelihood that a node is a driver node. To measure this quantity efficiently, we develop a random sampling algorithm. This algorithm not only provides a statistical estimate of the control capacity, but also bridges the gap between multiple microscopic control configurations and macroscopic properties of the network under control. We demonstrate that the possibility of being a driver node decreases with a node's in-degree and is independent of its out-degree. Given the inherent multiplicity of MDSs, our findings offer tools to explore control in various complex systems. PMID:23912679
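
    In the structural-controllability picture, driver nodes are the nodes left unmatched by a maximum matching of the network, so control capacity can be approximated by sampling many maximum matchings and recording how often each node ends up unmatched. The sketch below (assuming networkx) randomizes only the edge insertion order, which is a much cruder randomization than the paper's sampling algorithm.

        import random
        import networkx as nx

        def driver_frequency(edges, nodes, n_samples=200, seed=0):
            """Estimate, for each node, how often it is a driver (unmatched) node."""
            rng = random.Random(seed)
            counts = {v: 0 for v in nodes}
            for _ in range(n_samples):
                B = nx.Graph()
                B.add_nodes_from((("out", v) for v in nodes), bipartite=0)
                B.add_nodes_from((("in", v) for v in nodes), bipartite=1)
                shuffled = list(edges)
                rng.shuffle(shuffled)          # crude way to reach different matchings
                B.add_edges_from((("out", u), ("in", v)) for u, v in shuffled)
                top = [("out", v) for v in nodes]
                M = nx.bipartite.hopcroft_karp_matching(B, top_nodes=top)
                for v in nodes:
                    if ("in", v) not in M:     # unmatched 'in' copy => driver node
                        counts[v] += 1
            return {v: c / n_samples for v, c in counts.items()}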

  19. A descriptive study of access to services in a random sample of Canadian rural emergency departments

    PubMed Central

    Fleet, Richard; Poitras, Julien; Maltais-Giguère, Julie; Villa, Julie; Archambault, Patrick

    2013-01-01

    Objective To examine 24/7 access to services and consultants in a sample of Canadian rural emergency departments (EDs). Design Cross-sectional study—mixed methods (structured interview, survey and government databases) with random sampling of hospitals. Setting Canadian rural EDs (rural small town (RST) definition—Statistics Canada). Participants 28% (95/336) of Canadian rural EDs providing 24/7 physician coverage located in hospitals with acute care hospitalisation beds. Main outcome measures General characteristics of the rural EDs, information about 24/7 access to consultants, equipment and services, and the proportion of rural hospitals more than 300 km from levels 1 and 2 trauma centres. Results Of the 336 rural EDs identified, 122 (36%) were randomly selected and contacted. Overall, 95 EDs participated in the study (participation rate, 78%). Hospitals had, on an average, 23 acute care beds, 7 ED stretchers and 13 500 annual ED visits. The proportion of rural hospitals with local access to the following 24/7 services was paediatrician, 5%; obstetrician, 10%; psychiatrist, 11%; internist, 12%; intensive care unit, 17%; CT scanner, 20%; surgeon, 26%; ultrasound, 28%; basic X-ray, 97% and laboratory services, 99%. Forty-four per cent and 54% of the RST EDs were more than 300 km from a level 1 and level 2 trauma centre, respectively. Conclusions This is the first study describing the services available in Canadian rural EDs. Apart from basic laboratory and X-ray services, most rural EDs have limited access to consultants, advanced imaging and critical care services. A detailed study is needed to evaluate the impact of these limited services on patient outcomes, costs and interfacility transport demands. PMID:24285633

  20. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling.

    PubMed

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
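
    The dependence of multi-photon hits on the number of microvilli can be checked with a few lines of simulation; the function below (all parameter values illustrative) counts the fraction of absorbed photons that share a microvillus with at least one other photon in the same time step.

        import numpy as np

        def multi_hit_fraction(n_microvilli, photons_per_step, n_steps=20000, seed=0):
            """Fraction of absorbed photons sharing a microvillus with another photon
            in the same step (the candidates for sublinear quantum-gain summation)."""
            rng = np.random.default_rng(seed)
            shared = total = 0
            for _ in range(n_steps):
                hits = rng.integers(0, n_microvilli, rng.poisson(photons_per_step))
                _, counts = np.unique(hits, return_counts=True)
                shared += counts[counts > 1].sum()
                total += len(hits)
            return shared / max(total, 1)

        # ~30000 microvilli (fly R1-R6) vs ~300: multi-hits stay rare only in the former.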

  1. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling

    PubMed Central

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779

  2. The contribution of simple random sampling to observed variations in faecal egg counts.

    PubMed

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided.
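
    For a count governed by a Poisson process, an exact confidence interval follows directly from chi-squared quantiles; the sketch below applies this to a McMaster count, assuming the common multiplier of 50 eggs per gram per egg counted.

        from scipy.stats import chi2

        def epg_confidence_interval(eggs_counted, multiplier=50, alpha=0.05):
            """Exact Poisson CI for a McMaster count, scaled to eggs per gram (epg)."""
            k = eggs_counted
            lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
            hi = chi2.ppf(1 - alpha / 2, 2 * k + 2) / 2
            return k * multiplier, lo * multiplier, hi * multiplier

        # epg_confidence_interval(4) -> (200, ~54, ~512): a 4-egg count is very imprecise.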

  3. Notes on interval estimation of the gamma correlation under stratified random sampling.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2012-07-01

    We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. We use the data taken from a general social survey studying the association between the income level and job satisfaction with strata formed by genders in black Americans published elsewhere to illustrate the practical use of these interval estimators. PMID:22622622

  4. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials.

    PubMed

    Carlisle, J B; Dexter, F; Pandit, J J; Shafer, S L; Yentis, S M

    2015-07-01

    In a previous paper, one of the authors (JBC) used a chi-squared method to analyse the means (SD) of baseline variables, such as height or weight, from randomised controlled trials by Fujii et al., concluding that the probabilities that the reported distributions arose by chance were infinitesimally small. Subsequent testing of that chi-squared method, using simulation, suggested that the method was incorrect. This paper corrects the chi-squared method and tests its performance and the performance of Monte Carlo simulations and ANOVA to analyse the probability of random sampling. The corrected chi-squared method and the ANOVA method became inaccurate when applied to means that were reported imprecisely. Monte Carlo simulations confirmed that baseline data from 158 randomised controlled trials by Fujii et al. were different to those from 329 trials published by other authors and that the distribution of Fujii et al.'s data was different to the expected distribution, both p < 10^-16. The number of Fujii randomised controlled trials with unlikely distributions was smaller with Monte Carlo simulation than with the 2012 chi-squared method: 102 vs 117 trials with p < 0.05; 60 vs 86 for p < 0.01; 30 vs 56 for p < 0.001; and 12 vs 24 for p < 0.00001, respectively. The Monte Carlo analysis nevertheless confirmed the original conclusion that the distribution of the data presented by Fujii et al. was extremely unlikely to have arisen from observed data. The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals.
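
    The Monte Carlo idea can be sketched for a single baseline variable of a two-arm trial: simulate the difference in arm means under the hypothesis that both arms are random samples from one population, and locate the reported difference in that distribution. This is a simplified stand-in, not the published procedure.

        import numpy as np

        def baseline_p(mean1, n1, mean2, n2, sd, n_sim=100_000, seed=0):
            """Monte Carlo p-value for the difference in baseline means of one two-arm
            trial, assuming both arms are random samples from a common population."""
            rng = np.random.default_rng(seed)
            diffs = (rng.normal(0.0, sd / np.sqrt(n1), n_sim)
                     - rng.normal(0.0, sd / np.sqrt(n2), n_sim))
            return float((np.abs(diffs) >= abs(mean1 - mean2)).mean())

        # Across many independent trials these p-values should be ~Uniform(0, 1); an
        # excess near 1 (baselines 'too similar') is as suspicious as an excess near 0.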

  5. Reality Check for the Chinese Microblog Space: A Random Sampling Approach

    PubMed Central

    Fu, King-wa; Chau, Michael

    2013-01-01

    Chinese microblogs have drawn global attention to this online application’s potential impact on the country’s social and political environment. However, representative and reliable statistics on Chinese microbloggers are limited. Using a random sampling approach, this study collected Chinese microblog data from the service provider, analyzing the profile and the pattern of usage for 29,998 microblog accounts. From our analysis, 57.4% (95% CI 56.9%,58.0%) of the accounts’ timelines were empty. Among the 12,774 samples with non-zero statuses, 86.9% (95% CI 86.2%,87.4%) did not make an original post in the 7-day study period. By contrast, 0.51% (95% CI 0.4%,0.65%) wrote twenty or more original posts and 0.45% (95% CI 0.35%,0.60%) reposted more than 40 unique messages within the 7-day period. A small group of microbloggers created the majority of content and drew other users’ attention. About 4.8% (95% CI 4.4%,5.2%) of the 12,774 users contributed more than 80% (95% CI 78.6%,80.3%) of the original posts and about 4.8% (95% CI 4.5%,5.2%) managed to create posts that were reposted or received comments at least once. Moreover, a regression analysis revealed that volume of followers is a key determinant of creating original microblog posts, reposting messages, being reposted, and receiving comments. Volume of friends is found to be linked only with the number of reposts. Gender differences and regional disparities in using microblogs in China are also observed. PMID:23520502
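
    With a simple random sample of accounts, interval widths like those reported follow from standard binomial theory; for example, a Wilson interval approximately reproduces the empty-timeline estimate (the success count below is back-calculated from the reported 57.4%, so it is illustrative).

        import math

        def wilson_ci(successes, n, z=1.96):
            """Wilson 95% CI for a proportion estimated from a simple random sample."""
            p = successes / n
            centre = (p + z * z / (2 * n)) / (1 + z * z / n)
            half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
            return centre - half, centre + half

        # wilson_ci(17219, 29998) -> roughly (0.568, 0.580), matching the reported CI.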

  6. Reality check for the Chinese microblog space: a random sampling approach.

    PubMed

    Fu, King-wa; Chau, Michael

    2013-01-01

    Chinese microblogs have drawn global attention to this online application's potential impact on the country's social and political environment. However, representative and reliable statistics on Chinese microbloggers are limited. Using a random sampling approach, this study collected Chinese microblog data from the service provider, analyzing the profile and the pattern of usage for 29,998 microblog accounts. From our analysis, 57.4% (95% CI 56.9%,58.0%) of the accounts' timelines were empty. Among the 12,774 samples with non-zero statuses, 86.9% (95% CI 86.2%,87.4%) did not make an original post in the 7-day study period. By contrast, 0.51% (95% CI 0.4%,0.65%) wrote twenty or more original posts and 0.45% (95% CI 0.35%,0.60%) reposted more than 40 unique messages within the 7-day period. A small group of microbloggers created the majority of content and drew other users' attention. About 4.8% (95% CI 4.4%,5.2%) of the 12,774 users contributed more than 80% (95% CI 78.6%,80.3%) of the original posts and about 4.8% (95% CI 4.5%,5.2%) managed to create posts that were reposted or received comments at least once. Moreover, a regression analysis revealed that volume of followers is a key determinant of creating original microblog posts, reposting messages, being reposted, and receiving comments. Volume of friends is found to be linked only with the number of reposts. Gender differences and regional disparities in using microblogs in China are also observed.

  7. Model-wise and point-wise random sample consensus for robust regression and outlier detection.

    PubMed

    El-Melegy, Moumen T

    2014-11-01

    Popular regression techniques often suffer in the presence of data outliers. Most previous efforts to solve this problem have focused on using an estimation algorithm that minimizes a robust M-estimator based error criterion instead of the usual non-robust mean squared error. However, the robustness gained from M-estimators is still low. This paper addresses robust regression and outlier detection in a random sample consensus (RANSAC) framework. It studies the classical RANSAC framework and highlights its model-wise nature for processing the data. Furthermore, it introduces for the first time a point-wise strategy of RANSAC. New estimation algorithms are developed following both the model-wise and point-wise RANSAC concepts. The proposed algorithms' theoretical robustness and breakdown points are investigated in a novel probabilistic setting. While the proposed concepts and algorithms are generic and general enough to adopt many regression machineries, the paper focuses on multilayered feed-forward neural networks in solving regression problems. The algorithms are evaluated on synthetic and real data, contaminated with high degrees of outliers, and compared to existing neural network training algorithms. Furthermore, to improve the time performance, parallel implementations of the two algorithms are developed and assessed to utilize the multiple CPU cores available on modern computers. PMID:25047916
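
    A model-wise RANSAC regression with a neural-network base learner can be sketched with scikit-learn's off-the-shelf RANSAC meta-estimator (assuming scikit-learn >= 1.2; this is a generic stand-in, not the paper's model-wise or point-wise algorithms):

        import numpy as np
        from sklearn.linear_model import RANSACRegressor
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, (300, 1))
        y = np.sin(X).ravel() + 0.1 * rng.normal(size=300)
        y[:90] += rng.uniform(2, 5, 90)          # 30% gross outliers

        # Fit on random minimal subsets, keep the model with the largest consensus set.
        model = RANSACRegressor(
            estimator=MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000),
            min_samples=60, residual_threshold=0.3, random_state=0)
        model.fit(X, y)
        print(model.inlier_mask_.sum(), "points kept as inliers")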

  8. Neurofeedback Against Binge Eating: A Randomized Controlled Trial in a Female Subclinical Threshold Sample.

    PubMed

    Schmidt, Jennifer; Martin, Alexandra

    2016-09-01

    Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association. PMID:27121224

  9. Discriminative motif discovery via simulated evolution and random under-sampling.

    PubMed

    Song, Tao; Gu, Hong

    2014-01-01

    Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and that our method recovers most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
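
    Random under-sampling itself is a one-liner: discard a random subset of the majority class until the two classes are comparable in size. A minimal sketch (function and parameter names are illustrative):

        import random

        def undersample(pos, neg, ratio=1.0, seed=0):
            """Randomly under-sample the majority (negative) set so that
            len(neg_kept) ~= ratio * len(pos), a common fix for class imbalance."""
            rng = random.Random(seed)
            k = min(len(neg), int(ratio * len(pos)))
            return pos, rng.sample(neg, k)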

  10. Misperceptions of spoken words: Data from a random sample of American English words

    PubMed Central

    Albert Felty, Robert; Buchwald, Adam; Gruenenfelder, Thomas M.; Pisoni, David B.

    2013-01-01

    This study reports a detailed analysis of incorrect responses from an open-set spoken word recognition experiment of 1428 words designed to be a random sample of the entire American English lexicon. The stimuli were presented in six-talker babble to 192 young, normal-hearing listeners at three signal-to-noise ratios (0, +5, and +10 dB). The results revealed several patterns: (1) errors tended to have a higher frequency of occurrence than did the corresponding target word, and frequency of occurrence of error responses was significantly correlated with target frequency of occurrence; (2) incorrect responses were close to the target words in terms of number of phonemes and syllables but had a mean edit distance of 3; (3) for syllables, substitutions were much more frequent than either deletions or additions; for phonemes, deletions were slightly more frequent than substitutions; both were more frequent than additions; and (4) for errors involving just a single segment, substitutions were more frequent than either deletions or additions. The raw data are being made available to other researchers as supplementary material to form the beginnings of a database of speech errors collected under controlled laboratory conditions. PMID:23862832
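
    The edit distance in point (2) is the standard Levenshtein distance over phoneme (or letter) sequences; a compact dynamic-programming implementation:

        def edit_distance(a, b):
            """Levenshtein distance: minimum substitutions, deletions and additions
            needed to turn sequence a into sequence b (letters, phonemes, ...)."""
            prev = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                cur = [i]
                for j, y in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (x != y)))
                prev = cur
            return prev[-1]

        # edit_distance("cat", "cart") == 1; works on phoneme lists too:
        # edit_distance(["K", "AE", "T"], ["K", "AA", "T"]) == 1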

  11. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads, however, to several deficiencies. Firstly, assigning the fault detection task only to the manager node unbalances the whole network and quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the fault nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contains most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
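
    The selection step itself is plain simple random sampling; a minimal sketch (function and parameter names are illustrative) that can be re-run each probing round so the energy cost rotates across nodes:

        import random

        def select_probe_stations(cluster_heads, fraction=0.2, seed=None):
            """Pick probe stations by simple random sampling from candidate cluster
            heads, instead of always burdening the manager node."""
            rng = random.Random(seed)
            k = max(1, round(fraction * len(cluster_heads)))
            return rng.sample(cluster_heads, k)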

  12. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
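
    The core differential-evolution proposal behind DREAM-like samplers is easy to sketch; the version below omits DREAM's randomized subspace sampling and crossover adaptation, so it is plain DE-MC rather than DREAM proper.

        import numpy as np

        def de_mc_step(chains, log_post, gamma=None, eps=1e-6, seed=None):
            """One differential-evolution Metropolis sweep: each chain proposes a jump
            along the difference of two other randomly chosen chains."""
            rng = np.random.default_rng(seed)
            n, d = chains.shape
            g = 2.38 / np.sqrt(2 * d) if gamma is None else gamma   # standard DE-MC scale
            for i in range(n):
                r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
                prop = chains[i] + g * (chains[r1] - chains[r2]) + eps * rng.normal(size=d)
                if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
                    chains[i] = prop
            return chains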

  13. Discriminative Motif Discovery via Simulated Evolution and Random Under-Sampling

    PubMed Central

    Song, Tao; Gu, Hong

    2014-01-01

    Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and that our method recovers most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes. PMID:24551063

  14. Neurofeedback Against Binge Eating: A Randomized Controlled Trial in a Female Subclinical Threshold Sample.

    PubMed

    Schmidt, Jennifer; Martin, Alexandra

    2016-09-01

    Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  15. Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model

    NASA Astrophysics Data System (ADS)

    Dean, David S.; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb

    2016-09-01

    The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0, L) and then periodically repeated over the whole real line and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ~ A/f^2, i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary, negative, or positive order k and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).

  16. Entropic sampling via Wang-Landau random walks in dominant energy subspaces

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Martinos, S. S.; Hadjiagapiou, I. A.; Fytas, N. G.; Kalozoumis, P.

    2005-12-01

    Dominant energy subspaces of statistical systems are defined with the help of restrictive conditions on various characteristics of the energy distribution, such as the probability density and the fourth order Binder’s cumulant. Our analysis generalizes the ideas of the critical minimum energy subspace (CRMES) technique, applied previously to study the specific heat’s finite-size scaling. Here, we illustrate alternatives that are useful for the analysis of further finite-size anomalies and the behavior of the corresponding dominant subspaces is presented for the two-dimensional (2D) Baxter-Wu and the 2D and 3D Ising models. In order to show that a CRMES technique is adequate for the study of magnetic anomalies, we study and test simple methods which provide the means for an accurate determination of the energy-order-parameter (E,M) histograms via Wang-Landau random walks. The 2D Ising model is used as a test case and it is shown that high-level Wang-Landau sampling schemes yield excellent estimates for all magnetic properties. Our estimates compare very well with those of the traditional Metropolis method. The relevant dominant energy subspaces and dominant magnetization subspaces scale as expected with exponents α/ν and γ/ν , respectively. Using the Metropolis method we examine the time evolution of the corresponding dominant magnetization subspaces and we uncover the reasons behind the inadequacy of the Metropolis method to produce a reliable estimation scheme for the tail regime of the order-parameter distribution.
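
    A bare-bones Wang-Landau random walk for the 2D Ising density of states illustrates the entropic sampling underlying the method; the parameters below are demo-sized assumptions, and production runs need far more sweeps and a much smaller final modification factor.

        import numpy as np

        def wang_landau_ising(L=8, lnf_final=1e-3, flat=0.8, seed=0):
            """Wang-Landau estimate of log g(E) for the 2D Ising model (PBC)."""
            rng = np.random.default_rng(seed)
            N = L * L
            s = rng.choice([-1, 1], size=(L, L))
            E = -int(sum(s[i, j] * (s[(i + 1) % L, j] + s[i, (j + 1) % L])
                         for i in range(L) for j in range(L)))
            lng, H, lnf = {}, {}, 1.0
            while lnf > lnf_final:
                for _ in range(2000 * N):
                    i, j = rng.integers(0, L, 2)
                    dE = 2 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                        + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                    # Accept with probability g(E)/g(E'); unseen energies have log g = 0.
                    if np.log(rng.random()) < lng.get(E, 0.0) - lng.get(E + dE, 0.0):
                        s[i, j] *= -1
                        E += dE
                    lng[E] = lng.get(E, 0.0) + lnf
                    H[E] = H.get(E, 0) + 1
                if min(H.values()) > flat * np.mean(list(H.values())):
                    lnf /= 2.0      # histogram flat enough: refine the modification factor
                    H = {}
            return lng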

  17. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
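
    The RANSAC step for a constant-velocity drifter reduces to fitting a linear track through (t, x, y) detections: two detections define a candidate track and the remaining detections vote. A generic sketch (thresholds illustrative, stellar-clutter rejection omitted), not the RANSAC-MT implementation:

        import numpy as np

        def ransac_track(det, iters=2000, tol=1.5, min_hits=6, seed=0):
            """Fit a constant-velocity track to detections det = array of rows (t, x, y)."""
            rng = np.random.default_rng(seed)
            best = (0, None)
            for _ in range(iters):
                i, j = rng.choice(len(det), 2, replace=False)
                if det[i, 0] == det[j, 0]:
                    continue                          # need two different frames
                v = (det[j, 1:] - det[i, 1:]) / (det[j, 0] - det[i, 0])
                pred = det[i, 1:] + np.outer(det[:, 0] - det[i, 0], v)
                inl = np.linalg.norm(det[:, 1:] - pred, axis=1) < tol
                if inl.sum() > max(best[0], min_hits - 1):
                    best = (inl.sum(), (det[i, 1:], v, inl))
            return best[1]    # (anchor position, velocity, inlier mask), or None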

  18. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    ERIC Educational Resources Information Center

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  19. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.

  20. Estimates of food and macronutrient intake in a random sample of Northern Ireland adolescents.

    PubMed

    Strain, J J; Robson, P J; Livingstone, M B; Primrose, E D; Savage, J M; Cran, G W; Boreham, C A

    1994-09-01

    Estimates of food consumption and macronutrient intake were obtained from a randomly selected population sample (2%) of 1015 adolescents aged 12 and 15 years in Northern Ireland during the 1990/1991 school year. Dietary intake was assessed by diet history with photographic album to estimate portion size. Reported median energy intakes were 11.0 and 13.1 MJ/d for boys aged 12 and 15 years respectively and 9.2 and 9.1 MJ/d for girls of these ages. Protein, carbohydrate and total sugars intakes as a percentage of total energy varied little between the age and sex groups and were approximately 11, 49 and 20% respectively of daily total energy intakes. Median dietary fibre intakes were approximately 20 and 24 g/d for boys aged 12 and 15 years respectively and 18 and 19 g/d for girls of these ages. Major food sources of energy (as a percentage of total energy intakes) were bread and cereals (15-18%), cakes and biscuits (12-14%), chips and crisps (13-14%), dairy products (9-11%), meat and meat products (9-11%) and confectionery (9%). Fruit and vegetable intakes were low at about 2.5% and 1.5% respectively of total energy intakes. Median fat intakes were high at 39% of total daily energy intakes. Major food sources of fat as a percentage of total fat intakes were from the food groupings: chips and crisps (16-19%), meat and meat products (14-17%), fats and oils (14-16%), cakes and biscuits (13-16%) and dairy products (12-15%). Median intakes of saturated fatty acids were also high at approximately 15% of daily total energy intake while intakes of monounsaturated fatty acids averaged 12% of daily total energy intake. Median polyunsaturated fatty acid (PUFA) intakes were low, comprising 5.2 and 5.5% of daily total energy intake for boys aged 12 and 15 years respectively and were lower than the PUFA intakes (5.9 and 6.3% of daily total energy intake) for girls of these ages. About 1.3% for boys and 1.4% for girls of daily total energy intake was in the form of n-3 PUFA. Ca and

  1. Computerized stratified random site-selection approaches for design of a ground-water-quality sampling network

    USGS Publications Warehouse

    Scott, J.C.

    1990-01-01

    Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
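
    The selection logic, stripped of the GIS machinery, is stratified simple random sampling from categorized candidate sites; a minimal sketch (names are illustrative, and the report's cell-based variant would add a grouping step per category):

        import random
        from collections import defaultdict

        def stratified_random_sites(sites, n_per_category, seed=0):
            """sites: iterable of (site_id, category); draw n_per_category[cat] sites
            at random from each category."""
            rng = random.Random(seed)
            by_cat = defaultdict(list)
            for sid, cat in sites:
                by_cat[cat].append(sid)
            return {cat: rng.sample(ids, min(n_per_category.get(cat, 0), len(ids)))
                    for cat, ids in by_cat.items()}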

  2. Identifying the origin of groundwater samples in a multi-layer aquifer system with Random Forest classification

    NASA Astrophysics Data System (ADS)

    Baudron, Paul; Alonso-Sarría, Francisco; García-Aróstegui, José Luís; Cánovas-García, Fulgencio; Martínez-Vicente, David; Moreno-Brotóns, Jesús

    2013-08-01

    Accurate identification of the origin of groundwater samples is not always possible in complex multilayered aquifers. This poses a major difficulty for a reliable interpretation of geochemical results. The problem is especially severe when information on the tubewell design is hard to obtain. This paper shows a supervised classification method based on the Random Forest (RF) machine learning technique to identify the layer from which groundwater samples were extracted. The classification rules were based on the major ion composition of the samples. We applied this method to the Campo de Cartagena multi-layer aquifer system, in southeastern Spain. A large amount of hydrogeochemical data was available, but only a limited fraction of the sampled tubewells included a reliable determination of the borehole design and, consequently, of the aquifer layer being exploited. An added difficulty was the very similar composition of water samples extracted from different aquifer layers. Moreover, not all groundwater samples included the same geochemical variables. Despite such a difficult background, the Random Forest classification reached accuracies over 90%. These results were much better than those of the Linear Discriminant Analysis (LDA) and Decision Trees (CART) supervised classification methods. From a total of 1549 samples, 805 proceeded from one unique identified aquifer, 409 proceeded from a possible blend of waters from several aquifers and 335 were of unknown origin. Only 468 of the 805 unique-aquifer samples included all the chemical variables needed to calibrate and validate the models. Finally, 107 of the groundwater samples of unknown origin could be classified; most of the remaining unclassified samples lacked a complete set of chemical variables. The uncertainty in the identification of the training samples was taken into account to enhance the model.
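
    The workflow maps directly onto scikit-learn; the data below are synthetic stand-ins (the ion list and class structure are assumptions, not the paper's dataset):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        ions = ["Na", "K", "Ca", "Mg", "Cl", "SO4", "HCO3"]   # major-ion features
        # Synthetic stand-in for labelled samples (known source layer 0/1/2):
        y = rng.integers(0, 3, 400)
        X = rng.normal(loc=y[:, None] * 0.5, scale=1.0, size=(400, len(ions)))

        rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
        print(cross_val_score(rf, X, y, cv=5).mean())       # cross-validated accuracy
        rf.fit(X, y)
        print(dict(zip(ions, rf.feature_importances_)))     # which ions drive the split
        # rf.predict(...) would then classify the samples of unknown origin.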

  3. Wavefront watermarking of technical and biological samples using digital random phase modulation and phase retrieval

    NASA Astrophysics Data System (ADS)

    Carpio, Justine Patricia L.; Almoro, Percival F.

    2014-10-01

    A technique for digital watermarking of smooth object wavefronts using digital random phase modulation and multiple-plane iterative phase retrieval is demonstrated experimentally. A complex-valued watermark is first encrypted using two random phase masks of known distributions before being superposed onto a set of host wavefront intensity patterns. Encryption scaling factor and depth of randomization of the masks are optimized such that the amplitude and phase watermarks are decrypted successfully and are not distorting the host wavefront. Given that the watermarked intensity patterns and the numerous decryption keys are available (i.e. distances between recording planes, light source wavelength, pixel size, random phase masks and their distances to the planes are all known), increasing the number of watermarked patterns used results in enhanced quality of decrypted watermarks. The main advantage of wavefront watermarking via the phase retrieval approach compared to the holographic approach is the avoidance of reference wave-induced aberration. Watermarking of wavefronts from lenses and unstained human cheek cells demonstrate the effectiveness of the technique.
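
    The encryption step is in the spirit of classical double random phase encoding, which the sketch below implements with numpy FFTs; the paper's full scheme additionally superposes the encrypted watermark onto host intensity patterns and recovers it by multiple-plane phase retrieval, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 256
        # Toy complex watermark: unit amplitude, weak phase structure.
        watermark = np.exp(1j * rng.uniform(0, 0.2, (N, N)))

        # Two known random phase masks: object plane and Fourier plane.
        m1 = np.exp(2j * np.pi * rng.random((N, N)))
        m2 = np.exp(2j * np.pi * rng.random((N, N)))

        encrypted = np.fft.ifft2(np.fft.fft2(watermark * m1) * m2)                # encrypt
        decrypted = np.fft.ifft2(np.fft.fft2(encrypted) * m2.conj()) * m1.conj()  # decrypt

        assert np.allclose(decrypted, watermark)   # the masks act as decryption keys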

  4. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
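
    The role of the intraclass correlation coefficient (ICC) in sizing a cluster randomized trial is captured by the design effect 1 + (m - 1) * ICC for clusters of size m; a minimal illustration (numbers are made up):

        def cluster_trial_n(n_srs, cluster_size, icc):
            """Inflate a simple-random-sampling sample size by the design effect
            1 + (m - 1) * ICC for a cluster randomized trial."""
            deff = 1 + (cluster_size - 1) * icc
            return int(round(n_srs * deff))

        # e.g. 400 students needed under SRS, classes of 25, ICC = 0.2:
        # cluster_trial_n(400, 25, 0.2) -> 2320  (design effect 5.8)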

  5. Random sparse sampling strategy using stochastic simulation and estimation for a population pharmacokinetic study

    PubMed Central

    Huang, Xiao-hui; Wang, Kun; Huang, Ji-han; Xu, Ling; Li, Lu-jin; Sheng, Yu-cheng; Zheng, Qing-shan

    2013-01-01

    The purpose of this study was to use the stochastic simulation and estimation method to evaluate the effects of sample size and the number of samples per individual on model development and evaluation. The pharmacokinetic parameters and inter- and intra-individual variation were obtained from a population pharmacokinetic model of clinical trials of amlodipine. Stochastic simulation and estimation were performed to evaluate the efficiency of different sparse sampling scenarios for estimating the compartment model. Simulated data were generated 1000 times and three candidate models were used to fit the 1000 data sets. Fifty-five sparse sampling scenarios were investigated and compared. The results showed that 60 samples with three points and 20 samples with five points are recommended, and that the quantitative methodology of stochastic simulation and estimation is valuable for efficiently estimating the compartment model and can be used for other similar model development and evaluation approaches. PMID:24493975

  6. Polytobacco use and multiple-product smoking among a random community sample of African-American adults

    PubMed Central

    Corral, Irma; Landrine, Hope; Simms, Denise Adams; Bess, Jukelia J

    2013-01-01

    Objectives Little is known about polytobacco use among African-American adults. This study is the first to explore this among a random, statewide, community sample of African-American adults. Setting Community-based sampling obtained a random, household-probability sample of African-American adults from California, surveyed door to door in randomly selected census tracts statewide. Participants Participants were a statewide, random-household sample of N=2118 African-American adults from California who completed a survey on past 30-day smoking of cigarettes, blunts, bidis, kreteks, cigarillos, marijuana and cigars. Results Almost half (49.3%) of the African-American cigarette-smokers and 14.9% of the cigarette non-smokers had smoked at least one non-cigarette product in the past 30 days. Smokers had a substantial prevalence of smoking cigarillos (28.7%) and blunts (27.7%). Logistic regressions revealed that the odds of smoking most of the non-cigarette products were higher for cigarette smokers and men, inversely related to age, and unrelated to socioeconomic status. However, smoking of blunts, bidis and kreteks was not predicted by cigarette smoking. Conclusions Smoking of cigarillos (eg, Phillies, Black & Mild) and blunts may be prevalent among African-American cigarette-smokers and non-smokers alike, but such products are not examined in most population-level smoking research. Smoking of these products should be included in surveillance studies, in cancer prevention programmes and in healthcare provider-assessment of smoking, and addressed in smoking cessation programmes as well. PMID:24334154

  7. Analysis of apoB and apoC-II gene polymorphism in random sample and CHD patients from Moscow

    SciTech Connect

    Pogoda, T.V.; Nikonova, A.; Perova, N.V.

    1994-09-01

    We have analyzed the allele frequency distributions of the 3' apoB gene minisatellite and the apoC-II gene microsatellite in a random sample and in coronary heart disease (CHD) patients. For this purpose we used the PCR technique followed by high-resolution PAGE. It was revealed that the apoB allele harboring 30 repeats (apoB 30), as well as the apoC-II allele harboring 30 repeats (apoC-II 30), were less frequent in patients, while the frequency of the apoB 32 and apoC-II 17 alleles was greater in patients. A greater frequency of apoB alleles larger in size than apoB 46 (defined as 'long', L) was observed in patients with high apoB levels (>160 mg/dl). The analysis of apoB genotype distribution showed that in the random sample the most common genotype was apoB 34,36 (a combination of the most frequent alleles in the random sample). In patients with high apoB levels, it was half as frequent, and the most common genotype was apoB 36,L (43% versus 12% in the random sample). Analysis of the lipid spectrum of subjects from the random sample carrying different apoB and apoC-II alleles indicated that the apoB 32 and apoC-II 17 alleles were associated with atherogenic shifts in the lipid profile, while the apoB 30 and apoC-II 30 alleles were associated with an apparently favorable lipid profile. An increment of disease-related risk was observed for subjects combining the apoB 32 allele or the apoB 36,L genotype with the apoC-II 17 allele. Alternatively, combination of these apoB variants with the apoC-II 30 allele resulted in decreased related risk. In conclusion, simultaneous analysis of two candidate gene variants demonstrated interaction in their influence on the lipid spectrum.

  8. Code to generate random identifiers and select QA/QC samples

    USGS Publications Warehouse

    Mehnert, Edward

    1992-01-01

    SAMPLID is a PC-based FORTRAN-77 code which generates unique numbers for identification of samples, selection of QA/QC samples, and generation of labels. These procedures are tedious, but using a computer code such as SAMPLID can increase efficiency and reduce or eliminate errors and bias. The algorithm used in SAMPLID for generation of pseudorandom numbers is free of statistical flaws present in commonly available algorithms.
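
    The same bookkeeping takes a few lines in a modern language; a sketch of what SAMPLID automates (the ID range and QC flagging scheme are illustrative assumptions):

        import random

        def sample_ids(n_samples, n_qc, id_range=(10000, 100000), seed=None):
            """Unique random identifiers for field samples, with a random subset
            flagged as blind QA/QC duplicates."""
            rng = random.Random(seed)
            ids = rng.sample(range(*id_range), n_samples)   # unique by construction
            qc = set(rng.sample(ids, n_qc))
            return [(i, i in qc) for i in ids]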

  9. Hybrid random walk-linear discriminant analysis method for unwrapping quantitative phase microscopy images of biological samples

    PubMed Central

    Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.

    2015-01-01

    Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples. PMID:26305212

  10. Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples

    NASA Astrophysics Data System (ADS)

    Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.

    2016-06-01

    In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthesis of the affected areas via sparse reconstruction. For this purpose, a cloud and cloud-shadow mask must be given. With respect to previous works, the process has a higher degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values below a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision and yields results with low overall radiometric and spectral distortion.
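
    A least-squares version of the per-pixel reconstruction can be sketched as follows; the paper's method is a sparse reconstruction, so ordinary least squares is swapped in here for brevity, and all array shapes and names are assumptions.

        import numpy as np

        def restore_pixel(series, corrupted, dictionary):
            """series: (T,) time series of one pixel; corrupted: boolean mask of cloudy
            dates; dictionary: (T, K) time series of randomly drawn cloud-free pixels.
            Fit on the clear dates, synthesize the cloudy ones."""
            clear = ~corrupted
            coef, *_ = np.linalg.lstsq(dictionary[clear], series[clear], rcond=None)
            out = series.copy()
            out[corrupted] = dictionary[corrupted] @ coef
            return out

        def best_random_dictionary(series, corrupted, candidates, rng, k=8, tries=10):
            """Among several random dictionaries, keep the one with the smallest
            reconstruction error on the clear dates (the paper's selection rule)."""
            clear = ~corrupted
            def err(D):
                c, *_ = np.linalg.lstsq(D[clear], series[clear], rcond=None)
                return np.linalg.norm(D[clear] @ c - series[clear])
            dicts = [candidates[:, rng.choice(candidates.shape[1], k, replace=False)]
                     for _ in range(tries)]
            return restore_pixel(series, corrupted, min(dicts, key=err))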

  11. Assessing the feasibility and sample quality of a national random-digit dialing cellular phone survey of young adults.

    PubMed

    Gundersen, Daniel A; ZuWallack, Randal S; Dayton, James; Echeverría, Sandra E; Delnevo, Cristine D

    2014-01-01

    The majority of adults aged 18-34 years have only cellular phones, making random-digit dialing of landline telephones an obsolete methodology for surveillance of this population. However, 95% of this group has cellular phones. This article reports on the 2011 National Young Adult Health Survey (NYAHS), a pilot study conducted in the 50 US states and Washington, DC, that used random-digit dialing of cellular phones and benchmarked this methodology against that of the 2011 Behavioral Risk Factor Surveillance System (BRFSS). Comparisons of the demographic distributions of subjects in the NYAHS and BRFSS (aged 18-34 years) with US Census data revealed adequate reach for all demographic subgroups. After adjustment for design factors, the mean absolute deviations across demographic groups were 3 percentage points for the NYAHS and 2.8 percentage points for the BRFSS, nationally, and were comparable for each census region. Two-sided z tests comparing cigarette smoking prevalence revealed no significant differences between NYAHS and BRFSS participants overall or by subgroups. The design effects of the sampling weight were 2.09 for the NYAHS and 3.26 for the BRFSS. Response rates for the NYAHS and BRFSS cellular phone sampling frames were comparable. Our assessment of the NYAHS methodology found that random-digit dialing of cellular phones is a feasible methodology for surveillance of young adults.
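
    The reported design effects can be computed from the sampling weights with Kish's formula; a minimal sketch:

        import numpy as np

        def kish_deff(weights):
            """Kish design effect of a set of sampling weights:
            deff = n * sum(w^2) / (sum w)^2; 1.0 means no weighting loss."""
            w = np.asarray(weights, dtype=float)
            return len(w) * (w ** 2).sum() / w.sum() ** 2

        # A deff of 2.09 (NYAHS) means the effective sample size is roughly n / 2.09,
        # versus n / 3.26 for the BRFSS comparison sample.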

  12. Assessing the Feasibility and Sample Quality of a National Random-digit Dialing Cellular Phone Survey of Young Adults

    PubMed Central

    Gundersen, Daniel A.; ZuWallack, Randal S.; Dayton, James; Echeverría, Sandra E.; Delnevo, Cristine D.

    2014-01-01

    The majority of adults aged 18–34 years have only cellular phones, making random-digit dialing of landline telephones an obsolete methodology for surveillance of this population. However, 95% of this group has cellular phones. This article reports on the 2011 National Young Adult Health Survey (NYAHS), a pilot study conducted in the 50 US states and Washington, DC, that used random-digit dialing of cellular phones and benchmarked this methodology against that of the 2011 Behavioral Risk Factor Surveillance System (BRFSS). Comparisons of the demographic distributions of subjects in the NYAHS and BRFSS (aged 18–34 years) with US Census data revealed adequate reach for all demographic subgroups. After adjustment for design factors, the mean absolute deviations across demographic groups were 3 percentage points for the NYAHS and 2.8 percentage points for the BRFSS, nationally, and were comparable for each census region. Two-sided z tests comparing cigarette smoking prevalence revealed no significant differences between NYAHS and BRFSS participants overall or by subgroups. The design effects of the sampling weight were 2.09 for the NYAHS and 3.26 for the BRFSS. Response rates for the NYAHS and BRFSS cellular phone sampling frames were comparable. Our assessment of the NYAHS methodology found that random-digit dialing of cellular phones is a feasible methodology for surveillance of young adults. PMID:24100957

  13. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    SciTech Connect

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2010-07-09

    We discuss the results of SEM and TEM measurements with binary pseudo-random multilayer (BPRML) test samples fabricated from a BPRML (WSi2/Si with a fundamental layer thickness of 3 nm) with a dual beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements also reveal a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
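
    A binary pseudo-random sequence of the kind underlying such test samples can be generated as a maximum-length sequence, for example with scipy; the mapping to a WSi2/Si layer stack below is only schematic, not the fabricated design.

        from scipy.signal import max_len_seq

        # A maximum-length (pseudo-random) binary sequence has a flat power spectrum,
        # which is what makes a BPRML usable at all spatial frequencies up to Nyquist.
        bits, _ = max_len_seq(10)                        # 2**10 - 1 binary values
        layers = ["WSi2" if b else "Si" for b in bits]   # nominal 3 nm fundamental layers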

  14. Sampling reactive pathways with random walks in chemical space: Applications to molecular dissociation and catalysis

    NASA Astrophysics Data System (ADS)

    Habershon, Scott

    2015-09-01

    Automatically generating chemical reaction pathways is a significant computational challenge, particularly in the case where a given chemical system can exhibit multiple reactants and products, as well as multiple pathways connecting these. Here, we outline a computational approach to allow automated sampling of chemical reaction pathways, including sampling of different chemical species at the reaction end-points. The key features of this scheme are (i) introduction of a Hamiltonian which describes a reaction "string" connecting reactant and products, (ii) definition of reactant and product species as chemical connectivity graphs, and (iii) development of a scheme for updating the chemical graphs associated with the reaction end-points. By performing molecular dynamics sampling of the Hamiltonian describing the complete reaction pathway, we are able to sample multiple different paths in configuration space between given chemical products; by periodically modifying the connectivity graphs describing the chemical identities of the end-points we are also able to sample the allowed chemical space of the system. Overall, this scheme therefore provides a route to automated generation of a "roadmap" describing chemical reactivity. This approach is first applied to model dissociation pathways in formaldehyde, H2CO, as described by a parameterised potential energy surface (PES). A second application to the HCo(CO)3 catalyzed hydroformylation of ethene (oxo process), using density functional tight-binding to model the PES, demonstrates that our graph-based approach is capable of sampling the intermediate paths in the commonly accepted catalytic mechanism, as well as several secondary reactions. Further algorithmic improvements are suggested which will pave the way for treating complex multi-step reaction processes in a more efficient manner.

  16. Random sampling of the Central European bat fauna reveals the existence of numerous hitherto unknown adenoviruses.

    PubMed

    Vidovszky, Márton; Kohl, Claudia; Boldogh, Sándor; Görföl, Tamás; Wibbelt, Gudrun; Kurth, Andreas; Harrach, Balázs

    2015-12-01

    From over 1250 extant species of the order Chiroptera, 25 and 28 are known to occur in Germany and Hungary, respectively. Close to 350 samples originating from 28 bat species (17 from Germany, 27 from Hungary) were screened for the presence of adenoviruses (AdVs) using a nested PCR that targets the DNA polymerase gene of AdVs. An additional PCR was designed and applied to amplify a fragment from the gene encoding the IVa2 protein of mastadenoviruses. All German samples originated from organs of bats found moribund or dead. The Hungarian samples were excrements collected from colonies of known bat species, throat or rectal swab samples, taken from live individuals that had been captured for faunistic surveys and migration studies, as well as internal organs of dead specimens. Overall, 51 samples (14.73%) were found positive. We detected 28 seemingly novel and six previously described bat AdVs by sequencing the PCR products. The positivity rate was the highest among the guano samples of bat colonies. In phylogeny reconstructions, the AdVs detected in bats clustered roughly, but not perfectly, according to the hosts' families (Vespertilionidae, Rhinolophidae, Hipposideridae, Phyllostomidae and Pteropodidae). In a few cases, identical sequences were derived from animals of closely related species. On the other hand, some bat species proved to harbour more than one type of AdV. The high prevalence of infection and the large number of chiropteran species worldwide make us hypothesise that hundreds of different yet unknown AdV types might circulate in bats. PMID:26599097

  17. The revised Temperament and Character Inventory: normative data by sex and age from a Spanish normal randomized sample

    PubMed Central

    Labad, Javier; Martorell, Lourdes; Gaviria, Ana; Bayón, Carmen; Vilella, Elisabet; Cloninger, C. Robert

    2015-01-01

    Objectives. The psychometric properties regarding sex and age for the revised version of the Temperament and Character Inventory (TCI-R) and its derived short version, the Temperament and Character Inventory (TCI-140), were evaluated with a randomized sample from the community. Methods. A randomized sample of 367 normal adult subjects from a Spanish municipality, who were representative of the general population based on sex and age, participated in the current study. Descriptive statistics and internal consistency according to the α coefficient were obtained for all of the dimensions and facets. T-tests and univariate analyses of variance, followed by Bonferroni tests, were conducted to compare the distributions of the TCI-R dimension scores by age and sex. Results. On both the TCI-R and TCI-140, women had higher scores for Harm Avoidance, Reward Dependence and Cooperativeness than men, whereas men had higher scores for Persistence. Age correlated negatively with Novelty Seeking, Reward Dependence and Cooperativeness and positively with Harm Avoidance and Self-transcendence. Young subjects between 18 and 35 years had higher scores than older subjects in Novelty Seeking and Reward Dependence, whereas subjects between 51 and 77 years scored higher in both Harm Avoidance and Self-transcendence. The alphas for the dimensions were between 0.74 and 0.87 for the TCI-R and between 0.63 and 0.83 for the TCI-140. Conclusion. Results, which were obtained with a randomized sample, suggest that there are specific distributions of personality traits by sex and age. Overall, both the TCI-R and the abbreviated TCI-140 were reliable in the ‘good-to-excellent’ range. A strength of the current study is the representativeness of the sample. PMID:26713237

  18. Power and Sample Size Calculations for Multivariate Linear Models with Random Explanatory Variables

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2005-01-01

    This article considers the problem of power and sample size calculations for normal outcomes within the framework of multivariate linear models. The emphasis is placed on the practical situation in which not only the values of the response variables for each subject are available only after the observations are made, but also the levels of the explanatory…

  19. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    PubMed

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
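
    The required scale follows from binomial sampling error. A back-of-the-envelope sketch using the normal approximation (illustrative only; it counts synapses rather than disector pairs and ignores the stereological details):

        import math

        def counts_needed(p, rel_error, z=1.96):
            """Number of synapses to examine so that the 95% CI half-width
            of an estimated proportion p is rel_error * p."""
            return math.ceil(z**2 * (1 - p) / (p * rel_error**2))

        # A pathway contributing ~0.2% of synapses, estimated to within +/-50%:
        print(counts_needed(0.002, 0.5))   # about 7,700 synapses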

  20. Autosomal STR allele frequencies for the CODIS system from a large random population sample in Chile.

    PubMed

    Vergara, Ismael A; Villouta, Pamela; Herrera, Sandra; Melo, Francisco

    2012-05-01

    The thirteen autosomal STR loci of the CODIS system were typed from DNA of 732 unrelated male individuals sampled from different locations in Chile. This is the first report of allele frequencies for the thirteen STR loci of the CODIS system in the Chilean population.

  1. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
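
    The contrast between the two weighting schemes is easy to state numerically. A minimal sketch with hypothetical effect sizes, variances, and sample sizes; in the random-effects case the inverse-variance weights would instead be 1/(v_i + τ²), with τ² the between-studies variance:

        import numpy as np

        d = np.array([0.30, 0.10, 0.45, 0.25])   # effect sizes
        v = np.array([0.02, 0.05, 0.01, 0.04])   # estimated sampling variances
        n = np.array([100,   40,  200,   50])    # study sample sizes

        w_iv = 1.0 / v                           # (estimated) inverse-variance weights
        w_n = n.astype(float)                    # sample-size weights
        print("inverse-variance mean:", np.sum(w_iv * d) / np.sum(w_iv))
        print("sample-size mean:     ", np.sum(w_n * d) / np.sum(w_n))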

  2. Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses

    PubMed Central

    Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.

    2011-01-01

    Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373

  4. Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.

    PubMed

    Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K

    2016-01-01

    The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions, the estimates depend on the study population. It is important to be aware of this issue and to define a specific population under surveillance in order to obtain consistent results when applying serological measures for public health purposes.

  5. Is the prevalence of child sexual abuse decreasing? Evidence from a random sample of 750 young adult women.

    PubMed

    Bagley, C

    1990-06-01

    An adult recall study in Calgary, Alberta in 1988-89 of child sexual abuse used stratified, random sampling to identify 750 women ages 18 to 27 yr., divided into 10 age cohorts of approximately 75 each. Those aged 18 and 19 yr. recalled significantly less contact abuse up to age 16 than those aged 20 to 27 yr. Experiencing print or visual media on the topic of sexual abuse was associated with a decreased prevalence. It is argued that in a climate of publicity and greater understanding of help sources and harmful effects, the actual prevalence of child sexual abuse may be decreasing.

  6. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order.

    PubMed

    Diederich, Adele; Oswald, Peter

    2014-01-01

    A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time with different variances are investigated. Depending on the time and order schedule the model predicts a rich choice probability/choice response time pattern including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability p0 > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions are also true for the widely applied Wiener process. PMID:25249963
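
    A minimal simulation conveys the model class, though not the authors' birth-death implementation. The sketch below runs an Euler-Maruyama approximation of an Ornstein-Uhlenbeck accumulator whose drift switches once, at a random attention time; all parameter values are hypothetical:

        import numpy as np

        def simulate_trial(drifts, switch_time, bound=1.0, decay=0.5,
                           sigma=1.0, dt=0.001, t_max=5.0, rng=None):
            """Return (+1/-1 choice, response time), or (0, t_max) when no
            boundary is reached within the horizon (a 'non-decision')."""
            if rng is None:
                rng = np.random.default_rng()
            x, t = 0.0, 0.0
            while t < t_max:
                mu = drifts[0] if t < switch_time else drifts[1]
                x += (mu - decay * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
                if abs(x) >= bound:
                    return (1 if x > 0 else -1), t
            return 0, t_max

        rng = np.random.default_rng(3)
        # Attribute 1 favours option A (+0.8), attribute 2 favours B (-0.5);
        # attention times are drawn from an exponential distribution.
        results = [simulate_trial((0.8, -0.5), rng.exponential(0.6), rng=rng)
                   for _ in range(2000)]
        choices = np.array([c for c, _ in results])
        print("P(A):", np.mean(choices == 1), "P(no decision):", np.mean(choices == 0))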

  7. SubPatch: random kd-tree on a sub-sampled patch set for nearest neighbor field estimation

    NASA Astrophysics Data System (ADS)

    Pedersoli, Fabrizio; Benini, Sergio; Adami, Nicola; Okuda, Masahiro; Leonardi, Riccardo

    2015-02-01

    We propose a new method to compute the approximate nearest-neighbors field (ANNF) between image pairs using a random kd-tree and patch set sub-sampling. By exploiting image coherence we demonstrate that it is possible to reduce the number of patches on which we compute the ANNF, while maintaining high overall accuracy on the final result. Information on missing patches is then recovered by interpolation and propagation of good matches. The introduction of the sub-sampling factor on patch sets also allows for setting the desired trade-off between accuracy and speed, providing a flexibility that state-of-the-art methods lack. Tests conducted on a public database prove that our algorithm achieves superior performance with respect to the PatchMatch (PM) and Coherence Sensitivity Hashing (CSH) algorithms in comparable computational time.
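
    The core idea, building the kd-tree on a sub-sampled set of candidate patches, can be sketched in a few lines (omitting the interpolation and propagation of good matches; names and parameters are invented for illustration):

        import numpy as np
        from scipy.spatial import cKDTree

        def subsampled_annf(src, dst, patch=8, stride=4):
            """Approximate NN field between two grayscale images, with the
            kd-tree built on destination patches taken every 'stride' pixels
            (the sub-sampling factor trading accuracy for speed)."""
            def patches(img, step):
                h, w = img.shape
                pos = [(y, x) for y in range(0, h - patch + 1, step)
                              for x in range(0, w - patch + 1, step)]
                feat = np.array([img[y:y+patch, x:x+patch].ravel() for y, x in pos])
                return np.array(pos), feat

            dst_pos, dst_feat = patches(dst, stride)   # sub-sampled candidates
            src_pos, src_feat = patches(src, patch)    # non-overlapping queries
            _, idx = cKDTree(dst_feat).query(src_feat, k=1)
            return src_pos, dst_pos[idx]               # matched coordinates

        rng = np.random.default_rng(4)
        a, b = rng.random((64, 64)), rng.random((64, 64))
        qpos, match = subsampled_annf(a, b)
        print(qpos.shape, match.shape)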

  8. Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas

    NASA Technical Reports Server (NTRS)

    Basilevsky, Alexander T.; Head, James W., III

    1995-01-01

    The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10^6 sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation

  9. Structural Effects of Network Sampling Coverage I: Nodes Missing at Random

    PubMed Central

    Smith, Jeffrey A.; Moody, James

    2013-01-01

    Network measures assume a census of a well-bounded population. This level of coverage is rarely achieved in practice, however, and we have only limited information on the robustness of network measures to incomplete coverage. This paper examines the effect of node-level missingness on 4 classes of network measures: centrality, centralization, topology and homophily across a diverse sample of 12 empirical networks. We use a Monte Carlo simulation process to generate data with known levels of missingness and compare the resulting network scores to their known starting values. As with past studies (Borgatti et al 2006; Kossinets 2006), we find that measurement bias generally increases with more missing data. The exact rate and nature of this increase, however, varies systematically across network measures. For example, betweenness and Bonacich centralization are quite sensitive to missing data while closeness and in-degree are robust. Similarly, while the tau statistic and distance are difficult to capture with missing data, transitivity shows little bias even with very high levels of missingness. The results are also clearly dependent on the features of the network. Larger, more centralized networks are generally more robust to missing data, but this is especially true for centrality and centralization measures. More cohesive networks are robust to missing data when measuring topological features but not when measuring centralization. Overall, the results suggest that missing data may have quite large or quite small effects on network measurement, depending on the type of network and the question being posed. PMID:24311893
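
    The simulation design, deleting a known fraction of nodes and comparing the recomputed measure with its full-network value, can be sketched as follows (with a random graph standing in for the paper's 12 empirical networks):

        import numpy as np
        import networkx as nx

        def missingness_bias(G, frac_missing, n_reps=50, seed=0):
            """Monte Carlo estimate of how node-level missingness distorts
            mean betweenness centrality."""
            rng = np.random.default_rng(seed)
            true_val = np.mean(list(nx.betweenness_centrality(G).values()))
            nodes = list(G.nodes())
            k = round(frac_missing * len(nodes))
            vals = []
            for _ in range(n_reps):
                keep = rng.choice(len(nodes), size=len(nodes) - k, replace=False)
                H = G.subgraph([nodes[i] for i in keep])
                vals.append(np.mean(list(nx.betweenness_centrality(H).values())))
            return true_val, float(np.mean(vals))

        G = nx.erdos_renyi_graph(100, 0.05, seed=1)
        print(missingness_bias(G, frac_missing=0.2))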

  10. Loneliness and Ethnic Composition of the School Class: A Nationally Random Sample of Adolescents.

    PubMed

    Madsen, Katrine Rich; Damsgaard, Mogens Trab; Rubin, Mark; Jervelund, Signe Smith; Lasgaard, Mathias; Walsh, Sophie; Stevens, Gonneke G W J M; Holstein, Bjørn E

    2016-07-01

    Loneliness is a public health concern that increases the risk for several health, behavioral and academic problems among adolescents. Some studies have suggested that adolescents with an ethnic minority background have a higher risk for loneliness than adolescents from the majority population. The increasing numbers of migrant youth around the world mean growing numbers of heterogeneous school environments in many countries. Even though adolescents spend a substantial amount of time at school, there is currently very little non-U.S. research that has examined the importance of the ethnic composition of school classes for loneliness in adolescence. The present research aimed to address this gap by exploring the association between loneliness and three dimensions of the ethnic composition in the school class: (1) membership of ethnic majority in the school class, (2) the size of own ethnic group in the school class, and (3) the ethnic diversity of the school class. We used data from the Danish 2014 Health Behaviour in School-aged Children survey: a nationally representative sample of 4383 (51.2 % girls) 11-15-year-olds. Multilevel logistic regression analyses revealed that adolescents who did not belong to the ethnic majority in the school class had increased odds for loneliness compared to adolescents that belonged to the ethnic majority. Furthermore, having more same-ethnic classmates lowered the odds for loneliness. We did not find any statistically significant association between the ethnic diversity of the school classes and loneliness. The study adds novel and important findings to how ethnicity in a school class context, as opposed to ethnicity per se, influences adolescents' loneliness. PMID:26861709

  11. VASCULAR RISK FACTORS AND COGNITIVE DECLINE IN A POPULATION SAMPLE

    PubMed Central

    Ganguli, Mary; Fu, Bo; Snitz, Beth E.; Unverzagt, Frederick W.; Loewenstein, David A.; Hughes, Tiffany F.; Chang, Chung-Chou H.

    2014-01-01

    We examined several vascular factors in relation to rates of decline in five cognitive domains in a population-based cohort. In an age-stratified random sample (N=1982) aged 65+ years, we assessed at baseline the cognitive domains of attention, executive function, memory, language, and visuospatial function, and also vascular, inflammatory, and metabolic indices. Random effects models generated slopes of cognitive decline over the next four years; linear models identified vascular factors associated with these slopes, adjusting for demographics, baseline cognition, and potential interactions. Several vascular risk factors (history of stroke, diabetes, central obesity, C-Reactive Protein), although associated with lower baseline cognitive performance, did not predict rate of subsequent decline. APOE*4 genotype was associated with accelerated decline in language, memory, and executive functions. Homocysteine elevation was associated with faster decline in executive function. Hypertension (history or systolic blood pressure >140 mm Hg) was associated with slower decline in memory. Baseline alcohol consumption was associated with slower decline in attention, language, and memory. Different indices of vascular risk are associated with low performance and with rates of decline in different cognitive domains. Cardiovascular mechanisms explain at least some of the variance in cognitive decline. Selective survival may also play a role. PMID:24126216

  12. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge

    PubMed Central

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

    Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. Results HPV prevalence for high-risk types was 62.3% (95%CI: 53.7–70.2) detected by s-DRY, 56.2% (95%CI: 47.6–64.4) by Dr-WET, and 54.6% (95%CI: 46.1–62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5–79.8) for s-FTA, 84.6% (95%CI: 66.5–93.9) for s-DRY, and 76.9% (95%CI: 58.0–89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Conclusion Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 43310942 PMID:26630353
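
    Agreement between collection methods is summarized with Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is observed and p_e chance agreement. A minimal sketch on simulated paired binary results (the error rates below are assumptions, not the study's data):

        import numpy as np

        def cohens_kappa(x, y):
            """Cohen's kappa for two paired binary results (1 = HPV positive)."""
            x, y = np.asarray(x, int), np.asarray(y, int)
            po = np.mean(x == y)
            pe = np.mean(x) * np.mean(y) + np.mean(1 - x) * np.mean(1 - y)
            return (po - pe) / (1 - pe)

        rng = np.random.default_rng(5)
        truth = rng.random(130) < 0.6
        s_dry = truth ^ (rng.random(130) < 0.15)   # each method flips ~15% of results
        s_fta = truth ^ (rng.random(130) < 0.15)
        print(round(cohens_kappa(s_dry, s_fta), 2))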

  13. Mamu-DQA1 allele and genotype frequencies in a randomly sampled breeding colony of rhesus macaques (Macaca mulatta).

    PubMed

    Rolfs, B K; Lorenz, J G; Wu, C C; Lerche, N W; Smith, D G

    2001-04-01

    We studied the allelic and genotypic distribution of the major histocompatibility class-II locus DQA1 observed in a random sample of Indian rhesus macaques (Macaca mulatta) from a major breeding facility in the United States. The DNA was isolated from whole blood samples collected between 1991 and 1994 from 65 Indian rhesus monkeys. Polymerase chain reaction-restriction fragment length polymorphism analysis (PCR-RFLP), which involves use of specific amplification of DQA1 exon 2 and subsequent restriction digestion of the 242-base pair fragment, was used to genotype the animals for the 20 known macaque (Mamu)-DQA1 alleles. Frequencies for four alleles (DQA1*240x, *2502, *2503 and *0102) differed significantly from those reported in a smaller sample of rhesus macaques from the German Primate Center. The modest genetic survey of Mamu-DQA1 genotypes presented here will be particularly useful in designing epidemiologic studies that investigate associations between immunogenetic background and disease susceptibility in macaque models of human disease.

  14. BEHAVIORAL RISK DISPARITIES IN A RANDOM SAMPLE OF SELF-IDENTIFYING GAY AND NON-GAY MALE UNIVERSITY STUDENTS

    PubMed Central

    Rhodes, Scott D.; McCoy, Thomas P.; Wilkin, Aimee M.; Wolfson, Mark

    2013-01-01

    This internet-based study was designed to compare health risk behaviors of gay and non-gay university students from stratified random cross-sectional samples of undergraduate students. Mean age of the 4,167 male participants was 20.5 (±2.7) years. Of these, 206 (4.9%) self-identified as gay and 3,961 (95.1%) self-identified as heterosexual. After adjusting for selected characteristics and clustering within university, gay men had higher odds of reporting: multiple sexual partners; cigarette smoking; methamphetamine use; gamma-hydroxybutyrate (GHB) use; other illicit drug use within the past 30 days and during lifetime; and intimate partner violence (IPV). Understanding the health risk behaviors of gay and heterosexual men is crucial to identifying associated factors and intervening upon them using appropriate and tailored strategies to reduce behavioral risk disparities and improve health outcomes. PMID:19882428

  15. The effects of quercetin supplementation on cognitive functioning in a community sample: a randomized, placebo-controlled trial

    PubMed Central

    Canu, Will H.; Trout, Krystal L.; Nieman, David C.

    2012-01-01

    Background: The purpose of the present study was to examine the effects of quercetin supplementation on neurocognitive functioning. Methods: A large community sample (n = 941) completed a 12-week supplementation protocol, and participants were randomly assigned to receive 500 mg/day or 1000 mg/day quercetin, or placebo. Results: Results failed to indicate significant effects of quercetin on memory, psychomotor speed, reaction time, attention, or cognitive flexibility, despite large increases in plasma quercetin levels among the quercetin treatment groups. Discussion: Consistent with recent research, this study raises concerns regarding the generalizability of positive findings of in vitro and animal quercetin research, and provides evidence that quercetin may not have an ergogenic effect on neurocognitive functioning in humans. PMID:23983966

  16. Enhancing positive parent-child interactions and family functioning in a poverty sample: a randomized control trial.

    PubMed

    Negrão, Mariana; Pereira, Mariana; Soares, Isabel; Mesman, Judi

    2014-01-01

    This study tested the attachment-based intervention program Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a randomized controlled trial with poor families of toddlers screened for professionals' concerns about the child's caregiving environment. The VIPP-SD is an evidence-based intervention, but has not yet been tested in the context of poverty. The sample included 43 families with 1- to 4-year-old children: mean age at the pretest was 29 months and 51% were boys. At the pretest and posttest, mother-child interactions were observed at home, and mothers reported on family functioning. The VIPP-SD proved to be effective in enhancing positive parent-child interactions and positive family relations in a severely deprived context. Results are discussed in terms of implications for support services provided to such poor families in order to reduce intergenerational risk transmission. PMID:24972101

  17. Comparison of risk-based versus random sampling in the monitoring of antimicrobial residues in Danish finishing pigs.

    PubMed

    Alban, Lis; Rugbjerg, Helene; Petersen, Jesper Valentin; Nielsen, Liza Rosenbaum

    2016-06-01

    more residue cases with higher cost-effectiveness than random monitoring. Sampling 7500 high-risk (HR) pigs and 5000 low-risk (LR) pigs resulted in the most cost-effective monitoring among the alternative scenarios. The associated costs would increase by 4%. A scenario involving testing of 5000 HR and 5000 LR animals would result in slightly fewer positives, but 17% savings in costs. The advantages of using HPLC LC-MS/MS compared to the bioassay are a fast response and a high sensitivity for all relevant substances used in pigs. The Danish abattoir companies have implemented risk-based monitoring along these lines as of January 2016. PMID:27237394

  18. A Random Sample

    ERIC Educational Resources Information Center

    Cochran, Wendell

    1976-01-01

    Presented is a review of papers presented at the 25th International Geological Congress held August 16-25, 1976, in Sydney, Australia. Topics include Precambrian geology, tectonics, biostratigraphy, geochemistry, Quaternary geology, engineering geology, planetology, geological education, and stress environments. (SL)

  19. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
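
    The explore-then-archive workflow can be conveyed with a deliberately simplified evolution strategy. The sketch below keeps only a diagonal spread, with no covariance adaptation or step-size control, so it is a didactic stand-in for CMAES rather than the algorithm itself:

        import numpy as np

        def simple_es(misfit, x0, sigma0, n_pop=20, n_keep=5, n_iter=100,
                      threshold=1.0, seed=0):
            """(mu, lambda)-style search that archives every sampled model
            whose misfit falls below a threshold -- a crude 'equivalence
            domain' ensemble."""
            rng = np.random.default_rng(seed)
            mean = np.asarray(x0, float)
            sigma = np.full(mean.size, sigma0, float)
            ensemble = []
            for _ in range(n_iter):
                pop = mean + sigma * rng.standard_normal((n_pop, mean.size))
                f = np.array([misfit(p) for p in pop])
                ensemble.extend(pop[f < threshold])
                best = pop[np.argsort(f)[:n_keep]]
                mean = best.mean(axis=0)
                sigma = best.std(axis=0) + 1e-12   # avoid total collapse
            return mean, np.array(ensemble)

        # Toy 2-parameter misfit with an elongated low-misfit valley:
        misfit = lambda m: (m[0] - 1.0)**2 + 0.05 * (m[1] + 2.0)**2
        best, eq_models = simple_es(misfit, x0=[0.0, 0.0], sigma0=2.0)
        print(best, len(eq_models))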

  20. 454 Pyrosequencing Analysis on Faecal Samples from a Randomized DBPC Trial of Colicky Infants Treated with Lactobacillus reuteri DSM 17938

    PubMed Central

    Roos, Stefan; Dicksved, Johan; Tarasco, Valentina; Locatelli, Emanuela; Ricceri, Fulvio; Grandin, Ulf; Savino, Francesco

    2013-01-01

    Objective To analyze the global microbial composition, using large-scale DNA sequencing of 16S rRNA genes, in faecal samples from colicky infants given L. reuteri DSM 17938 or placebo. Methods Twenty-nine colicky infants (age 10–60 days) were enrolled and randomly assigned to receive either Lactobacillus reuteri (10^8 cfu) or a placebo once daily for 21 days. Responders were defined as subjects with a decrease of 50% in daily crying time at day 21 compared with the starting point. The microbiota of faecal samples from day 1 and 21 were analyzed using 454 pyrosequencing. The primers Bakt_341F and Bakt_805R, complemented with 454 adapters and sample-specific barcodes, were used for PCR amplification of the 16S rRNA genes. The structure of the data was explored by using permutational multivariate analysis of variance and effects of different variables were visualized with ordination analysis. Results The infants’ faecal microbiota were composed of Proteobacteria, Firmicutes, Actinobacteria and Bacteroidetes as the four main phyla. The composition of the microbiota in infants with colic had very high inter-individual variability with Firmicutes/Bacteroidetes ratios varying from 4000 to 0.025. On an individual basis, the microbiota was, however, relatively stable over time. Treatment with L. reuteri DSM 17938 did not change the global composition of the microbiota, but when comparing responders with non-responders, the responder group had an increased relative abundance of the phylum Bacteroidetes and genus Bacteroides at day 21 compared with day 0. Furthermore, the phyla composition of the infants at day 21 could be divided into three enterotype groups, dominated by Firmicutes, Bacteroidetes, and Actinobacteria, respectively. Conclusion L. reuteri DSM 17938 did not affect the global composition of the microbiota. However, the increase of Bacteroidetes in the responder infants indicated that a decrease in colicky symptoms was linked to changes of the microbiota
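
    A simplified permutation test conveys the logic behind the permutational multivariate analysis used here: compare the observed between-group dissimilarity with its distribution under random relabelling of samples. The count table below is simulated, and a plain mean Bray-Curtis distance replaces the PERMANOVA pseudo-F:

        import numpy as np

        def bray_curtis(a, b):
            return np.abs(a - b).sum() / (a + b).sum()

        def permutation_test(counts, labels, n_perm=999, seed=0):
            """One-sided p-value for 'between-group dissimilarity is larger
            than expected by chance'."""
            rng = np.random.default_rng(seed)
            n = len(labels)
            D = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
                          for i in range(n)])

            def between_mean(lab):
                return D[lab[:, None] != lab[None, :]].mean()

            labels = np.asarray(labels)
            obs = between_mean(labels)
            perms = np.array([between_mean(rng.permutation(labels))
                              for _ in range(n_perm)])
            return obs, (1 + np.sum(perms >= obs)) / (n_perm + 1)

        # Simulated genus-level counts: 10 responders, 10 non-responders.
        rng = np.random.default_rng(6)
        counts = rng.poisson(50, size=(20, 30)).astype(float)
        counts[:10, 0] *= 3            # responders enriched in one taxon
        labels = np.array([1] * 10 + [0] * 10)
        print(permutation_test(counts, labels))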

  1. Rationale, design, methodology and sample characteristics for the Vietnam pre-conceptual micronutrient supplementation trial (PRECONCEPT): a randomized controlled study

    PubMed Central

    2012-01-01

    Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned given many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The PRECONCEPT trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of study design, methodology and sample characteristics from baseline survey data and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either (1) 2800 μg folic acid, (2) 60 mg iron and 2800 μg folic acid, or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and children and improve maternal

  2. The Association between Childhood and Adolescent Sexual Abuse and Proxies for Sexual Risk Behavior: A Random Sample of the General Population of Sweden

    ERIC Educational Resources Information Center

    Steel, Jennifer L.; Herlitz, Claes A.

    2005-01-01

    Objective: Several studies with small and "high-risk" samples have demonstrated that a history of childhood or adolescent sexual abuse (CASA) is associated with sexual risk behaviors (SRBs). However, few studies with large random samples from the general population have specifically examined the relationship between CASA and SRBs with a…

  3. MSurvPow: a FORTRAN program to calculate the sample size and power for cluster-randomized clinical trials with survival outcomes.

    PubMed

    Gao, Feng; Manatunga, Amita K; Chen, Shande

    2005-04-01

    Manatunga and Chen [A.K. Manatunga, S. Chen, Sample size estimation for survival outcomes in cluster-randomized studies with small cluster sizes, Biometrics 56 (2000) 616-621] proposed a method to estimate sample size and power for cluster-randomized studies where the primary outcome variable was survival time. The sample size formula was constructed by considering a bivariate marginal distribution (Clayton-Oakes model) with univariate exponential marginal distributions. In this paper, a user-friendly FORTRAN 90 program was provided to implement this method and a simple example was used to illustrate the features of the program.
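
    The program implements the Manatunga-Chen formula for correlated survival outcomes; the generic principle behind all cluster-randomized sample sizes, inflation by a design effect of 1 + (m − 1)·ICC, can be sketched independently (numbers are illustrative):

        import math

        def cluster_adjusted_n(n_individual, cluster_size, icc):
            """Inflate an individually-randomized sample size by the standard
            cluster-randomization design effect, deff = 1 + (m - 1) * ICC."""
            deff = 1 + (cluster_size - 1) * icc
            return math.ceil(n_individual * deff)

        # Hypothetical: 194 subjects under individual randomization,
        # clusters of size 10, intracluster correlation 0.05 -> deff = 1.45.
        print(cluster_adjusted_n(194, 10, 0.05))   # 282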

  4. A literature review on the representativeness of randomized controlled trial samples and implications for the external validity of trial results.

    PubMed

    Kennedy-Martin, Tessa; Curtis, Sarah; Faries, Douglas; Robinson, Susan; Johnston, Joseph

    2015-01-01

    Randomized controlled trials (RCTs) are conducted under idealized and rigorously controlled conditions that may compromise their external validity. A literature review was conducted of published English language articles that reported the findings of studies assessing external validity by a comparison of the patient sample included in RCTs reporting on pharmaceutical interventions with patients from everyday clinical practice. The review focused on publications in the fields of cardiology, mental health, and oncology. A range of databases were interrogated (MEDLINE; EMBASE; Science Citation Index; Cochrane Methodology Register). Double-abstract review and data extraction were performed as per protocol specifications. Out of 5,456 de-duplicated abstracts, 52 studies met the inclusion criteria (cardiology, n = 20; mental health, n = 17; oncology, n = 15). Studies either performed an analysis of the baseline characteristics (demographic, socioeconomic, and clinical parameters) of RCT-enrolled patients compared with a real-world population, or assessed the proportion of real-world patients who would have been eligible for RCT inclusion following the application of RCT inclusion/exclusion criteria. Many of the included studies concluded that RCT samples are highly selected and have a lower risk profile than real-world populations, with the frequent exclusion of elderly patients and patients with co-morbidities. Calculation of ineligibility rates in individual studies showed that a high proportion of the general disease population was often excluded from trials. The majority of studies (n = 37 [71.2 %]) explicitly concluded that RCT samples were not broadly representative of real-world patients and that this may limit the external validity of the RCT. Authors made a number of recommendations to improve external validity. Findings from this review indicate that there is a need to improve the external validity of RCTs such that physicians treating patients

  5. A cross-sectional, randomized cluster sample survey of household vulnerability to extreme heat among slum dwellers in Ahmedabad, India.

    PubMed

    Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-06-18

    Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
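
    A logistic regression fitted by generalized estimating equations with an exchangeable working correlation, as described, can be set up with statsmodels; the data frame below is simulated and the variable names are invented for illustration:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 300
        df = pd.DataFrame({
            "cluster": rng.integers(0, 30, n),       # survey cluster
            "age": rng.integers(18, 80, n),
            "works_outdoors": rng.integers(0, 2, n),
        })
        logit = -4 + 0.03 * df["age"] + 0.8 * df["works_outdoors"]
        df["heat_illness"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(df[["age", "works_outdoors"]])
        model = sm.GEE(df["heat_illness"], X, groups=df["cluster"],
                       family=sm.families.Binomial(),
                       cov_struct=sm.cov_struct.Exchangeable())
        print(model.fit().summary())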

  6. Diagnosis and treatment of presumed STIs at Mexican pharmacies: survey results from a random sample of Mexico City pharmacy attendants

    PubMed Central

    Turner, A; Ellertson, C; Thomas, S; Garcia, S

    2003-01-01

    Objectives: People in developing countries often seek medical advice for common ailments from pharmacies. As one example, pharmacists routinely diagnose and treat symptomatic sexually transmitted infections (STIs). We aimed to assess the quality of advice provided in Mexico City pharmacies by presenting hypothetical STI related syndromes and recording pharmacy attendants' suggested diagnoses and treatments. Methods: We interviewed the first available attendant in each of a 5% random sample of Mexico City's pharmacies. We inquired about the training, age, and experience of the attendant and about the typical number of clients coming for treatment of suspected STIs. After considering three hypothetical case studies, attendants recommended diagnoses, treatments, and, sometimes, physician follow up. Results: Most Mexico City "pharmacists" are actually clerks, with trained pharmacists rarely available on the premises. The average pharmacy attendant was 32 years old, with a median of 5 years' experience at that pharmacy, but very limited (if any) training. 62% reported seeing 10 or more clients with genital or vaginal infections per month. Depending on the case study, attendants provided appropriate diagnoses in 0–12% of cases, recommended appropriate treatments in 12–16% of cases, and suggested physician follow up for 26–67% of cases. Conclusions: In general, surveyed pharmacy personnel were unable to diagnose accurately or offer appropriate treatment advice when presented with classic, common STI symptoms. Given the volume of clients seeking advice from this source, training pharmacy attendants could significantly help to reduce the burden of disease associated with STIs in Mexico City. PMID:12794207

  7. Optimal selection of sib pairs from random samples for linkage analysis of a QTL using the EDAC test.

    PubMed

    Dolan, C V; Boomsma, D I

    1998-05-01

    Percentages of extremely concordant and extremely discordant sib pairs are calculated that maximize the power to detect a quantitative trait locus (QTL) under a variety of circumstances using the EDAC test. We assume a large fixed number of randomly sampled sib pairs, such as one would hope to find in the large twin registries, and limited resources to genotype a certain number of selected sib pairs. Our aim is to investigate whether optimal selection can be achieved when prior knowledge concerning the QTL gene action, QTL allele frequency, QTL effect size, and background (residual) sib correlation is limited or absent. To this end we calculate the best selection percentages for a large number of models, which differ in QTL gene action, allele frequency, background correlation, and QTL effect size. By averaging these percentages over gene action, over allele frequency, and over both gene action and allele frequencies, we arrive at general recommendations concerning selection percentages. The soundness of these recommendations is subsequently tested in a number of test cases. PMID:9670595

  8. Mental health impact of the 2010 Haiti earthquake on the Miami Haitian population: A random-sample survey

    PubMed Central

    Messiah, Antoine; Acuna, Juan M; Castro, Grettel; de la Vega, Pura Rodríguez; Vaiva, Guillaume; Shultz, James; Neria, Yuval; De La Rosa, Mario

    2015-01-01

    This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2–3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of posttraumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as “direct” if the interviewee was in Haiti during the earthquake. Exposure was classified as “indirect” if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as “lower” exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake. PMID:26753105

  9. Rationale, design, methodology and sample characteristics for the family partners for health study: a cluster randomized controlled study

    PubMed Central

    2012-01-01

    Background Young children who are overweight are at increased risk of becoming obese and developing type 2 diabetes and cardiovascular disease later in life. Therefore, early intervention is critical. This paper describes the rationale, design, methodology, and sample characteristics of a 5-year cluster randomized controlled trial being conducted in eight elementary schools in rural North Carolina, United States. Methods/Design The first aim of the trial is to examine the effects of a two-phased intervention on weight status, adiposity, nutrition and exercise health behaviors, and self-efficacy in overweight or obese 2nd, 3rd, and 4th grade children and their overweight or obese parents. The primary outcome in children is stabilization of BMI percentile trajectory from baseline to 18 months. The primary outcome in parents is a decrease in BMI from baseline to 18 months. Secondary outcomes for both children and parents include adiposity, nutrition and exercise health behaviors, and self-efficacy from baseline to 18 months. A secondary aim of the trial is to examine, in the experimental group, the relationships between parents and children's changes in weight status, adiposity, nutrition and exercise health behaviors, and self-efficacy. An exploratory aim is to determine whether African American, Hispanic, and non-Hispanic white children and parents in the experimental group benefit differently from the intervention in weight status, adiposity, health behaviors, and self-efficacy. A total of 358 African American, non-Hispanic white, and bilingual Hispanic children with a BMI ≥ 85th percentile and 358 parents with a BMI ≥ 25 kg/m2 have been inducted over 3 1/2 years and randomized by cohort to either an experimental or a wait-listed control group. The experimental group receives a 12-week intensive intervention of nutrition and exercise education, coping skills training and exercise (Phase I), 9 months of continued monthly contact (Phase II) and then 6 months

  10. Tobacco Smoking Surveillance: Is Quota Sampling an Efficient Tool for Monitoring National Trends? A Comparison with a Random Cross-Sectional Survey

    PubMed Central

    Guignard, Romain; Wilquin, Jean-Louis; Richard, Jean-Baptiste; Beck, François

    2013-01-01

    Objectives It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. Design / Outcome Measures In 2010, current and daily tobacco smoking prevalences obtained in a quota survey on 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs “mobile-only”), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the needed number of calls to interview “hard-to-reach” people on the prevalence found. Results Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% among 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to the smoking status after adjustment for sociodemographic characteristics. Conclusion Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations. PMID:24194924

  11. A therapeutic application of the experience sampling method in the treatment of depression: a randomized controlled trial.

    PubMed

    Kramer, Ingrid; Simons, Claudia J P; Hartmann, Jessica A; Menne-Lothmann, Claudia; Viechtbauer, Wolfgang; Peeters, Frenk; Schruers, Koen; van Bemmel, Alex L; Myin-Germeys, Inez; Delespaul, Philippe; van Os, Jim; Wichers, Marieke

    2014-02-01

    In depression, the ability to experience daily life positive affect predicts recovery and reduces relapse rates. Interventions based on the experience sampling method (ESM-I) are ideally suited to provide insight in personal, contextualized patterns of positive affect. The aim of this study was to examine whether add-on ESM-derived feedback on personalized patterns of positive affect is feasible and useful to patients, and results in a reduction of depressive symptomatology. Depressed outpatients (n=102) receiving pharmacological treatment participated in a randomized controlled trial with three arms: an experimental group receiving add-on ESM-derived feedback, a pseudo-experimental group participating in ESM but receiving no feedback, and a control group. The experimental group participated in an ESM procedure (three days per week over a 6-week period) using a palmtop. This group received weekly standardized feedback on personalized patterns of positive affect. Hamilton Depression Rating Scale - 17 (HDRS) and Inventory of Depressive Symptoms (IDS) scores were obtained before and after the intervention. During a 6-month follow-up period, five HDRS and IDS assessments were completed. Add-on ESM-derived feedback resulted in a significant and clinically relevant stronger decrease in HDRS score relative to the control group (p<0.01; -5.5 point reduction in HDRS at 6 months). Compared to the pseudo-experimental group, a clinically relevant decrease in HDRS score was apparent at 6 months (B=-3.6, p=0.053). Self-reported depressive complaints (IDS) yielded the same pattern over time. The use of ESM-I was deemed acceptable and the provided feedback easy to understand. Patients attempted to apply suggestions from ESM-derived feedback to daily life. These data suggest that the efficacy of traditional passive pharmacological approach to treatment of major depression can be enhanced by using person-tailored daily life information regarding positive affect. PMID:24497255

  12. Chronic neck and shoulder pain, age, and working conditions: longitudinal results from a large random sample in France

    PubMed Central

    Cassou, B; Derriennic, F; Monfort, C; Norton, J; Touranchet, A

    2002-01-01

    Aims: To analyse the effects of age and occupational factors on both the incidence and the disappearance of chronic neck and shoulder pain after a five year follow up period. Methods: A prospective longitudinal investigation (ESTEV) was carried out in 1990 and 1995 in seven regions of France. A random sample of male and female workers born in 1938, 1943, 1948, and 1953 was selected from the occupational physicians' files. In 1990, 21 378 subjects were interviewed (88% of those contacted), and 87% were interviewed again in 1995. Chronic neck and shoulder pain satisfying specific criteria, and psychosocial working conditions were investigated by a structured self administered questionnaire and a clinical examination. Results: Prevalence (men 7.8%, women 14.8% in 1990) and incidence (men 7.3%, women 12.5% for the period 1990–95) of chronic neck and shoulder pain increased with age, and were more frequent among women than men in every birth cohort. The disappearance rate of chronic neck and shoulder pain decreased with age. Some adverse working conditions (repetitive work under time constraints, awkward work for men, repetitive work for women) contributed to the development of these disorders, independently of age. Psychosocial factors seemed to play a role in both the development and disappearance of chronic neck and shoulder pain. Data did not show specific interactions between age and working conditions. Conclusions: The aging of the workforce appears to contribute to the widespread concern about chronic neck and shoulder pain. A better understanding of work activity regulation of older workers can open up new preventive prospects. PMID:12151610

  13. Dietary patterns and their associations with demographic, lifestyle and health variables in a random sample of British adults.

    PubMed

    Whichelow, M J; Prevost, A T

    1996-07-01

    The present study aimed to identify dietary patterns, from the frequency of consumption of food items and some semi-quantitative data, in a random sample of 9003 British adults, and to examine the associations of the main dietary patterns with demographic factors, lifestyle habits, measures of self-reported health, and mortality. Principal component analysis was used to identify four main dietary patterns, and analysis of variance was employed to examine the characteristics associated with them. The four components explained, respectively, 10.2, 7.3, 5.1 and 4.9% of the total dietary variation. Component 1, frequent fruit, salad and vegetable consumption with infrequent consumption of high-fat foods, was associated with middle age, non-manual socio-economic groups, non- and ex-smokers, 'sensible' drinkers, small households, the south of the country, and self-assessed 'excellent' or 'good' health. Component 2, frequent consumption of high-starch foods, most vegetables and meat, was popular with young men, older men and women, large households, non-smokers, non-drinkers and those who viewed themselves as healthy. Component 3, frequent consumption of high-fat foods, was predominant among young people, smoking women, 'high-risk' drinkers, and men reporting many illness and/or malaise symptoms. Component 4, high positive loadings for sweets, biscuits and cakes, with negative loadings for vegetables, was most favoured by students, the elderly, those living alone, residents of Scotland (but not of central England), and those who did not smoke. For women, only the first component was associated with low all-cause mortality, and the third component with excess mortality.
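
    A minimal sketch of the pattern-extraction step described above, using principal component analysis on a food-frequency matrix. The matrix, item count, and 0-6 scoring scale here are hypothetical stand-ins, not the study's data.

```python
# Hypothetical food-frequency matrix: rows = respondents, columns =
# consumption-frequency scores for each food item (0-6 scale assumed).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.integers(0, 7, size=(9003, 30)).astype(float)

# Standardize so items with wider frequency ranges do not dominate.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=4)
scores = pca.fit_transform(X_std)     # per-respondent pattern scores
print(pca.explained_variance_ratio_)  # the paper reports 10.2/7.3/5.1/4.9%
loadings = pca.components_            # which foods define each pattern
```

    The per-respondent component scores can then be carried into an analysis of variance against demographic and lifestyle groups, as the abstract describes.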

  14. Chinese My Trauma Recovery, A Web-Based Intervention for Traumatized Persons in Two Parallel Samples: Randomized Controlled Trial

    PubMed Central

    Wang, Zhiyun; Maercker, Andreas

    2013-01-01

    Background Guided self-help interventions for PTSD (post-traumatic stress disorder) are a promising tool for the dissemination of contemporary psychological treatment. Objective This study investigated the efficacy of the Chinese version of the My Trauma Recovery (CMTR) website. Methods In an urban context, 90 survivors of different trauma types were recruited via Internet advertisements and allocated to a randomized controlled trial (RCT) with a waiting list control condition. In addition, in a rural context, 93 survivors mainly of the 2008 Sichuan earthquake were recruited in-person for a parallel RCT in which the website intervention was conducted in a counseling center and guided by volunteers. Assessment was completed online on a professional Chinese survey website. The primary outcome measure was the Post-traumatic Diagnostic Scale (PDS); secondary outcome measures were Symptom Checklist 90-Depression (SCL-D), Trauma Coping Self-Efficacy Scale (CSE), Post-traumatic Cognitive Changes (PCC), and Social Functioning Impairment (SFI) questionnaires adopted from the My Trauma Recovery website. Results For the urban sample, findings indicated a significant group×time interaction in post-traumatic symptom severity (F(1,88)=7.65, P=.007). CMTR reduced post-traumatic symptoms significantly with high effect size after one month of treatment (F(1,45)=15.13, Cohen’s d=0.81, P<.001) and the reduction was sustained over a 3-month follow-up (F(1,45)=17.29, Cohen’s d=0.87, P<.001). In the rural sample, the group×time interaction was also significant in post-traumatic symptom severity (F(1,91)=5.35, P=.02). Post-traumatic symptoms decreased significantly after treatment (F(1,48)=43.97, Cohen’s d=1.34, P<.001) and during the follow-up period (F(1,48)=24.22, Cohen’s d=0.99, P<.001). Additional outcome measures (post-traumatic cognitive changes, depression) indicated a range of positive effects, in particular in the urban sample (group×time interactions: F 1

  15. From Planning to Implementation: An Examination of Changes in the Research Design, Sample Size, and Precision of Group Randomized Trials Launched by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica

    2013-01-01

    This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…

  16. Cranial Capacity Related to Sex, Rank, and Race in a Stratified Random Sample of 6,325 U.S. Military Personnel.

    ERIC Educational Resources Information Center

    Rushton, J. Philippe

    1992-01-01

    Cranial capacities were calculated from external head measurements reported for a stratified random sample of 6,325 Army personnel measured in 1988. Data suggest that human populations differ in brain size by race and sex. The major source of variation in data was sex; race was second and rank last. (Author/SLD)

  17. Feeding patterns and dietary intake in a random sample of a Swedish population of insured-dogs.

    PubMed

    Sallander, Marie; Hedhammar, Ake; Rundgren, Margareta; Lindberg, Jan E

    2010-07-01

    We used a validated mail and telephone questionnaire to investigate baseline data on feeding patterns and dietary intake in a random sample of 460 Swedish dogs. In 1999, purebred individuals 1-3 years old from the largest insurance database in Sweden completed the study. Most dogs were fed restricted amounts twice a day, and the feeding patterns were seldom changed after the age of 6 months. Typically, the main constituent of the meals was dry food [representing 69% of dry matter (DM)]. Four out of five dogs also received foods that (in descending order of the amount of energy provided) consisted of vegetable oil, meat, sour milk, bread, potatoes, pasta, lard/tallow, sausage, cheese, rice and fish. The heavier the dog (kg), the more dry dog food was consumed (g DM/d). Dry-food intake (g DM/d) increased linearly with body weight (BW, in kg): intake = -15.3 + 8.33 BW (P=0.0001; r=0.998), a clear relationship that was not observed for other commercial foods. The non-commercial part of the diet was higher in fat (13 vs. 8 g/megajoule, MJ; P=0.00001) and lower in protein (12 vs. 16 g/MJ; P=0.00001) compared to the commercial part of the diet. Six out of ten dogs were given treats, and one-fourth were given vitamin/mineral supplements (most commonly daily). Most dogs consumed diets that were nutritionally balanced. No dogs in the study consumed diets that supplied lower amounts of protein than recommended by the NRC (2006). Only two individuals (<1%) were given total diets below the recommended nutrient profile for fat. Few dogs consumed total diets that were lower than recommended by the NRC (2006) in calcium, phosphorus, and vitamins A, D and E (2, 1, 3, 5, and 3% of the individuals, respectively). A few individuals consumed higher levels of vitamins A and D (<1 and 4%, respectively) than recommended. Diets that deviated from recommended levels were those consisting of only table foods with no supplements (too low in vitamins and minerals) or
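
    The reported intake–body-weight relationship can be illustrated with an ordinary least-squares fit. A minimal sketch; the data points below are synthetic, generated around the published equation, not the study's measurements.

```python
# Fit a line to synthetic body weight vs. dry-food intake data and
# recover the published form intake = -15.3 + 8.33*BW.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bw = rng.uniform(5, 60, size=200)                    # body weight, kg
intake = -15.3 + 8.33 * bw + rng.normal(0, 5, 200)   # g DM/d, with noise

fit = stats.linregress(bw, intake)
print(f"intake = {fit.intercept:.1f} + {fit.slope:.2f}*BW, r = {fit.rvalue:.3f}")
```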

  18. Changes in prevalence of, and risk factors for, lameness in random samples of English sheep flocks: 2004-2013.

    PubMed

    Winter, Joanne R; Kaler, Jasmeet; Ferguson, Eamonn; KilBride, Amy L; Green, Laura E

    2015-11-01

    The aims of this study were to update the prevalence of lameness in sheep in England and to identify novel risk factors. A total of 1260 sheep farmers responded to a postal survey. The survey captured detailed information on the period prevalence of lameness from May 2012-April 2013 and the prevalence and farmer naming of lesions attributable to interdigital dermatitis (ID), severe footrot (SFR), contagious ovine digital dermatitis (CODD) and shelly hoof (SH), management and treatment of lameness, and farm and flock details. Between 2004 and 2013, the global mean period prevalence of lameness fell from 10.6% to 4.9% and the geometric mean period prevalence of lameness fell from 5.4% (95% CI: 4.7%-6.0%) to 3.5% (95% CI: 3.3%-3.7%). In 2013, more farmers were using vaccination and antibiotic treatment for ID and SFR, and fewer farmers were using foot trimming as a routine or therapeutic treatment, than in 2004. Two over-dispersed Poisson regression models were developed with the period prevalence of lameness as the outcome: one investigated associations with farmer estimates of the prevalence of the four foot lesions, and one investigated associations with management practices to control and treat lameness and footrot. Prevalences of ID>10%, SFR>2.5% and CODD>2.5% were associated with a higher prevalence of lameness compared with those lesions being absent; the prevalence of SH, however, was not associated with a change in risk of lameness. A key novel management risk associated with a higher prevalence of lameness was the rate of feet bleeding/100 ewes trimmed/year. In addition, vaccination of ewes once per year and selecting breeding replacements from never-lame ewes were associated with a decreased risk of lameness. Other factors associated with a lower risk of lameness, reported for the first time from a random sample of farmers and a full risk model, were: recognising lameness in sheep at locomotion score 1 compared with higher scores, treatment of the first lame sheep in a group compared

  19. Random versus fixed-site sampling when monitoring relative abundance of fishes in headwater streams of the upper Colorado River basin

    USGS Publications Warehouse

    Quist, M.C.; Gerow, K.G.; Bower, M.R.; Hubert, W.A.

    2006-01-01

    Native fishes of the upper Colorado River basin (UCRB) have declined in distribution and abundance due to habitat degradation and interactions with nonnative fishes. Consequently, monitoring populations of both native and nonnative fishes is important for conservation of native species. We used data collected from Muddy Creek, Wyoming (2003-2004), to compare sample size estimates using a random and a fixed-site sampling design to monitor changes in catch per unit effort (CPUE) of native bluehead suckers Catostomus discobolus, flannelmouth suckers C. latipinnis, roundtail chub Gila robusta, and speckled dace Rhinichthys osculus, as well as nonnative creek chub Semotilus atromaculatus and white suckers C. commersonii. When one-pass backpack electrofishing was used, detection of 10% or 25% changes in CPUE (fish/100 m) at 60% statistical power required 50-1,000 randomly sampled reaches among species, regardless of sampling design. However, use of a fixed-site sampling design with 25-50 reaches greatly enhanced the ability to detect changes in CPUE. The addition of seining did not appreciably reduce the required effort. When detection of 25-50% changes in CPUE of native and nonnative fishes is acceptable, we recommend establishment of 25-50 fixed reaches sampled by one-pass electrofishing in Muddy Creek. Because Muddy Creek has habitat and fish assemblages characteristic of other headwater streams in the UCRB, our results are likely to apply to many other streams in the basin. © Copyright by the American Fisheries Society 2006.
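
    The kind of power analysis behind these sample-size estimates can be sketched by simulation: draw CPUE values for two survey years and count how often a test detects the imposed change. The lognormal CPUE model, its parameters, and the alpha level below are assumptions for illustration, not values from the study.

```python
# Monte Carlo power for detecting a 25% drop in mean CPUE between two
# years of independently sampled reaches (two-sample t-test; alpha
# assumed 0.10, lognormal CPUE with CV = 1 assumed).
import numpy as np
from scipy import stats

def power(n_reaches, change=0.25, mean=20.0, cv=1.0, alpha=0.10, n_sim=2000):
    rng = np.random.default_rng(2)
    sigma = np.sqrt(np.log(1 + cv**2))            # lognormal shape from CV
    mu1 = np.log(mean) - sigma**2 / 2
    mu2 = np.log(mean * (1 - change)) - sigma**2 / 2
    hits = 0
    for _ in range(n_sim):
        y1 = rng.lognormal(mu1, sigma, n_reaches)
        y2 = rng.lognormal(mu2, sigma, n_reaches)
        if stats.ttest_ind(y1, y2).pvalue < alpha:
            hits += 1
    return hits / n_sim

for n in (25, 50, 100):
    print(n, "reaches:", power(n))
```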

  20. Differentiating emotions across contexts: comparing adults with and without social anxiety disorder using random, social interaction, and daily experience sampling.

    PubMed

    Kashdan, Todd B; Farmer, Antonina S

    2014-06-01

    The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or by comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning.
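
    Emotion differentiation in this literature is commonly indexed by an intraclass correlation across emotion items over repeated prompts (a higher ICC means the emotions move together, i.e. lower differentiation). A minimal sketch of one standard formulation, ICC(3,1); the rating matrix is hypothetical, and whether the authors used exactly this variant is an assumption.

```python
# ICC(3,1) (consistency) from the two-way ANOVA decomposition of one
# person's prompts x emotions rating matrix.
import numpy as np

def icc_consistency(x):
    n, k = x.shape                        # n prompts, k emotion items
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(3)
ratings = rng.integers(1, 6, size=(40, 5)).astype(float)  # 40 prompts, 5 negative emotions
print(icc_consistency(ratings))
```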

  2. Effects of media campaign messages targeting parents on adolescent sexual beliefs: a randomized controlled trial with a national sample.

    PubMed

    Palen, Lori-Ann; Ashley, Olivia Silber; Gard, Jennifer C; Kan, Marni L; Davis, Kevin C; Evans, W Douglas

    2011-01-01

    Using a randomized controlled trial, this study evaluated the effects of media messages targeting parents on the sexual beliefs of 404 adolescents. The messages aimed to increase parent-child communication about waiting to initiate sexual activity. Compared with children of unexposed parents, children of parents exposed to media messages were more likely to believe that teen sexual activity is psychologically harmful. However, effects varied by parent and adolescent gender; treatment effects were only significant among adolescents whose opposite-sex parent was exposed. Parent exposure strengthened beliefs that teen sexual activity is physically harmful only among adolescents with at least 1 sexually active friend.

  4. Effects of music therapy on pain responses induced by blood sampling in premature infants: A randomized cross-over trial

    PubMed Central

    Shabani, Fidan; Nayeri, Nahid Dehghan; Karimi, Roghiyeh; Zarei, Khadijeh; Chehrazi, Mohammad

    2016-01-01

    Background: Premature infants are subjected to many painful procedures during care and treatment. The aim of this study was to assess the effect of music therapy on physiological and behavioral pain responses of premature infants during and after blood sampling. Materials and Methods: This study was a cross-over clinical trial conducted on 20 infants in a hospital affiliated to Tehran University of Medical Sciences for a 5-month period in 2011. In the experimental group, Transitions music was played from 5 min before until 10 min after blood sampling. The infants’ facial expressions and physiological measures were recorded from 10 min before until 10 min after sampling. All steps and measurements, except music therapy, were the same for the control group. Data were analyzed using SAS and SPSS software through analysis of variance (ANOVA) and Chi-square tests. Results: There were significant differences between the experimental and control groups in heart rate during needle extraction (P = 0.022) and in the first 5 min after sampling (P = 0.005). The infants’ sleep–wake state differed significantly in the second 5 min before sampling (P = 0.044), during injection of the needle (P = 0.045), in the first 5 min after sampling (P = 0.002), and in the second 5 min after sampling (P = 0.005). There was also a significant difference in infants’ facial expressions of pain in the first 5 min after sampling (P = 0.001). Conclusions: Music therapy reduces the physiological and behavioral responses of pain during and after blood sampling. PMID:27563323

  5. Family constellation seminars improve psychological functioning in a general population sample: results of a randomized controlled trial.

    PubMed

    Weinhold, Jan; Hunger, Christina; Bornhäuser, Annette; Link, Leoni; Rochon, Justine; Wild, Beate; Schweitzer, Jochen

    2013-10-01

    The study examined the efficacy of nonrecurring family constellation seminars on psychological health. We conducted a monocentric, single-blind, stratified, and balanced randomized controlled trial (RCT). After choosing their roles for participating in a family constellation seminar as either active participant (AP) or observing participant (OP), 208 adults (M = 48 years, SD = 10; 79% women) from the general population were randomly allocated to the intervention group (IG; 3-day family constellation seminar; 64 AP, 40 OP) or a wait-list control group (WLG; 64 AP, 40 OP). It was predicted that family constellation seminars would improve psychological functioning (Outcome Questionnaire OQ-45.2) at 2-week and 4-month follow-ups. In addition, we assessed the impact of family constellation seminars on psychological distress and motivational incongruence. The IG showed significantly improved psychological functioning (d = 0.45 at 2-week follow-up, p = .003; d = 0.46 at 4-month follow-up, p = .003). Results were confirmed for psychological distress and motivational incongruence. No adverse events were reported. This RCT provides evidence for the efficacy of family constellation in a nonclinical population. The implications of the findings are discussed.

  6. The Healthy Young Men's Study: Sampling Methods to Recruit a Random Cohort of Young Men Who Have Sex with Men.

    PubMed

    Ford, Wesley L; Weiss, George; Kipke, Michele D; Ritt-Olson, Anamara; Iverson, Ellen; Lopez, Donna

    2009-10-01

    Recruiting a scientifically sound cohort of young men who have sex with men (YMSM) is an enduring research challenge. The few cohort studies that have been conducted to date on YMSM have relied on non-probability sampling methods to construct their cohorts. While these studies have provided valuable information about HIV risk behaviors among YMSM, their generalizability to broader YMSM populations is limited. In this paper the authors describe a venue-based sampling methodology used to recruit a large and diverse cohort of YMSM from public venues in Los Angeles County. Venue-based sampling is a multi-stage, probability sampling design that uses standard outreach techniques and standard survey methods to systematically enumerate, sample, and survey hard-to-reach populations. The study design allowed the authors to estimate individual, familial and interpersonal psychosocial factors associated with HIV risk and health seeking behaviors for a cohort of YMSM with known properties. Study participants completed an extensive baseline survey and over a two-year period will complete four follow-up surveys at six-month intervals. The baseline survey was administered in both English and Spanish.

  7. Open-label randomized trial of titrated disease management for patients with hypertension: Study design and baseline sample characteristics.

    PubMed

    Jackson, George L; Weinberger, Morris; Kirshner, Miriam A; Stechuchak, Karen M; Melnyk, Stephanie D; Bosworth, Hayden B; Coffman, Cynthia J; Neelon, Brian; Van Houtven, Courtney; Gentry, Pamela W; Morris, Isis J; Rose, Cynthia M; Taylor, Jennifer P; May, Carrie L; Han, Byungjoo; Wainwright, Christi; Alkon, Aviel; Powell, Lesa; Edelman, David

    2016-09-01

    Despite the availability of efficacious treatments, only half of patients with hypertension achieve adequate blood pressure (BP) control. This paper describes the protocol and baseline subject characteristics of a 2-arm, 18-month randomized clinical trial of titrated disease management (TDM) for patients with pharmaceutically-treated hypertension for whom systolic blood pressure (SBP) is not controlled (≥140 mmHg for non-diabetic or ≥130 mmHg for diabetic patients). The trial is being conducted among patients of four clinic locations associated with a Veterans Affairs Medical Center. An intervention arm has a TDM strategy in which patients' hypertension control at baseline, 6, and 12 months determines the resource intensity of disease management. Intensity levels include: a low-intensity strategy utilizing a licensed practical nurse to provide bi-monthly, non-tailored behavioral support calls to patients whose SBP comes under control; a medium-intensity strategy utilizing a registered nurse to provide monthly tailored behavioral support telephone calls plus home BP monitoring; and a high-intensity strategy utilizing a pharmacist to provide monthly tailored behavioral support telephone calls, home BP monitoring, and pharmacist-directed medication management. Control arm patients receive the low-intensity strategy regardless of BP control. The primary outcome is SBP. The 385 randomized veterans (192 intervention; 193 control) are predominantly older (mean age 63.5 years) men (92.5%); 61.8% are African American, and the mean baseline SBP for all subjects is 143.6 mmHg. This trial will determine if a disease management program that is titrated by matching the intensity of resources to patients' BP control leads to superior outcomes compared to a low-intensity management strategy.

  8. Random DNA fragmentation allows detection of single-copy, single-exon alterations of copy number by oligonucleotide array CGH in clinical FFPE samples.

    PubMed

    Hostetter, Galen; Kim, Su Young; Savage, Stephanie; Gooden, Gerald C; Barrett, Michael; Zhang, Jian; Alla, Lalitamba; Watanabe, April; Einspahr, Janine; Prasad, Anil; Nickoloff, Brian J; Carpten, John; Trent, Jeffrey; Alberts, David; Bittner, Michael

    2010-01-01

    Genomic technologies, such as array comparative genomic hybridization (aCGH), increasingly offer definitive gene dosage profiles in clinical samples. Historically, copy number profiling was limited to large fresh-frozen tumors from which intact DNA could be readily extracted. Genomic analyses of pre-neoplastic tumors and diagnostic biopsies are often limited to DNA processed by formalin fixation and paraffin embedding (FFPE). We present specialized protocols for DNA extraction and processing from FFPE tissues utilizing DNase processing to generate randomly fragmented DNA. The protocols are applied to FFPE clinical samples of varied tumor types, from multiple institutions and of varied block age. Direct comparative analyses, with regression coefficients, were calculated on split samples (portion fresh/portion FFPE) of colorectal tumors. We show equal detection of a homozygous loss of SMAD4 at the exon level in the SW480 cell line and gene-specific alterations in the split tumor samples. aCGH application to a set of archival FFPE samples of skin squamous cell carcinomas detected a novel hemizygous deletion in INPP5A on 10q26.3. Finally, we present data on the derivative of log ratio, a particularly sensitive detector of measurement variance, for 216 sequential hybridizations to assess protocol reliability over a wide range of FFPE samples.
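
    The "derivative of log ratio" metric mentioned at the end is commonly computed as the spread of probe-to-probe differences of the log2 ratios, scaled by √2 so that it reads as per-probe noise. A sketch of that common formulation (dLRs); whether it matches the authors' exact computation is an assumption, and the probe values are synthetic.

```python
# dLRs: robust spread (IQR scaled to a standard deviation) of the
# differences between consecutive-probe log2 ratios, divided by sqrt(2).
import numpy as np

def dlrs(log2_ratios):
    diffs = np.diff(log2_ratios)
    iqr = np.subtract(*np.percentile(diffs, [75, 25]))
    return iqr / (1.349 * np.sqrt(2))   # 1.349 converts IQR -> std. dev.

rng = np.random.default_rng(4)
probes = rng.normal(0.0, 0.15, size=50_000)  # flat, copy-neutral profile
print(dlrs(probes))                          # ~0.15 for this noise level
```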

  9. Sample size calculations for intervention trials in primary care randomizing by primary care group: an empirical illustration from one proposed intervention trial.

    PubMed

    Eldridge, S; Cryer, C; Feder, G; Underwood, M

    2001-02-15

    Because of the central role of the general practice in the delivery of British primary care, intervention trials in primary care often use the practice as the unit of randomization. The creation of primary care groups (PCGs) in April 1999 changed the organization of primary care and the commissioning of secondary care services. PCGs will directly affect the organization and delivery of primary, secondary and social care services. The PCG therefore becomes an appropriate target for organizational and educational interventions. Trials testing these interventions should involve randomization by PCG. This paper discusses the sample size required for a trial in primary care assessing the effect of a falls prevention programme among older people. In this trial PCGs will be randomized. The sample size calculations involve estimating intra-PCG correlation in the primary outcome: fractured femur rate for those aged 65 years and over. No data on fractured femur rate were available at PCG level. PCGs are, however, similar in size and often coterminous with local authorities. Therefore, intra-PCG correlation in fractured femur rate was estimated from the intra-local authority correlation calculated from routine data. Three alternative trial designs are considered. In the first design, PCGs are selected for inclusion in the trial from the total population of England (eight regions). In the second design, PCGs are selected from two regions only. The third design is similar to the second except that PCGs are stratified by region and baseline value of fracture rate. Intracluster correlation is estimated for each of these designs using two methods: an approximation which assumes cluster sizes are equal, and an alternative method which takes account of the fact that cluster sizes vary. Estimates of the sample size required vary between 7 and 26 PCGs in each intervention group, depending on the trial design and the method used to calculate sample size. Not unexpectedly, stratification by baseline
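
    The equal-cluster-size approximation the abstract mentions amounts to inflating a standard two-sample calculation by the design effect 1 + (m − 1)·ICC and dividing by the cluster size m. A minimal sketch; all numeric inputs below are illustrative, not the trial's values.

```python
# Clusters (PCGs) per arm for a two-arm comparison, equal-cluster-size
# approximation: individual-level n inflated by the design effect.
import math
from scipy import stats

def clusters_per_arm(delta, sd, m, icc, alpha=0.05, power=0.8):
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    n_per_arm = 2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2  # individuals
    deff = 1 + (m - 1) * icc           # variance inflation from clustering
    return math.ceil(n_per_arm * deff / m)

# Illustrative: detect a 0.25-SD difference with 100 subjects per
# cluster and an intracluster correlation of 0.05.
print(clusters_per_arm(delta=0.25, sd=1.0, m=100, icc=0.05))  # -> 15
```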

  10. Covariate adjustment for two-sample treatment comparisons in randomized clinical trials: a principled yet flexible approach.

    PubMed

    Tsiatis, Anastasios A; Davidian, Marie; Zhang, Min; Lu, Xiaomin

    2008-10-15

    There is considerable debate regarding whether and how covariate-adjusted analyses should be used in the comparison of treatments in randomized clinical trials. Substantial baseline covariate information is routinely collected in such trials, and one goal of adjustment is to exploit covariates associated with outcome to increase precision of estimation of the treatment effect. However, concerns are routinely raised over the potential for bias when the covariates used are selected post hoc, and the potential for adjustment based on a model of the relationship between outcome, covariates, and treatment to invite a 'fishing expedition' for the model leading to the most dramatic effect estimate. By appealing to the theory of semiparametrics, we are led naturally to a characterization of all treatment effect estimators and to principled, practically feasible methods for covariate adjustment that yield the desired gains in efficiency and that allow covariate relationships to be identified and exploited while circumventing the usual concerns. The methods and strategies for their implementation in practice are presented. Simulation studies and an application to data from an HIV clinical trial demonstrate the performance of the techniques relative to existing methods. PMID:17960577

  11. A novel approach to non-biased systematic random sampling: a stereologic estimate of Purkinje cells in the human cerebellum.

    PubMed

    Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P

    2008-10-21

    Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well. PMID:18725208
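
    The extrapolation step is simple arithmetic: a numerical density estimated from the systematically sampled fields times the reference volume gives the total count. The sketch below back-computes the implied volume from the counts and densities reported in the abstract; the volume itself is not stated there.

```python
# Specimen 1 from the abstract: 27.03 million Purkinje cells at a
# density of 266,274 cells/cm^3. total = density * volume, so the
# implied reference volume is total / density.
density = 266_274            # cells/cm^3 (reported)
total = 27.03e6              # cells (reported)
volume = total / density     # cm^3 (back-computed, illustrative)
print(f"implied reference volume ~= {volume:.1f} cm^3")
print(f"check: {density * volume:.3e} cells")
```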

  12. [Position statement. Protein/creatinine in a randomly obtained urine sample in the diagnosis of proteinuria in pregnant patients with arterial hypertension].

    PubMed

    2012-01-01

    Leaños Miranda and collaborators reported that the protein/creatinine ratio measured in a single random urine sample is a reliable indicator of significant proteinuria and may reasonably be used as an alternative to the 24-hour urine collection method, both as a diagnostic criterion for proteinuria and as a criterion for identifying disease severity. This leads us to present this result of the investigation as a position statement on the care of pregnant women with hypertension.

  13. Positive random variables with a discrete probability mass at the origin: Parameter estimation for left-censored samples with application to air quality monitoring data

    SciTech Connect

    Gogolak, C.V.

    1986-11-01

    The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored ⁸⁵Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities.
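
    A hedged numerical sketch of the estimation problem described: a probability mass p at zero plus a lognormal for positive values, with everything below the detection limit L reported only as "<L". The paper derives the estimates via the truncated distribution; here the full likelihood is simply maximized numerically, on synthetic data.

```python
# Delta-lognormal MLE with left censoring at detection limit L.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
L = 0.5                                  # detection limit
vals = rng.choice([0.0, 1.0], size=500, p=[0.3, 0.7]) * rng.lognormal(0.0, 1.0, 500)
detects = vals[vals >= L]                # quantified observations
n_cens = int(np.sum(vals < L))           # zeros and positives < L, indistinguishable

def neg_loglik(theta):
    p, mu, sigma = theta                 # P(zero), lognormal mu and sigma
    if not (0.0 < p < 1.0 and sigma > 0.0):
        return np.inf
    # A censored observation is either a true zero or a positive < L.
    p_below = p + (1.0 - p) * stats.norm.cdf((np.log(L) - mu) / sigma)
    ll = n_cens * np.log(p_below)
    z = (np.log(detects) - mu) / sigma
    ll += np.sum(np.log(1.0 - p) + stats.norm.logpdf(z) - np.log(sigma * detects))
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.3, 0.0, 1.0], method="Nelder-Mead")
print(res.x)   # estimates of (p, mu, sigma); truth here is (0.3, 0.0, 1.0)
```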

  14. Comparison of Address-based Sampling and Random-digit Dialing Methods for Recruiting Young Men as Controls in a Case-Control Study of Testicular Cancer Susceptibility

    PubMed Central

    Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.

    2013-01-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18–55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS. PMID:24008901

  15. A sero-survey of rinderpest in nomadic pastoral systems in central and southern Somalia from 2002 to 2003, using a spatially integrated random sampling approach.

    PubMed

    Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M

    2010-12-01

    A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
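
    The first sampling stage described, selecting primary sampling units by generating random map coordinates, can be sketched as below. Sampling sin(latitude) keeps point density uniform per unit area rather than per degree; the bounding box is an illustrative stand-in, and in practice points outside the study polygon would be rejected and redrawn.

```python
# Draw random map coordinates for candidate sampling sites within a
# bounding box (values illustrative, roughly central/southern Somalia).
import numpy as np

rng = np.random.default_rng(6)
lat_min, lat_max = 0.0, 5.0     # degrees N (assumed)
lon_min, lon_max = 41.0, 48.0   # degrees E (assumed)

n_sites = 562
sin_lat = rng.uniform(np.sin(np.radians(lat_min)),
                      np.sin(np.radians(lat_max)), n_sites)
lats = np.degrees(np.arcsin(sin_lat))           # equal-area in latitude
lons = rng.uniform(lon_min, lon_max, n_sites)
sites = np.column_stack([lats, lons])           # (lat, lon) pairs
print(sites[:3])
```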

  16. What's in a name? The challenge of describing interventions in systematic reviews: analysis of a random sample of reviews of non-pharmacological stroke interventions

    PubMed Central

    Hoffmann, Tammy C; Walker, Marion F; Langhorne, Peter; Eames, Sally; Thomas, Emma; Glasziou, Paul

    2015-01-01

    Objective To assess, in a sample of systematic reviews of non-pharmacological interventions, the completeness of intervention reporting, identify the most frequently missing elements, and assess review authors’ use of and beliefs about providing intervention information. Design Analysis of a random sample of systematic reviews of non-pharmacological stroke interventions; online survey of review authors. Data sources and study selection The Cochrane Library and PubMed were searched for potentially eligible systematic reviews and a random sample of these assessed for eligibility until 60 (30 Cochrane, 30 non-Cochrane) eligible reviews were identified. Data collection In each review, the completeness of the intervention description in each eligible trial (n=568) was assessed by 2 independent raters using the Template for Intervention Description and Replication (TIDieR) checklist. All review authors (n=46) were invited to complete a survey. Results Most reviews were missing intervention information for the majority of items. The most incompletely described items were: modifications, fidelity, materials, procedure and tailoring (missing from all interventions in 97%, 90%, 88%, 83% and 83% of reviews, respectively). Items that scored better, but were still incomplete for the majority of reviews, were: ‘when and how much’ (in 31% of reviews, adequate for all trials; in 57% of reviews, adequate for some trials); intervention mode (in 22% of reviews, adequate for all trials; in 38%, adequate for some trials); and location (in 19% of reviews, adequate for all trials). Of the 33 (71%) authors who responded, 58% reported having further intervention information but not including it, and 70% tried to obtain information. Conclusions Most focus on intervention reporting has been directed at trials. Poor intervention reporting in stroke systematic reviews is prevalent, compounded by poor trial reporting. Without adequate intervention descriptions, the conduct, usability and

  17. Changes in brain volume and cognition in a randomized trial of exercise and social interaction in a community-based sample of non-demented Chinese elders.

    PubMed

    Mortimer, James A; Ding, Ding; Borenstein, Amy R; DeCarli, Charles; Guo, Qihao; Wu, Yougui; Zhao, Qianhua; Chu, Shugang

    2012-01-01

    Physical exercise has been shown to increase brain volume and improve cognition in randomized trials of non-demented elderly. Although greater social engagement was found to reduce dementia risk in observational studies, randomized trials of social interventions have not been reported. A representative sample of 120 elderly from Shanghai, China was randomized to four groups (Tai Chi, Walking, Social Interaction, No Intervention) for 40 weeks. Two MRIs were obtained, one before the intervention period, the other after. A neuropsychological battery was administered at baseline, 20 weeks, and 40 weeks. Changes in brain volume in each intervention group were compared with the No Intervention group using t-tests. Time-intervention group interactions for neuropsychological measures were evaluated with repeated-measures mixed models. Compared to the No Intervention group, significant increases in brain volume were seen in the Tai Chi and Social Interaction groups (p < 0.05). Improvements also were observed in several neuropsychological measures in the Tai Chi group, including the Mattis Dementia Rating Scale score (p = 0.004), the Trailmaking Test A (p = 0.002) and B (p = 0.0002), the Auditory Verbal Learning Test (p = 0.009), and verbal fluency for animals (p = 0.01). The Social Interaction group showed improvement on some, but fewer, neuropsychological indices. No differences were observed between the Walking and No Intervention groups. The findings differ from previous clinical trials in showing increases in brain volume and improvements in cognition with a largely non-aerobic exercise (Tai Chi). In addition, intellectual stimulation through social interaction was associated with increases in brain volume as well as with some cognitive improvements.

  18. Cytogenetic abnormalities in multiple myeloma: poor prognosis linked to concomitant detection in random and focal lesion bone marrow samples and associated with high-risk gene expression profile.

    PubMed

    Zhou, Yiming; Nair, Bijay; Shaughnessy, John D; Cartron, Marie-Astrid; Haessler, Jeff; Anaissie, Elias; van Rhee, Frits; Crowley, John; Barlogie, Bart

    2009-06-01

    The clinical significance of cytogenetic abnormalities (CA) present in randomly sampled (RS) or focal lesion (FL) bone marrow sites was examined in 419 untreated myeloma patients. Among 290 patients with gene expression profiling (GEP) data generated from RS sites, GEP-defined high-risk was present in 52% of the RS+/FL+ group but in only 9% of the remainder (P < 0.001). The RS+/FL+ constellation (18%) was an independent predictor of poor survival, also after adjusting for GEP-derived risk and TP53 status (Hazard ratio = 2.42, P = 0.004). The prevalence of high-risk myeloma in the RS+/FL+ group may reflect a dissemination-prone condition not shared by the other three groups. PMID:19344415

  19. Determinants of serum zinc in a random population sample of four Belgian towns with different degrees of environmental exposure to cadmium

    PubMed Central

    Thijs, Lutgarde; Staessen, Jan; Amery, Antoon; Bruaux, Pierre; Buchet, Jean-Pierre; Claeys, Françoise; De Plaen, Pierre; Ducoffre, Geneviève; Lauwerys, Robert; Lijnen, Paul; Nick, Laurence; Remy, Annie Saint; Roels, Harry; Rondia, Désiré; Sartor, Francis

    1992-01-01

    This report investigated the distribution of serum zinc and the factors determining serum zinc concentration in a large random population sample. The 1977 participants (959 men and 1018 women), 20–80 years old, constituted a stratified random sample of the population of four Belgian districts, representing two areas with low and two with high environmental exposure to cadmium. For each exposure level, a rural and an urban area were selected. The serum concentration of zinc, frequently used as an index for zinc status in human subjects, was higher in men (13.1 μmole/L, range 6.5–23.0 μmole/L) than in women (12.6 μmole/L, range 6.3–23.2 μmole/L). In men, 20% of the variance of serum zinc was explained by age (linear and squared term, R = 0.29), diurnal variation (r = 0.29), and total cholesterol (r = 0.16). After adjustment for these covariates, a negative relationship was observed between serum zinc and both blood (r = −0.10) and urinary cadmium (r = −0.14). In women, 11% of the variance could be explained by age (linear and squared term, R = 0.15), diurnal variation in serum zinc (r = 0.27), creatinine clearance (r = −0.11), log γ-glutamyltranspeptidase (r = 0.08), cholesterol (r = 0.07), contraceptive pill intake (r = −0.07), and log serum ferritin (r = 0.06). Before and after adjustment for significant covariates, serum zinc was, on average, lowest in the two districts where the body burden of cadmium, as assessed by urinary cadmium excretion, was highest. These results were not altered when subjects exposed to heavy metals at work were excluded from analysis. PMID:1486857

  20. Systematic screening with information and home sampling for genital Chlamydia trachomatis infections in young men and women in Norway: a randomized controlled trial

    PubMed Central

    2013-01-01

    Background As most genital Chlamydia trachomatis infections are asymptomatic, many patients do not seek health care for testing. Infections remain undiagnosed and untreated. We studied whether screening with information and home sampling resulted in more young people getting tested, diagnosed and treated for chlamydia in the three months following the intervention compared to the current strategy of testing in the health care system. Method We conducted a population based randomized controlled trial among all persons aged 18–25 years in one Norwegian county (41 519 persons). 10 000 persons (intervention) received an invitation by mail with chlamydia information and a mail-back urine sampling kit. 31 519 persons received no intervention and continued with usual care (control). All samples from both groups were analysed in the same laboratory. Information on treatment was obtained from the Norwegian Prescription Database (NorPD). We estimated risk ratios and risk differences of being tested, diagnosed and treated in the intervention group compared to the control group. Results In the intervention group 16.5% got tested and in the control group 3.4%, risk ratio 4.9 (95% CI 4.5-5.2). The intervention led to 2.6 (95% CI 2.0-3.4) times as many individuals being diagnosed and 2.5 (95% CI 1.9-3.4) times as many individuals receiving treatment for chlamydia compared to no intervention in the three months following the intervention. Conclusion In Norway, systematic screening with information and home sampling results in more young people being tested, diagnosed and treated for chlamydia in the three months following the intervention than the current strategy of testing in the health care system. However, the study has not established that the intervention will reduce the chlamydia prevalence or the risk of complications from chlamydia. Trial registration: ClinicalTrials.gov ID NCT00283127. PMID:23343391
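
    The headline risk ratio can be reproduced from the counts implied by the abstract (16.5% of the 10 000 invited vs. 3.4% of the 31 519 controls; counts back-calculated from the rounded percentages), using the standard log-scale Wald interval:

```python
# Risk ratio of being tested, intervention vs. control, with a 95%
# Wald confidence interval computed on the log scale.
import math

a, n1 = 1650, 10000    # tested / invited (intervention)
b, n0 = 1072, 31519    # tested / total (control)

rr = (a / n1) / (b / n0)
se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)
lo, hi = (math.exp(math.log(rr) + z * se_log) for z in (-1.96, 1.96))
print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")  # ~4.9 (4.5-5.2), as reported
```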

  1. Random Sampling of Squamate Reptiles in Spanish Natural Reserves Reveals the Presence of Novel Adenoviruses in Lacertids (Family Lacertidae) and Worm Lizards (Amphisbaenia).

    PubMed

    Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J

    2016-01-01

    Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses. PMID:27399970

  4. Characterizing stand-level forest canopy cover and height using Landsat time series, samples of airborne LiDAR, and the Random Forest algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.

    2015-03-01

    Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel, and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (mature, young, and combined) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest stratum in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
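
    A minimal sketch of the modeling step: Random Forest regression of LiDAR-sampled canopy height on Landsat-derived predictors, validated on held-out pixels. All arrays, the predictor count, and the synthetic response below are stand-ins, not the study's data.

```python
# Random Forest regression of canopy height on spectral + disturbance
# predictors, with a simple train/test split for validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(7)
n_pixels = 5000
X = rng.normal(size=(n_pixels, 7))   # e.g. 6 Landsat bands/indices + TSD
height = 20 + 3 * X[:, 0] - 2 * X[:, 6] + rng.normal(0, 2, n_pixels)  # LiDAR height, m

X_tr, X_te, y_tr, y_te = train_test_split(X, height, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} m")
```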

  5. Application of bimodal distribution to the detection of changes in uranium concentration in drinking water collected by random daytime sampling method from a large water supply zone.

    PubMed

    Garboś, Sławomir; Święcicka, Dorota

    2015-11-01

    The random daytime (RDT) sampling method was used for the first time in the assessment of average weekly exposure to uranium through drinking water in a large water supply zone. The data set of uranium concentrations determined in 106 RDT samples, collected in three runs from the water supply zone in Wroclaw (Poland), cannot be adequately described by a single normal or log-normal distribution. Therefore, a numerical method designed for the detection and calculation of bimodal distributions was applied. The two extracted distributions, containing data from the summer season of 2011 and the winter season of 2012 (nI = 72) and from the summer season of 2013 (nII = 34), allowed the mean U concentrations in drinking water to be estimated: 0.947 μg/L and 1.23 μg/L, respectively. As the removal efficiency of uranium during the applied treatment process is negligible, the increase in uranium concentration can be explained by a higher U concentration in the surface-infiltration water used for the production of drinking water. During the summer season of 2013, heavy rains were observed in the Lower Silesia region, causing floods over the territory of the entire region. Fluctuations in uranium concentrations in surface-infiltration water can be attributed to releases of uranium from specific sources - migration from phosphate fertilizers and leaching from mineral deposits. Thus, exposure to uranium through drinking water may increase during extreme rainfall events. The average chronic weekly intakes of uranium through drinking water, estimated on the basis of the central values of the extracted normal distributions, accounted for 3.2% and 4.1% of the tolerable weekly intake.
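
    One plausible implementation of the bimodality detection described is a two-component Gaussian mixture compared against a one-component fit by BIC; whether this matches the authors' exact numerical method is an assumption. The concentrations below are synthetic, centred on the reported means.

```python
# Fit 1- and 2-component Gaussian mixtures to uranium concentrations
# and compare by BIC; the 2-component means recover the sub-populations.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
u = np.concatenate([rng.normal(0.947, 0.10, 72),    # 2011/2012 samples
                    rng.normal(1.23, 0.10, 34)])    # 2013 samples
u = u.reshape(-1, 1)

for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(u)
    print(k, "component(s): BIC =", round(gm.bic(u), 1))

gm2 = GaussianMixture(n_components=2, random_state=0).fit(u)
print(gm2.means_.ravel())   # estimated means of the two sub-populations
```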

  6. Association between Spouse/Child Separation and Migration-Related Stress among a Random Sample of Rural-to-Urban Migrants in Wuhan, China

    PubMed Central

    Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang

    2016-01-01

    Background Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Methods Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18–45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. Results 16.46% of couples were separated from their spouses (spouse separation only), and 25.81% of parents were separated from their children (child separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Conclusion Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress. PMID:27124768

  7. Body fat standards and individual physical readiness in a randomized Army sample: screening weights, methods of fat assessment, and linkage to physical fitness.

    PubMed

    Friedl, Karl E; Leu, John R

    2002-12-01

    Body fat standards have been used by the military services since the early 1980s to prevent obesity and motivate good fitness habits. The Army Weight Control Program has continued to undergo evaluation and incorporate improvements based on emerging scientific findings. Recently drafted revisions of Department of Defense-wide procedures address issues of consistency and validity raised by external oversight groups. This study evaluated the impact of three proposed refinements of the Army Weight Control Program. Anthropometric measurements and fitness test performance were obtained in a randomized sample of 1,038 male and 347 nonpregnant female soldiers at three Army posts. Of this sample, 11% of men and 17% of women were overweight and overfat; 6.3% and 9.8%, respectively, were currently on the Army Weight Control Program. Screening weight tables that ensure women are not inappropriately striving to meet weights more stringent than "healthy" weight (i.e., body mass index < 25 kg/m2) still correctly identified all women for evaluation for the age-specific body fat standards. Body fat estimation using more valid DoD body fat equations that include an abdominal circumference for women reduced the number of female soldiers currently classified as exceeding fat standards, coincidentally resulting in a comparable prevalence of male and female soldiers over the fat standards (12%). A body fat allowance for young soldiers who scored very well on the physical fitness test could have benefited one-fourth of the soldiers exceeding fat standards and acknowledges biological variability in body fat thresholds. Whereas this linkage may motivate fitness habits, it complicates enforcement of reasonably achievable body fat standards. The proposed changes in fat screening and measurement methods are appropriate, but the impact on health and physical readiness of the Force cannot be accurately predicted or measured because of the absence of comprehensive baseline data and tracking

  8. A profile of US-Mexico border mobility among a stratified random sample of Hispanics living in the El Paso-Juarez area.

    PubMed

    Lapeyrouse, L M; Morera, O; Heyman, J M C; Amaya, M A; Pingitore, N E; Balcazar, H

    2012-04-01

    Examination of border-specific characteristics such as trans-border mobility and trans-border health service use illuminates the heterogeneity of border Hispanics and may provide greater insight toward understanding differential health behaviors and status among these populations. In this study, we create a descriptive profile of the concept of trans-border mobility by exploring the relationship between mobility status and a series of demographic, economic and socio-cultural characteristics among mobile and non-mobile Hispanics living in the El Paso-Juarez border region. Using a two-stage stratified random sampling design, bilingual interviewers collected survey data from border residents (n = 1,002). Findings show that significant economic, cultural, and behavioral differences exist between mobile and non-mobile respondents. While non-mobile respondents were found to have higher socioeconomic status than their mobile counterparts, mobility across the border was found to offer less acculturated and poorer Hispanics access to alternative sources of health care and other services.

  9. Random Selection for Drug Screening

    SciTech Connect

    Center for Human Reliability Studies

    2007-05-01

    Simple random sampling is generally the starting point for a random sampling process. This sampling technique ensures that each individual within a group (population) has an equal chance of being selected. There are a variety of ways to implement random sampling in a practical situation.
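    A minimal illustration of simple random sampling as described above, using Python's standard library; the population names are placeholders, not part of the record.

        # Sketch: each member of the population has an equal chance of
        # selection; sampling is without replacement.
        import random

        population = [f"employee_{i:03d}" for i in range(1, 201)]
        selected = random.sample(population, k=10)
        print(selected)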

  10. Validity of Footprint Analysis to Determine Flatfoot Using Clinical Diagnosis as the Gold Standard in a Random Sample Aged 40 Years and Older

    PubMed Central

    Pita-Fernández, Salvador; González-Martín, Cristina; Seoane-Pillado, Teresa; López-Calviño, Beatriz; Pértega-Díaz, Sonia; Gil-Guillén, Vicente

    2015-01-01

    Background Research is needed to determine the prevalence and variables associated with the diagnosis of flatfoot, and to evaluate the validity of three footprint analysis methods for diagnosing flatfoot, using clinical diagnosis as a benchmark. Methods We conducted a cross-sectional study of a population-based random sample ≥40 years old (n = 1002) in A Coruña, Spain. Anthropometric variables, Charlson’s comorbidity score, and podiatric examination (including measurement of Clarke’s angle, the Chippaux-Smirak index, and the Staheli index) were used for comparison with a clinical diagnosis method using a podoscope. Multivariate regression was performed. Informed patient consent and ethical review approval were obtained. Results Prevalence of flatfoot in the left and right footprint, measured using the podoscope, was 19.0% and 18.9%, respectively. Variables independently associated with flatfoot diagnosis were age (OR 1.07), female gender (OR 3.55) and BMI (OR 1.39). The area under the receiver operating characteristic curve (AUC) showed that Clarke’s angle is highly accurate in predicting flatfoot (AUC 0.94), followed by the Chippaux-Smirak (AUC 0.83) and Staheli (AUC 0.80) indices. Sensitivity values were 89.8% for Clarke’s angle, 94.2% for the Chippaux-Smirak index, and 81.8% for the Staheli index, with respective positive likelihood ratios of 9.7, 2.1, and 2.0. Conclusions Age, gender, and BMI were associated with a flatfoot diagnosis. The indices studied are suitable for diagnosing flatfoot in adults, especially Clarke’s angle, which is highly accurate for flatfoot diagnosis in this population. PMID:25382154
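    The validity measures reported above (AUC, sensitivity, positive likelihood ratio) are straightforward to compute. The sketch below uses synthetic data in place of the measured footprint indices and clinical diagnoses, and the cutoff is illustrative.

        # Sketch: evaluating a continuous diagnostic index against a gold
        # standard via ROC AUC, sensitivity, specificity and LR+.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        flatfoot = rng.integers(0, 2, 300)               # gold-standard diagnosis (0/1)
        index = flatfoot * 1.5 + rng.normal(0, 1, 300)   # stand-in for a footprint index
        print("AUC =", round(roc_auc_score(flatfoot, index), 3))

        cut = 0.75                                       # illustrative cutoff
        test_pos = index >= cut
        sens = (test_pos & (flatfoot == 1)).sum() / (flatfoot == 1).sum()
        spec = (~test_pos & (flatfoot == 0)).sum() / (flatfoot == 0).sum()
        print("sensitivity =", round(sens, 3), "LR+ =", round(sens / (1 - spec), 3))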

  11. Effectiveness of Housing First with Intensive Case Management in an Ethnically Diverse Sample of Homeless Adults with Mental Illness: A Randomized Controlled Trial

    PubMed Central

    Stergiopoulos, Vicky; Gozdzik, Agnes; Misir, Vachan; Skosireva, Anna; Connelly, Jo; Sarang, Aseefa; Whisler, Adam; Hwang, Stephen W.; O’Campo, Patricia; McKenzie, Kwame

    2015-01-01

    Housing First (HF) is being widely disseminated in efforts to end homelessness among homeless adults with psychiatric disabilities. This study evaluates the effectiveness of HF with Intensive Case Management (ICM) among ethnically diverse homeless adults in an urban setting. 378 participants were randomized to HF with ICM or treatment-as-usual (TAU) in Toronto (Canada), and followed for 24 months. Measures of effectiveness included housing stability, physical (EQ5D-VAS) and mental (CSI, GAIN-SS) health, social functioning (MCAS), quality of life (QoLI20), and health service use. Two-thirds of the sample (63%) was from racialized groups and half (50%) were born outside Canada. Over the 24 months of follow-up, HF participants spent a significantly greater percentage of time in stable residences compared to TAU participants (75.1% 95% CI 70.5 to 79.7 vs. 39.3% 95% CI 34.3 to 44.2, respectively). Similarly, community functioning (MCAS) improved significantly from baseline in HF compared to TAU participants (change in mean difference = +1.67 95% CI 0.04 to 3.30). There was a significant reduction in the number of days spent experiencing alcohol problems among the HF compared to TAU participants at 24 months (ratio of rate ratios = 0.47 95% CI 0.22 to 0.99) relative to baseline, a reduction of 53%. Although the number of emergency department visits and days in hospital over 24 months did not differ significantly between HF and TAU participants, fewer HF participants compared to TAU participants had 1 or more hospitalizations during this period (70.4% vs. 81.1%, respectively; P=0.044). Compared to non-racialized HF participants, racialized HF participants saw an increase in the amount of money spent on alcohol (change in mean difference = $112.90 95% CI 5.84 to 219.96) and a reduction in physical community integration (ratio of rate ratios = 0.67 95% CI 0.47 to 0.96) from baseline to 24 months. Secondary analyses found a significant reduction in the number of days

  12. Children’s Quality of Life Based on the KIDSCREEN-27: Child Self-Report, Parent Ratings and Child-Parent Agreement in a Swedish Random Population Sample

    PubMed Central

    Berman, Anne H.; Liu, Bojing; Ullman, Sara; Jadbäck, Isabel; Engström, Karin

    2016-01-01

    Background The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11–16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. Methods A random population sample of 600 children aged 11–16 (100 per age group), each with one parent (N = 1200), was approached for response to the self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. From the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, comprising 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower wellbeing using the PABAK, and total continuous scores were evaluated using Bland-Altman plots. Results Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences

  13. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  14. Rationale, design, and sample characteristics of a practical randomized trial to assess a weight loss intervention for low-income women: the Weight-Wise II Program.

    PubMed

    Samuel-Hodge, Carmen D; Garcia, Beverly A; Johnston, Larry F; Kraschnewski, Jennifer L; Gustafson, Alison A; Norwood, Arnita F; Glasgow, Russell E; Gold, Alison D; Graham, John W; Evenson, Kelly R; Stearns, Sally C; Gizlice, Ziya; Keyserling, Thomas C

    2012-01-01

    Obesity is common among low-income mid-life women, yet most published weight loss studies have not focused on this population and have been highly resourced efficacy trials. Thus, practical type 2 translational studies are needed to evaluate weight loss interventions for low-income women. In this paper, we present the rationale, study design, and baseline characteristics of a type 2 translational study that evaluates both the processes and outcomes of a weight loss intervention for low-income women delivered at 6 county health departments in North Carolina. Key features of this study include random selection of study sites, intervention delivery by current staff at study sites, efforts to integrate the intervention with local community resources, a focus on evaluating the processes of translation using the RE-AIM framework, use of an evidence-based weight loss intervention, a detailed description of participant recruitment and representativeness, and a practical randomized trial designed to assess the effectiveness of the intervention. Of 81 health departments invited to participate, 30 (37%) were eligible and willing, and 6 were selected at random to deliver the intervention. Of 432 potential participants screened by phone, 213 (49%) were eligible and of these, 189 (89%) completed baseline measures and were randomized to receive a 5-month weight loss intervention or a delayed intervention. The mean age was 51 years and the mean BMI was 37 kg/m2; 53% were African American, and 43% had no health insurance. The results of this study should be informative to key stakeholders interested in real world weight loss interventions for low-income mid-life women.

  15. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  16. Effect of sample size on multi-parametric prediction of tissue outcome in acute ischemic stroke using a random forest classifier

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Fiehler, Jens

    2015-03-01

    The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require previous training, and it has remained unclear how many subjects are required for this; that question is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and effort required for data acquisition and preparation.
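    The overlap metric used above is easy to state in code. The sketch below defines the Dice coefficient for binary masks and scores a synthetic prediction; the arrays are random stand-ins, not the study's MRI data.

        # Sketch: Dice coefficient between predicted and known binary masks,
        # 2*|A & B| / (|A| + |B|).
        import numpy as np

        def dice(pred, truth):
            pred, truth = pred.astype(bool), truth.astype(bool)
            inter = np.logical_and(pred, truth).sum()
            return 2.0 * inter / (pred.sum() + truth.sum())

        rng = np.random.default_rng(3)
        truth = rng.integers(0, 2, (64, 64))
        pred = truth.copy()
        flip = rng.random((64, 64)) < 0.2        # corrupt 20% of voxels
        pred[flip] = 1 - pred[flip]
        print("Dice =", round(dice(pred, truth), 3))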

  17. Random Selection for Drug Screening

    SciTech Connect

    Center for Human Reliability Studies

    2007-05-01

    Sampling is the process of choosing some members out of a group or population. Probability sampling, or random sampling, is the process of selecting members by chance with a known probability of each individual being chosen.

  18. The association of health-care use and hepatitis C virus infection in a random sample of urban slum community residents in southern India.

    PubMed

    Marx, Melissa A; Murugavel, K G; Sivaram, Sudha; Balakrishnan, P; Steinhoff, Mark; Anand, S; Thomas, David L; Solomon, Suniti; Celentano, David D

    2003-02-01

    To determine whether health-care use was associated with prevalent hepatitis C virus (HCV) infection in Chennai, India, 1,947 adults from 30 slum communities were randomly selected to be interviewed about parenteral and sexual risks for HCV infection and to provide biological specimens for HCV and sexually transmitted infection (STI) testing. Prevalent HCV infection was detected in 2.4% of participants who did not use injection drugs. Controlling for other associated factors, and excluding injection drug users (IDUs), men who used informal health-care providers were five times as likely to be HCV-infected as those who did not use informal providers (adjusted odds ratio, AOR = 5.83; 95% confidence interval [CI]: 1.57, 21.6), a finding not detected in women. More research is needed to determine the extent to which HCV infection is associated with reuse of contaminated injection equipment in health-care settings in developing countries. PMID:12641422

  19. Quantitative culture of endotracheal aspirate and BAL fluid samples in the management of patients with ventilator-associated pneumonia: a randomized clinical trial

    PubMed Central

    Corrêa, Ricardo de Amorim; Luna, Carlos Michel; dos Anjos, José Carlos Fernandez Versiani; Barbosa, Eurípedes Alvarenga; de Rezende, Cláudia Juliana; Rezende, Adriano Pereira; Pereira, Fernando Henrique; Rocha, Manoel Otávio da Costa

    2014-01-01

    OBJECTIVE: To compare 28-day mortality rates and clinical outcomes in ICU patients with ventilator-associated pneumonia according to the diagnostic strategy used. METHODS: This was a prospective randomized clinical trial. Of the 73 patients included in the study, 36 and 37 were randomized to undergo BAL or endotracheal aspiration (EA), respectively. Antibiotic therapy was based on guidelines and was adjusted according to the results of quantitative cultures. RESULTS: The 28-day mortality rate was similar in the BAL and EA groups (25.0% and 37.8%, respectively; p = 0.353). There were no differences between the groups regarding the duration of mechanical ventilation, antibiotic therapy, secondary complications, VAP recurrence, or length of ICU and hospital stay. Initial antibiotic therapy was deemed appropriate in 28 (77.8%) and 30 (83.3%) of the patients in the BAL and EA groups, respectively (p = 0.551). The 28-day mortality rate was not associated with the appropriateness of initial therapy in the BAL and EA groups (appropriate therapy: 35.7% vs. 43.3%; p = 0.553; and inappropriate therapy: 62.5% vs. 50.0%; p = 1.000). Previous use of antibiotics did not affect the culture yield in the EA or BAL group (p = 0.130 and p = 0.484, respectively). CONCLUSIONS: In the context of this study, the management of VAP patients, based on the results of quantitative endotracheal aspirate cultures, led to similar clinical outcomes to those obtained with the results of quantitative BAL fluid cultures. PMID:25610505

  20. Does the "Marriage Benefit" Extend to Partners in Gay and Lesbian Relationships?: Evidence from a Random Sample of Sexually Active Adults

    ERIC Educational Resources Information Center

    Wienke, Chris; Hill, Gretchen J.

    2009-01-01

    Prior research indicates that the married enjoy higher levels of well-being than the unmarried, including unmarried cohabiters. Yet, comparisons of married and unmarried persons routinely exclude partnered gays and lesbians. Using a large probability sample, this study assessed how the well-being of partnered gays and lesbians (282) compares with…

  1. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…

  2. Home-based HPV self-sampling improves participation by never-screened and under-screened women: Results from a large randomized trial (iPap) in Australia.

    PubMed

    Sultana, Farhana; English, Dallas R; Simpson, Julie A; Drennan, Kelly T; Mullins, Robyn; Brotherton, Julia M L; Wrede, C David; Heley, Stella; Saville, Marion; Gertig, Dorota M

    2016-07-15

    We conducted a randomized controlled trial to determine whether HPV self-sampling increases participation in cervical screening by never- and under-screened (not screened in past 5 years) women when compared with a reminder letter for a Pap test. Never- or under-screened Victorian women aged 30-69 years, not pregnant and with no prior hysterectomy were eligible. Within each stratum (never-screened and under-screened), we randomly allocated 7,140 women to self-sampling and 1,020 to Pap test reminders. The self-sampling kit comprised a nylon tipped flocked swab enclosed in a dry plastic tube. The primary outcome was participation, as indicated by returning a swab or undergoing a Pap test; the secondary outcome, for women in the self-sampling arm with a positive HPV test, was undergoing appropriate clinical investigation. The Roche Cobas® 4800 test was used to measure presence of HPV DNA. Participation was higher for the self-sampling arm: 20.3 versus 6.0% for never-screened women (absolute difference 14.4%, 95% CI: 12.6-16.1%, p < 0.001) and 11.5 versus 6.4% for under-screened women (difference 5.1%, 95% CI: 3.4-6.8%, p < 0.001). Of the 1,649 women who returned a swab, 45 (2.7%) were positive for HPV16/18 and 95 (5.8%) were positive for other high-risk HPV types. Within 6 months, 28 (62.2%) women positive for HPV16/18 had colposcopy as recommended and nine (20%) had cytology only. Of women positive for other high-risk HPV types, 78 (82.1%) had a Pap test as recommended. HPV self-sampling improves participation in cervical screening for never- and under-screened women and most women with HPV detected have appropriate clinical investigation.

  3. Parent-Child Associations in Pedometer-Determined Physical Activity and Sedentary Behaviour on Weekdays and Weekends in Random Samples of Families in the Czech Republic

    PubMed Central

    Sigmundová, Dagmar; Sigmund, Erik; Vokáčová, Jana; Kopčáková, Jaroslava

    2014-01-01

    This study investigates whether more physically active parents bring up more physically active children and whether parents’ level of physical activity helps children achieve step count recommendations on weekdays and weekends. The participants (388 parents aged 35–45 and their 485 children aged 9–12) were randomly recruited from 21 Czech government-funded primary schools. The participants recorded pedometer step counts for seven days (≥10 h a day) during April–May and September–October of 2013. Logistic regression (Enter method) was used to examine the achievement of the international recommendations of 11,000 steps/day for girls and 13,000 steps/day for boys. The children of fathers and mothers who met the weekend recommendation of 10,000 steps were 5.48 times (95% confidence interval: 1.65; 18.19; p < 0.01) and 3.60 times (95% confidence interval: 1.21; 10.74; p < 0.05), respectively, more likely to achieve the international weekend recommendation than the children of less active parents. The children of mothers who reached the weekday pedometer-based step count recommendation were 4.94 times (95% confidence interval: 1.45; 16.82; p < 0.05) more likely to fulfil the step count recommendation on weekdays than the children of less active mothers. PMID:25026084

  4. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

    SciTech Connect

    ROMERO, VICENTE J.

    2000-05-04

    In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required when comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over spectrums of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for sample sizes of 500 and 10,000 where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
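    The "classical approximate CI theory" referred to above is, for a sampled probability, the usual normal-approximation (Wald) interval for a binomial proportion. A minimal sketch, with an illustrative toy experiment in place of the report's test problem:

        # Sketch: approximate 95% CI on a probability estimated by simple
        # random sampling, p_hat +/- 1.96*sqrt(p_hat*(1-p_hat)/n).
        import numpy as np

        rng = np.random.default_rng(4)
        n = 500
        hits = (rng.random(n) < 0.1).sum()       # toy experiment with true p = 0.1
        p_hat = hits / n
        half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
        print(f"p_hat = {p_hat:.3f}, 95% CI = ({p_hat - half:.3f}, {p_hat + half:.3f})")

    The report's point is that intervals of this form are not always true 95% intervals, especially for mid-range probabilities, which must be accounted for before using them as a stopping rule.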

  5. Randomized Trial of Mediastinal Lymph Node Sampling Versus Complete Lymphadenectomy During Pulmonary Resection in the Patient with N0 or N1 (Less Than Hilar) Non-Small Cell Carcinoma: Results of the ACOSOG Z0030 Trial

    PubMed Central

    Darling, Gail E.; Allen, Mark S.; Decker, Paul A.; Ballman, Karla; Malthaner, Richard A.; Inculet, Richard; Jones, David R.; McKenna, Robert J.; Landreneau, Rodney J.; Rusch, Valerie W.; Putnam, Joe B.

    2016-01-01

    Objective To determine if mediastinal lymph node dissection (MLND) improves survival compared to mediastinal lymph node sampling (MLNS) in patients undergoing resection for N0 or non-hilar N1, T1 or T2 non-small cell lung cancer (NSCLC). Methods Patients with NSCLC underwent sampling of stations 2R, 4R, 7 and 10R for right-sided tumors, and 5, 6, 7 and 10L for left-sided tumors. If all were negative for malignancy, patients were randomized to no further lymph node sampling (MLNS) or complete MLND. Results Of 1,111 patients randomized, 1,023 (498 MLNS, 525 MLND) were eligible and evaluable. There were no significant differences between the two groups in terms of demographics, ECOG status, histology, location of the cancer, type or extent of resection, or pathological stage. Occult N2 disease was found in 21 patients in the MLND group. At a median follow-up of 6.5 years, 435 (43%) patients had died (MLNS: 217 (44%); MLND: 218 (42%)). The median survival was 8.1 years for MLNS and 8.5 years for MLND (p=0.25). The 5-year disease-free survival rate was 69% (95% CI: 64%-74%) in the MLNS group versus 68% (95% CI: 64%-73%) in the MLND group (p=0.92). There was no difference in local (p=0.52), regional (p=0.10), or distant (p=0.76) recurrence between the two groups. Conclusions If systematic, thorough pre-resection sampling of the mediastinal and hilar lymph nodes is negative, MLND does not improve survival in patients with early stage NSCLC, but these results are not generalizable to patients staged radiographically or those with higher stage tumors. PMID:21335122

  6. Anomalous Random Telegraph Signal Extractions from a Very Large Number of n-Metal Oxide Semiconductor Field-Effect Transistors Using Test Element Groups with 0.47 Hz-3.0 MHz Sampling Frequency

    NASA Astrophysics Data System (ADS)

    Abe, Kenichi; Fujisawa, Takafumi; Teramoto, Akinobu; Watabe, Shunichi; Sugawa, Shigetoshi; Ohmi, Tadahiro

    2009-04-01

    Random telegraph signal (RTS) noise in small-gate-area metal oxide semiconductor (MOS) transistors occurs frequently and causes serious problems in the field of flash memories and complementary MOS (CMOS) image sensors. The trap in the gate insulator, which is considered the origin of RTS, varies widely in terms of spatial location and energy level, so that RTS characteristics, including the amplitude and time constants, have large variability by nature, and statistical analysis of RTS becomes indispensable. In this paper, we propose a high-speed RTS measurement system with a newly developed test circuit and discuss the drain current and temperature dependences of RTS amplitude distributions. Moreover, we expand the sampling frequency range to 0.47 Hz-3.0 MHz and the observation length up to about 4 h, and can thereby observe some anomalous RTSs, such as ones with long time constants, ones generated abruptly, and ones disappearing.

  7. A Comparison of the Number of Men Who Have Sex with Men among Rural-To-Urban Migrants with Non-Migrant Rural and Urban Residents in Wuhan, China: A GIS/GPS-Assisted Random Sample Survey Study

    PubMed Central

    Chen, Xinguang; Yu, Bin; Zhou, Dunjin; Zhou, Wang; Gong, Jie; Li, Shiyue; Stanton, Bonita

    2015-01-01

    Background Mobile populations and men who have sex with men (MSM) play an increasing role in the current HIV epidemic in China and across the globe. While considerable research has addressed both of these at-risk populations, more effective HIV control requires accurate data on the number of MSM at the population level, particularly MSM among migrant populations. Methods Survey data from a random sample of male rural-to-urban migrants (aged 18-45, n=572) in Wuhan, China were analyzed and compared with those of randomly selected non-migrant urban (n=566) and rural (n=580) counterparts. GIS/GPS technologies were used for sampling and the survey estimation method was used for data analysis. Results HIV-related risk behaviors among rural-to-urban migrants were similar to those among the two comparison groups. The estimated proportion of MSM among migrants [95% CI] was 5.8% [4.7, 6.8], higher than 2.8% [1.2, 4.5] for rural residents and 1.0% [0.0, 2.4] for urban residents, respectively. Among these migrants, the MSM were more likely than non-MSM to be older in age, married, and to have migrated to more cities. They were also more likely to co-habit with others in rental properties located in new towns and in neighborhoods with fewer old acquaintances and more entertainment establishments. In addition, they were more likely to engage in commercial sex and less likely to consistently use condoms. Conclusion Findings of this study indicate that compared to rural and urban populations, the migrant population in Wuhan consists of a higher proportion of MSM, who also exhibit higher levels of HIV-related risk behaviors. More effective interventions should target this population with a focus on neighborhood factors, social capital and collective efficacy for risk reduction. PMID:26241900

  8. SAMPLING OSCILLOSCOPE

    DOEpatents

    Sugarman, R.M.

    1960-08-30

    An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sample waveform to pass its peak amplitude before the circuit selects it for display.

  9. [Analysis of variance of bacterial counts in milk. 1. Characterization of the total variance and the variance components: random sampling error, methodological error and variation between parallel samples during storage].

    PubMed

    Böhmer, L; Hildebrandt, G

    1998-01-01

    In contrast to the prevailing automated chemical analytical methods, classical microbiological techniques are subject to considerable material- and human-dependent sources of error. These effects must be objectively considered when assessing the reliability and representativeness of a test result. As an example of error analysis, the deviation of bacterial counts and the influence of the time of testing, the bacterial group involved (total bacterial count, coliform count) and the detection method used (pour-plate/spread-plate) were determined by repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of the deviation components, namely the unavoidable random sampling error as well as the methodological error and the variation between parallel samples, was made possible by a test design to which variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the Poisson distribution model and allowed a reliable hygiene evaluation of lots even with few samples. Consequently, regardless of the examination procedure used, the setting up of parallel dilution series can be dispensed with. 2. With increasing storage period, bacterial multiplication, especially of psychrotrophs, leads to unpredictable changes in the bacterial profile and density. With the increase in errors between samples, it is common to find packages which have acceptable microbiological quality but are already spoiled by the labeled expiry date. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are to be expected in milk directly after filling. Since no bacterial multiplication takes place

  10. Factors predicting mortality in a total population sample of the elderly.

    PubMed Central

    Campbell, A J; Diep, C; Reinken, J; McCosh, L

    1985-01-01

    Between 1977 and 1979 an age-stratified sample of people 65 years and over living in the community and in institutions in Gisborne, New Zealand was assessed medically and socially. This sample was followed and reviewed in 1982. At follow-up, 308 subjects were seen, 227 had died, and 24 had left the area. Factors predicting mortality were assessed. Using a log rank test, factors predicting mortality included age, impaired mental function, functional disability, urinary incontinence, prescribed drugs, pulse pressure, erythrocyte sedimentation rate (ESR), systolic pressure, cardiovascular drugs, and falls. However, a number of these factors increased in prevalence with age. Using Cox regression analysis for factors predicting mortality after controlling for age, the following were found to be significant predictors: impaired mental function; functional disability; urinary incontinence; prescribed drugs; ESR; and falls. A proportional hazards general linear model showed that the major predictors of mortality in old age were markers of established disease. PMID:4086965

  11. Random Vibrations

    NASA Technical Reports Server (NTRS)

    Messaro, Semma; Harrison, Phillip

    2010-01-01

    Ares I zonal random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent and reentry. Random vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelop all the applicable environments where each component was located. Applicable Ares I vehicle drawings and design information needed to be assessed to determine the location(s) for each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel and documenting them in a report using Microsoft Word. Conclusion: Random vibration liftoff, ascent, and green run design and test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelop zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor, it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.

  12. Prediction of ethylene content in melt-state random and block polypropylene by near-infrared spectroscopy and chemometrics: influence of a change in sample temperature and its compensation method.

    PubMed

    Watari, Masahiro; Ozaki, Yukihiro

    2005-05-01

    This paper reports on the influence of a change in sample temperature, and a method for its compensation, in the prediction of ethylene (C2) content in melt-state random polypropylene (RPP) and block polypropylene (BPP) by near-infrared (NIR) spectroscopy and chemometrics. NIR spectra of RPP in the melt and solid states were measured by a Fourier transform near-infrared (FT-NIR) on-line monitoring system and an FT-NIR laboratory system. There are some significant differences between the solid- and melt-state RPP spectra. Moreover, we investigated the values of the C2 content predicted from RPP or BPP spectra measured at 190 degrees C and 250 degrees C using the calibration model for the C2 content developed from RPP or BPP spectra measured at 230 degrees C. The errors in the predicted values of the C2 content depend on the pretreatment methods used for each calibration model. It was found that multiplicative signal correction (MSC) is very effective in compensating for the influence of the temperature change of the RPP or BPP samples on the predicted C2 content. Following the results of principal component analysis (PCA) and difference spectrum analysis, we propose a new compensation method for the temperature change that uses the difference spectra between two spectra sets measured at different temperatures. We achieved good results using the difference spectra between the RPP/BPP spectra sets measured at 190 degrees C and 250 degrees C after correction and the calibration model developed with the spectra measured at 230 degrees C. A comparison between the MSC method and the proposed method showed that the prediction errors of the latter are slightly smaller than those of the former.
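    MSC, the pretreatment found effective above, regresses each spectrum on a reference spectrum (commonly the set mean) and removes the fitted offset and scaling. The sketch below is a generic textbook implementation on simulated spectra, not the authors' code or the proposed difference-spectrum method.

        # Sketch of multiplicative signal correction (MSC): fit x ~ a + b*ref
        # for each spectrum x, then return (x - a) / b.
        import numpy as np

        def msc(spectra, reference=None):
            ref = spectra.mean(axis=0) if reference is None else reference
            corrected = np.empty_like(spectra)
            for i, x in enumerate(spectra):
                b, a = np.polyfit(ref, x, 1)   # slope b, intercept a
                corrected[i] = (x - a) / b
            return corrected

        rng = np.random.default_rng(5)
        base = np.sin(np.linspace(0, 3, 200))
        # Simulated temperature-induced offset/scaling differences.
        spectra = np.array([1.2 * base + 0.1, 0.8 * base - 0.05, base + 0.02])
        print(msc(spectra).round(2))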

  13. Fractional randomness

    NASA Astrophysics Data System (ADS)

    Tapiero, Charles S.; Vallois, Pierre

    2016-11-01

    The premise of this paper is that a fractional probability distribution is based on fractional operators and on the fractional (Hurst) index used, which alters the classical setting of random variables. For example, a random variable defined by its density function might not have a fractional density function defined in the conventional sense. Practically, this implies that a distribution's granularity, defined by a fractional kernel, may have properties that differ according to the fractional index used and the fractional calculus applied to define it. The purpose of this paper is to consider an application of fractional calculus to define the fractional density function of a random variable. In addition, we provide and prove a number of results, defining the functional forms of these distributions as well as their existence. In particular, we define fractional probability distributions for increasing and decreasing functions that are right continuous. Examples are used to motivate the usefulness of a statistical approach to fractional calculus and its application to economic and financial problems. Given the breadth and extent of such problems, this paper should be considered an initial attempt to construct statistical fractional models.

  14. GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES

    EPA Science Inventory

    This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...

  15. Is random access memory random?

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.

  16. Response of the Elderly to Disaster: An Age-Stratified Analysis.

    ERIC Educational Resources Information Center

    Bolin, Robert; Klenow, Daniel J.

    1982-01-01

    Analyzed the effect of age on elderly tornado victims' (N=62) responses to stress. Compared to younger victims (N=240), the elderly did not suffer disproportionate material losses, but were more likely to be injured and to have a death in the household. Elderly victims had a lower incidence of emotional and family problems. (Author/JAC)

  17. Using age-stratified incidence data to examine the transmission consequences of pertussis vaccination.

    PubMed

    Blackwood, J C; Cummings, D A T; Iamsirithaworn, S; Rohani, P

    2016-09-01

    Pertussis is a highly infectious respiratory disease that has been on the rise in many countries worldwide over the past several years. The drivers of this increase in pertussis incidence remain hotly debated, with a central and long-standing hypothesis that questions the ability of vaccines to eliminate pertussis transmission rather than simply modulate the severity of disease. In this paper, we present age-structured case notification data from all provinces of Thailand between 1981 and 2014, a period during which vaccine uptake rose substantially, permitting an evaluation of the transmission impacts of vaccination. Our analyses demonstrate decreases in incidence across all ages with increased vaccine uptake - an observation that is at odds with pertussis case notification data in a number of other countries. To explore whether these observations are consistent with a rise in herd immunity and a reduction in bacterial transmission, we analyze an age-structured model that incorporates contrasting hypotheses concerning the immunological and transmission consequences of vaccines. Our results lead us to conclude that the most parsimonious explanation for the combined reduction in incidence and the shift to older age groups in the Thailand data is vaccine-induced herd immunity. PMID:27663785

  18. Age-Stratified Treatment Response Rates in Hospitalized Patients with Clostridium difficile Infection Treated with Metronidazole

    PubMed Central

    Pham, Vy P.; Luce, Andrea M.; Ruppelt, Sara C.; Wei, Wenjing; Aitken, Samuel L.; Musick, William L.; Roux, Ryan K.

    2015-01-01

    Consensus on the optimal treatment of Clostridium difficile infection (CDI) is rapidly changing. Treatment with metronidazole has been associated with increased clinical failure rates; however, the reasons for this are unclear. The purpose of this study was to assess age-related treatment response rates in hospitalized patients with CDI treated with metronidazole. This was a retrospective, multicenter cohort study of hospitalized patients with CDI. Patients were assessed for refractory CDI, defined as persistent diarrhea after 7 days of metronidazole therapy, and stratified by age and clinical characteristics. A total of 242 individuals, aged 60 ± 18 years (Charlson comorbidity index, 3.8 ± 2.4; Horn's index, 1.7 ± 1.0) were included. One hundred twenty-eight patients (53%) had severe CDI. Seventy patients (29%) had refractory CDI, a percentage that increased from 22% in patients aged less than 50 years to 28% in patients aged 50 to 70 years and 37% in patients aged over 70 years (P = 0.05). In multivariate analysis, Horn's index (odds ratio [OR], 2.04; 95% confidence interval [CI], 1.50 to 2.77; P < 0.001), severe CDI (OR, 2.25; 95% CI, 1.15 to 4.41; P = 0.018), and continued use of antibiotics (OR, 2.65; 95% CI, 1.30 to 5.39; P = 0.0072) were identified as significant predictors of refractory CDI. Age was not identified as an independent risk factor for refractory CDI. Therefore, hospitalized elderly patients with CDI treated with metronidazole had increased refractory CDI rates, likely due to increased underlying severity of illness, severity of CDI, and concomitant antibiotic use. These results may help identify patients who may benefit from alternative C. difficile treatments other than metronidazole. PMID:26195522

  19. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.

  20. Deterministic multidimensional nonuniform gap sampling

    NASA Astrophysics Data System (ADS)

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
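    The random Poisson-gap scheme whose average behavior the deterministic method above reproduces can be sketched compactly: gaps between sampled grid points are Poisson deviates with a sinusoidal weight (smaller gaps near the start), and the rate is adjusted until the target point count is hit exactly. The sketch below follows that general recipe with illustrative parameters; it is not the authors' deterministic algorithm.

        # Sketch: random Poisson-gap sampling on a 1-D Nyquist grid.
        import numpy as np

        def poisson_gap(n_grid, n_samples, seed=0):
            rng = np.random.default_rng(seed)
            adj = n_grid / n_samples - 1.0        # initial mean gap size
            while True:
                idx, k = [], 0
                while k < n_grid:
                    idx.append(k)
                    w = np.sin((k + 0.5) * np.pi / (2 * n_grid))  # small gaps early
                    k += 1 + rng.poisson(adj * w)
                if len(idx) == n_samples:
                    return np.array(idx)
                adj *= len(idx) / n_samples       # too many points -> larger gaps

        print(poisson_gap(64, 16))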

  1. Cluster randomization: a trap for the unwary.

    PubMed Central

    Underwood, M; Barnett, A; Hajioff, S

    1998-01-01

    Controlled trials that randomize by practice can provide robust evidence to inform patient care. However, compared with randomizing by each individual patient, this approach may have substantial implications for sample size calculations and the interpretation of results. An increased awareness of these effects will improve the quality of research based on randomization by practice. PMID:9624757
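    The sample-size implication mentioned above is commonly quantified with the design effect DEFF = 1 + (m - 1) x ICC, where m is the average cluster (practice) size and ICC is the intracluster correlation coefficient. The numbers below are illustrative assumptions, not taken from the article.

        # Sketch: inflating an individually randomized sample size for
        # randomization by practice via the design effect.
        import math

        n_individual = 300   # patients needed under individual randomization
        m = 20               # average patients recruited per practice (assumed)
        icc = 0.05           # intracluster correlation coefficient (assumed)

        deff = 1 + (m - 1) * icc
        n_cluster = math.ceil(n_individual * deff)
        print(f"DEFF = {deff:.2f}; need ~{n_cluster} patients "
              f"({math.ceil(n_cluster / m)} practices)")

    Even a modest ICC of 0.05 nearly doubles the required sample size here, which is the trap the authors warn about.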

  2. Electron microscopic stereological study of collagen fibrils in bovine articular cartilage: volume and surface densities are best obtained indirectly (from length densities and diameters) using isotropic uniform random sampling

    PubMed Central

    LÅNGSJÖ, TEEMU K.; HYTTINEN, MIKA; PELTTARI, ALPO; KIRALY, KARI; AROKOSKI, JARI; HELMINEN, HEIKKI J.

    1999-01-01

    Results obtained by the indirect zonal isotropic uniform random (IUR) estimation were compared with those obtained by the direct point and interception counting methods on vertical (VS) or IUR sections in a stereological study of bovine articular cartilage collagen fibrils at the ultrastructural level. In addition to comparisons between the direct and indirect estimations (direct IUR vs indirect IUR) and between different sampling methods (VS vs IUR), a simultaneous comparison of the two issues was made (direct VS vs indirect IUR). Using the direct VS method, articular cartilage superficial zone collagen volume fraction (Vv 41%) was 67% and fibril surface density (Sv 0.030 nm2/nm3) 15% higher (P<0.05) than the values obtained by the indirect IUR method (Vv 25% and Sv 0.026 nm2/nm3). The same was observed when the direct IUR method was used: collagen volume fraction (Vv 40%) was 63% and fibril surface density (Sv 0.032 nm2/nm3) 21% higher (P<0.05) than those obtained by the indirect IUR technique. Similarly, in the deep zone of articular cartilage the direct VS and direct IUR methods gave 50 and 55% higher (P<0.05) collagen fibril volume fractions (Vv 43 and 44% vs 29%) and the direct IUR method 25% higher (P<0.05) fibril surface density values (Sv 0.025 vs 0.020 nm2/nm3) than the indirect IUR estimation. On the basis of theoretical grounds, scrutiny of the calculations, and earlier reports, it is concluded that the direct VS and direct IUR methods systematically overestimated the Vv and Sv of collagen fibrils. This bias was due to the overprojection which derives from the high section thickness in relation to collagen fibril diameter. On the other hand, factors that during estimation tend to underestimate Vv and Sv, such as profile overlapping and truncation (‘fuzzy’ profiles), seemed to cause less bias. As length density (Lv) and collagen fibril diameter are minimally biased by the high relative section thickness, the indirect IUR method, based on

  3. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  4. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  5. How to Do Random Allocation (Randomization)

    PubMed Central

    Shin, Wonshik

    2014-01-01

    Purpose To explain the concept and procedure of random allocation as used in a randomized controlled study. Methods We explain the general concept of random allocation and demonstrate how to perform the procedure easily and how to report it in a paper. PMID:24605197
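    One common way to implement random allocation in a two-arm trial is permuted-block randomization, which keeps the arms balanced throughout recruitment. The sketch below is a generic illustration, not necessarily the procedure demonstrated in the paper.

        # Sketch: permuted-block random allocation to arms A and B.
        import random

        def block_randomize(n_participants, block_size=4, seed=42):
            random.seed(seed)
            allocation = []
            while len(allocation) < n_participants:
                block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
                random.shuffle(block)             # each block is a random permutation
                allocation.extend(block)
            return allocation[:n_participants]

        print(block_randomize(10))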

  6. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
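
    A minimal simulation of the play-the-winner rule described above may make the design concrete; the success probabilities are invented for the demo.

      # Play-the-winner: keep the current treatment after a success,
      # switch after a failure; start at random.
      import random

      def play_the_winner(n_subjects, p_success=(0.7, 0.4)):
          counts = [0, 0]
          arm = random.randrange(2)        # random start
          for _ in range(n_subjects):
              counts[arm] += 1
              if random.random() >= p_success[arm]:
                  arm = 1 - arm            # switch on failure
          return counts

      print(play_the_winner(1000))  # the better arm accrues more subjects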

  7. Random broadcast on random geometric graphs

    SciTech Connect

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs in two regimes that hold with high probability: (i) the RGG is connected, and (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
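
    The push model is simple enough to simulate directly. The sketch below runs it on any connected graph given as an adjacency dict; generating an actual random geometric graph is left out for brevity.

      # Push broadcast: each round, every informed node informs one
      # uniformly random neighbor. Assumes the graph is connected.
      import random

      def push_broadcast_rounds(adj, start):
          informed = {start}
          rounds = 0
          while len(informed) < len(adj):
              newly = {random.choice(adj[u]) for u in informed if adj[u]}
              informed |= newly
              rounds += 1
          return rounds

      cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
      print(push_broadcast_rounds(cycle, start=0))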

  8. Quantumness, Randomness and Computability

    NASA Astrophysics Data System (ADS)

    Solis, Aldo; Hirsch, Jorge G.

    2015-06-01

    Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputablity and its implications in physics.

  9. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study. PMID:27688438
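
    As a small sketch of the two probability-sampling designs named in the abstract, the code below draws a simple random sample and a proportionally allocated stratified random sample from a toy population (all field names are invented).

      # Simple vs stratified random sampling on a toy population.
      import random

      population = [{"id": i, "stratum": "young" if i % 3 else "old"}
                    for i in range(900)]

      simple = random.sample(population, 90)   # every unit equally likely

      strata = {}
      for unit in population:
          strata.setdefault(unit["stratum"], []).append(unit)
      stratified = []
      for units in strata.values():            # proportional allocation
          k = round(90 * len(units) / len(population))
          stratified.extend(random.sample(units, k))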

  10. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  11. Directed random walk with random restarts: The Sisyphus random walk

    NASA Astrophysics Data System (ADS)

    Montero, Miquel; Villarroel, Javier

    2016-09-01

    In this paper we consider a particular version of the random walk with restarts: random reset events which suddenly bring the system to the starting value. We analyze its relevant statistical properties, like the transition probability, and show how an equilibrium state appears. Formulas for the first-passage time, high-water marks, and other extreme statistics are also derived; we consider counting problems naturally associated with the system. Finally we indicate feasible generalizations useful for interpreting different physical effects.
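
    A few lines suffice to simulate the process described here: a directed walk that advances by one unit per step and is reset to the origin by random restart events (the reset probability below is arbitrary).

      # Directed walk with random restarts ("Sisyphus" walk).
      import random

      def sisyphus_walk(n_steps, q=0.05):
          x, path = 0, []
          for _ in range(n_steps):
              x = 0 if random.random() < q else x + 1
              path.append(x)
          return path

      path = sisyphus_walk(10_000)
      print(max(path))   # a high-water mark of the walk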

  12. Sample Design.

    ERIC Educational Resources Information Center

    Ross, Kenneth N.

    1987-01-01

    This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)

  13. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘sampling method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  14. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘sampling method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study.

  15. Subset selection under random censorship

    SciTech Connect

    Kim, J.S.

    1983-03-01

    Suppose we want to model a situation commonly arising, for example, in industrial life-testing, in which a two-component series system is under study. The system functions if and only if both the Type A component and the Type B component are functioning. The distribution, or an unknown parameter in the distribution, of the Type A component is of interest. Let X₁, X₂, ..., Xₙ be independent and identically distributed random variables denoting the lifelengths of n Type A components with a continuous distribution function F, and let Y₁, Y₂, ..., Yₙ be independent and identically distributed random variables denoting the lifelengths of n Type B components, also with a continuous distribution function H(.). Failure of the Type B component causes the system failure, thereby making it impossible to observe the failure time of the Type A component. The random variables Y₁, Y₂, ..., Yₙ are referred to as time-to-censorship or censoring random variables, and the distribution function H(.) as the censoring distribution. We assume that (X₁, Y₁), (X₂, Y₂), ..., (Xₙ, Yₙ) is an independent and identically distributed sequence of random pairs defined on a common probability space. Our observations consist of the minima Z₁ = min(X₁, Y₁), Z₂ = min(X₂, Y₂), ..., Zₙ = min(Xₙ, Yₙ), which are i.i.d. random variables. It is the objective of this paper to formulate a k-sample selection problem under random censorship.
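
    The observation scheme in this abstract is easy to simulate; with exponential lifetimes (an illustrative choice) the fraction of uncensored observations has a closed form, P(X ≤ Y) = λ_X / (λ_X + λ_Y).

      # Random censorship: we observe only Z_i = min(X_i, Y_i) and the
      # indicator of whether the Type A failure was observed.
      import random

      n = 10_000
      X = [random.expovariate(1.0) for _ in range(n)]   # Type A lifetimes
      Y = [random.expovariate(0.5) for _ in range(n)]   # censoring times
      Z = [min(x, y) for x, y in zip(X, Y)]
      delta = [x <= y for x, y in zip(X, Y)]
      print(sum(delta) / n)   # ~ 1.0 / (1.0 + 0.5) = 2/3 uncensored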

  16. Randomization in robot tasks

    NASA Technical Reports Server (NTRS)

    Erdmann, Michael

    1992-01-01

    This paper investigates the role of randomization in the solution of robot manipulation tasks. One example of randomization is shown by the strategy of shaking a bin holding a part in order to orient the part in a desired stable state with some high probability. Randomization can be useful for mobile robot navigation and as a means of guiding the design process.

  17. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both, continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…

  18. The MCNP5 Random number generator

    SciTech Connect

    Brown, F. B.; Nagaya, Y.

    2002-01-01

    MCNP and other Monte Carlo particle transport codes use random number generators to produce random variates from a uniform distribution on the interval. These random variates are then used in subsequent sampling from probability distributions to simulate the physical behavior of particles during the transport process. This paper describes the new random number generator developed for MCNP Version 5. The new generator will optionally preserve the exact random sequence of previous versions and is entirely conformant to the Fortran-90 standard, hence completely portable. In addition, skip-ahead algorithms have been implemented to efficiently initialize the generator for new histories, a capability that greatly simplifies parallel algorithms. Further, the precision of the generator has been increased, extending the period by a factor of 10⁵. Finally, the new generator has been subjected to 3 different sets of rigorous and extensive statistical tests to verify that it produces a sufficiently random sequence.
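
    For a multiplicative linear congruential generator, the skip-ahead mentioned in the abstract follows from x_{k+n} = aⁿ·x_k mod m, so a new history can be positioned in O(log n) work via modular exponentiation. The constants below are illustrative, not the MCNP5 values.

      # Multiplicative LCG with O(log n) skip-ahead.
      M = 2**61 - 1               # illustrative modulus
      A = 3_137_152_168_893_451   # illustrative multiplier

      def lcg_next(x):
          return (A * x) % M

      def lcg_skip(x, n):
          return (pow(A, n, M) * x) % M   # jump n steps at once

      x = 12345
      for _ in range(1000):
          x = lcg_next(x)
      assert x == lcg_skip(12345, 1000)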

  19. Importance sampling : promises and limitations.

    SciTech Connect

    West, Nicholas J.; Swiler, Laura Painton

    2010-04-01

    Importance sampling is an unbiased sampling method used to sample random variables from different densities than originally defined. These importance sampling densities are constructed to pick 'important' values of input random variables to improve the estimation of a statistical response of interest, such as a mean or probability of failure. Conceptually, importance sampling is very attractive: for example one wants to generate more samples in a failure region when estimating failure probabilities. In practice, however, importance sampling can be challenging to implement efficiently, especially in a general framework that will allow solutions for many classes of problems. We are interested in the promises and limitations of importance sampling as applied to computationally expensive finite element simulations which are treated as 'black-box' codes. In this paper, we present a customized importance sampler that is meant to be used after an initial set of Latin Hypercube samples has been taken, to help refine a failure probability estimate. The importance sampling densities are constructed based on kernel density estimators. We examine importance sampling with respect to two main questions: is importance sampling efficient and accurate for situations where we can only afford small numbers of samples? And does importance sampling require the use of surrogate methods to generate a sufficient number of samples so that the importance sampling process does increase the accuracy of the failure probability estimate? We present various case studies to address these questions.
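
    A textbook example of the idea, unrelated to the authors' kernel-density-based sampler, estimates a small normal tail probability by sampling from a proposal shifted into the failure region and reweighting by the likelihood ratio.

      # Importance sampling for p = P[X > 4], X ~ N(0, 1).
      import math
      import random

      def pdf(x, mu=0.0):   # N(mu, 1) density
          return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

      n, mu = 100_000, 4.0
      total = 0.0
      for _ in range(n):
          x = random.gauss(mu, 1.0)            # proposal N(4, 1)
          if x > 4.0:
              total += pdf(x) / pdf(x, mu)     # likelihood-ratio weight
      print(total / n)   # close to the true value, about 3.17e-5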

  20. Capillary sample

    MedlinePlus

    ... using capillary blood sampling. Disadvantages to capillary blood sampling include: Only a limited amount of blood can be drawn using this method. The procedure has some risks (see below). Capillary ...

  1. Quantum random number generation

    DOE PAGES

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing

    2016-06-28

    Here, quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  2. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.

  3. Quantitation of the niacin metabolites 1-methylnicotinamide and 1-methyl-2-pyridone-5-carboxamide in random spot urine samples, by ion-pairing reverse-phase HPLC with UV detection, and the implications for the use of spot urine samples in the assessment of niacin status.

    PubMed

    Creeke, Paul I; Seal, Andrew J

    2005-03-25

    A simple ion-pairing reverse-phase HPLC method, with UV diode array detection, was developed and validated for quantitation of the urinary niacin metabolites 1-methylnicotinamide and 1-methyl-2-pyridone-5-carboxamide in a single run. Urine samples were purified using a polymer-based mixed mode anion exchange reverse-phase cartridge. Analysis was performed on a reverse-phase C18 column, using a methanol gradient elution system, containing phosphate buffer pH 7.0, 1-heptanesulphonic acid as the ion-pairing agent and trimethylamine as a modifier. The assay was applied to the measurement of the niacin status of two subjects using spot urine samples. The samples were collected over 4 consecutive days and at four time points during 1 day. Status, expressed as the concentration ratios (2-PYR or 1-MN)/creatinine and 2-PYR/1-MN, varied within and between days and was least for fasting samples. This work illustrates the potential of spot urine sampling for niacin status assessment, but highlights the need for further validation prior to its use in field nutritional surveys.

  4. Powder sampling.

    PubMed

    Venables, Helena J; Wells, J I

    2002-01-01

    The factors involved when sampling powder mixes have been reviewed. The various methods (manual, automatic, and sub-sampling) are evaluated and the errors incurred are discussed. Certain rules have been applied to various samplers and their suitability for powder mixtures is described. The spinning riffler is apparently the most suitable, while the use of sample thieves should be avoided due to error and bias.

  5. Sampling Development

    PubMed Central

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of the enterprise. This article discusses how to sample development in order to accurately discern the shape of developmental change. The ideal solution is daunting: to summarize behavior over 24-hour intervals and collect daily samples over the critical periods of change. We discuss the magnitude of errors due to undersampling, and the risks associated with oversampling. When daily sampling is not feasible, we offer suggestions for sampling methods that can provide preliminary reference points and provisional sketches of the general shape of a developmental trajectory. Denser sampling then can be applied strategically during periods of enhanced variability, inflections in the rate of developmental change, or in relation to key events or processes that may affect the course of change. Despite the challenges of dense repeated sampling, researchers must take seriously the problem of sampling on a developmental time scale if we are to know the true shape of developmental change. PMID:22140355

  6. Sampling Development

    ERIC Educational Resources Information Center

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  7. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  8. Fast generation of sparse random kernel graphs

    DOE PAGES

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  9. Elevating sampling

    PubMed Central

    Labuz, Joseph M.; Takayama, Shuichi

    2014-01-01

    Sampling – the process of collecting, preparing, and introducing an appropriate volume element (voxel) into a system – is often underappreciated and pushed behind the scenes in lab-on-a-chip research. What often stands between proof-of-principle demonstrations of potentially exciting technology and its broader dissemination and actual use, however, is the effectiveness of sample collection and preparation. The power of micro- and nanofluidics to improve reactions, sensing, separation, and cell culture cannot be accessed if sampling is not equally efficient and reliable. This perspective will highlight recent successes as well as assess current challenges and opportunities in this area. PMID:24781100

  10. SAMPLING SYSTEM

    DOEpatents

    Hannaford, B.A.; Rosenberg, R.; Segaser, C.L.; Terry, C.L.

    1961-01-17

    An apparatus is given for the batch sampling of radioactive liquids such as slurries from a system by remote control, while providing shielding for protection of operating personnel from the harmful effects of radiation.

  11. Experimental scattershot boson sampling

    PubMed Central

    Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J.; Galvão, Ernesto F.; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio

    2015-01-01

    Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of the quantum computational supremacy. PMID:26601164

  12. Experimental scattershot boson sampling.

    PubMed

    Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J; Galvão, Ernesto F; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio

    2015-04-01

    Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of the quantum computational supremacy.

  13. Random bistochastic matrices

    NASA Astrophysics Data System (ADS)

    Cappellini, Valerio; Sommers, Hans-Jürgen; Bruzda, Wojciech; Życzkowski, Karol

    2009-09-01

    Ensembles of random stochastic and bistochastic matrices are investigated. While all columns of a random stochastic matrix can be chosen independently, the rows and columns of a bistochastic matrix have to be correlated. We evaluate the probability measure induced into the Birkhoff polytope of bistochastic matrices by applying the Sinkhorn algorithm to a given ensemble of random stochastic matrices. For matrices of order N = 2 we derive explicit formulae for the probability distributions induced by random stochastic matrices with columns distributed according to the Dirichlet distribution. For arbitrary N we construct an initial ensemble of stochastic matrices which allows one to generate random bistochastic matrices according to a distribution locally flat at the center of the Birkhoff polytope. The value of the probability density at this point enables us to obtain an estimation of the volume of the Birkhoff polytope, consistent with recent asymptotic results.
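
    The Sinkhorn step used in the abstract alternates row and column normalizations; a minimal numerical version, with Dirichlet-distributed rows as the starting stochastic matrix, might look as follows.

      # Sinkhorn iteration: project a positive matrix toward the
      # Birkhoff polytope of bistochastic matrices.
      import numpy as np

      def sinkhorn(mat, iters=1000):
          b = np.asarray(mat, dtype=float).copy()
          for _ in range(iters):
              b /= b.sum(axis=1, keepdims=True)   # rows sum to 1
              b /= b.sum(axis=0, keepdims=True)   # columns sum to 1
          return b

      rng = np.random.default_rng(0)
      stochastic = rng.dirichlet(np.ones(4), size=4)  # random rows
      b = sinkhorn(stochastic)
      print(b.sum(axis=0), b.sum(axis=1))             # all close to 1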

  14. Generating random density matrices

    NASA Astrophysics Data System (ADS)

    Życzkowski, Karol; Penson, Karol A.; Nechita, Ion; Collins, Benoît

    2011-06-01

    We study various methods to generate ensembles of random density matrices of a fixed size N, obtained by partial trace of pure states on composite systems. Structured ensembles of random pure states, invariant with respect to local unitary transformations, are introduced. To analyze statistical properties of quantum entanglement in bi-partite systems we analyze the distribution of Schmidt coefficients of random pure states. Such a distribution is derived in the case of a superposition of k random maximally entangled states. For another ensemble, obtained by performing selective measurements in a maximally entangled basis on a multi-partite system, we show that this distribution is given by the Fuss-Catalan law and find the average entanglement entropy. A more general class of structured ensembles is proposed which, containing also the case of Bures, forms an extension of the standard ensemble of structureless random pure states, described asymptotically, as N → ∞, by the Marchenko-Pastur distribution.
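
    One standard recipe consistent with the abstract generates a random density matrix as the partial trace of a random pure state on an N × K composite system; equivalently, ρ = GG†/Tr(GG†) for a complex Gaussian (Ginibre) matrix G. A minimal sketch:

      # Random density matrix from the induced ensemble
      # (Hilbert-Schmidt when K = N).
      import numpy as np

      def random_density_matrix(N, K, rng):
          G = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
          rho = G @ G.conj().T           # positive semidefinite
          return rho / np.trace(rho)     # unit trace

      rng = np.random.default_rng(1)
      rho = random_density_matrix(4, 4, rng)
      print(np.allclose(rho, rho.conj().T), np.trace(rho).real)  # True 1.0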

  15. Randomness: Quantum versus classical

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2016-05-01

    The recent tremendous development of quantum information theory has led to a number of quantum technological projects, e.g. quantum random generators. This development has stimulated a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is the elaboration of a consistent and commonly accepted interpretation of a quantum state. A closely related problem is the clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review, we discuss the basics of the classical theory of randomness (which is itself very complex and characterized by a diversity of approaches) and compare it with irreducible quantum randomness. We also discuss briefly “digital philosophy”, its role in physics (classical and quantum) and its coupling to the information interpretation of quantum mechanics (QM).

  16. Quantum random number generator

    DOEpatents

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  17. Autonomous Byte Stream Randomizer

    NASA Technical Reports Server (NTRS)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability possessing the ability to generate the same cryptographically secure sequence on different machines and time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
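
    The core mechanism, a seeded, reversible Fisher-Yates shuffle of the byte stream, can be sketched as below. Python's random.Random stands in for the cryptographically secure generator the abstract calls for, so this is an illustration of reversibility, not a secure implementation.

      # Seeded Fisher-Yates shuffle of bytes, plus its inverse.
      import random

      def shuffle_bytes(data: bytes, seed: int) -> bytes:
          buf, rng = bytearray(data), random.Random(seed)
          for i in range(len(buf) - 1, 0, -1):     # in-place shuffle
              j = rng.randrange(i + 1)
              buf[i], buf[j] = buf[j], buf[i]
          return bytes(buf)

      def unshuffle_bytes(data: bytes, seed: int) -> bytes:
          rng = random.Random(seed)
          swaps = [(i, rng.randrange(i + 1))
                   for i in range(len(data) - 1, 0, -1)]
          buf = bytearray(data)
          for i, j in reversed(swaps):             # undo in reverse order
              buf[i], buf[j] = buf[j], buf[i]
          return bytes(buf)

      assert unshuffle_bytes(shuffle_bytes(b"payload", 42), 42) == b"payload"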

  18. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a direct linear transformation (DLT) model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
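
    The RANSAC loop itself is generic; the sketch below fits a 2D line rather than the full DLT space-resection model, but the structure (minimal random sample, consensus count, keep the best model) is the same.

      # Generic RANSAC skeleton, here for line fitting.
      import random

      def ransac_line(points, iters=200, tol=0.1):
          best_model, best_inliers = None, []
          for _ in range(iters):
              (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal set
              if x1 == x2:
                  continue
              a = (y2 - y1) / (x2 - x1)
              b = y1 - a * x1
              inliers = [(x, y) for x, y in points
                         if abs(y - (a * x + b)) < tol]
              if len(inliers) > len(best_inliers):
                  best_model, best_inliers = (a, b), inliers
          return best_model, best_inliers   # refit on inliers in practice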

  19. Understanding infidelity: correlates in a national random sample.

    PubMed

    Atkins, D C; Baucom, D H; Jacobson, N S

    2001-12-01

    Infidelity is a common phenomenon in marriages but is poorly understood. The current study examined variables related to extramarital sex using data from the 1991-1996 General Social Surveys. Predictor variables were entered into a logistic regression with presence of extramarital sex as the dependent variable. Results demonstrated that divorce, education, age when first married, and 2 "opportunity" variables--respondent's income and work status--significantly affected the likelihood of having engaged in infidelity. Also, there were 3 significant interactions related to infidelity: (a) between age and gender, (b) between marital satisfaction and religious behavior, and (c) between past divorce and educational level. Implications of these findings and directions for future research are discussed.

  20. Randomness for Free

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Doyen, Laurent; Gimbert, Hugo; Henzinger, Thomas A.

    We consider two-player zero-sum games on graphs. These games can be classified on the basis of the information of the players and on the mode of interaction between them. On the basis of information, the classification is as follows: (a) partial-observation (both players have a partial view of the game); (b) one-sided complete-observation (one player has complete observation); and (c) complete-observation (both players have a complete view of the game). On the basis of mode of interaction, we have the following classification: (a) concurrent (players interact simultaneously); and (b) turn-based (players interact in turn). The two sources of randomness in these games are randomness in the transition function and randomness in strategies. In general, randomized strategies are more powerful than deterministic strategies, and randomness in transitions gives more general classes of games. We present a complete characterization of the classes of games where randomness is not helpful in: (a) the transition function (probabilistic transitions can be simulated by deterministic transitions); and (b) strategies (pure strategies are as powerful as randomized strategies). As a consequence of our characterization, we obtain new undecidability results for these games.

  1. Sampling apparatus

    DOEpatents

    Gordon, Norman R.; King, Lloyd L.; Jackson, Peter O.; Zulich, Alan W.

    1989-01-01

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface.

  2. Sampling apparatus

    DOEpatents

    Gordon, N.R.; King, L.L.; Jackson, P.O.; Zulich, A.W.

    1989-07-18

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface. 15 figs.

  3. A structured discrete model for dengue fever infections and the determination of R0 from age-stratified serological data.

    PubMed

    Mello, Renato Francisco Lopes; Castilho, César

    2014-06-01

    A discrete-time age-structured model for the spread of dengue fever is built. The demographic dynamics are introduced through the Leslie model. The basic reproductive number is introduced, and an approximation for it is built. The final age distributions for the susceptibles, infected and removed are obtained, and we show how they can be used to produce an actual estimate for R0 from stratified serological data. An application is made using data from Recife, Brazil, and explicit estimates for R0 are given.
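
    The paper's Leslie-matrix construction is not reproduced here, but the classical catalytic-model shortcut for the same task is easy to sketch: fit a constant force of infection λ to age-stratified seroprevalence, p(a) = 1 − exp(−λa), then use the rough endemic-equilibrium relation R0 ≈ 1 + Lλ with life expectancy L. All numbers below are invented.

      # Catalytic-model estimate of lambda and a rough R0 from
      # age-stratified seroprevalence (made-up data).
      import math

      ages = [5, 10, 15, 20, 30, 40]
      seroprev = [0.18, 0.33, 0.45, 0.55, 0.70, 0.80]

      # Linearize: -log(1 - p(a)) = lambda * a; regress through origin.
      ys = [-math.log(1 - p) for p in seroprev]
      lam = sum(a * y for a, y in zip(ages, ys)) / sum(a * a for a in ages)
      L = 70.0
      print(round(lam, 4), round(1 + L * lam, 2))   # lambda, rough R0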

  4. Selecting people randomly.

    PubMed

    Broome, John

    1984-10-01

    This article considers what justification can be found for selecting randomly and in what circumstances it applies, including that of selecting patients to be treated by a scarce medical procedure. The author demonstrates that balancing the merits of fairness, common good, equal rights, and equal chance as they apply in various situations frequently leads to the conclusion that random selection may not be the most appropriate mode of selection. Broome acknowledges that, in the end, we may be forced to conclude that the only merit of random selection is the political one of guarding against partiality and oppression.

  5. Equitable random graphs

    NASA Astrophysics Data System (ADS)

    Newman, M. E. J.; Martin, Travis

    2014-11-01

    Random graph models have played a dominant role in the theoretical study of networked systems. The Poisson random graph of Erdős and Rényi, in particular, as well as the so-called configuration model, have served as the starting point for numerous calculations. In this paper we describe another large class of random graph models, which we call equitable random graphs and which are flexible enough to represent networks with diverse degree distributions and many nontrivial types of structure, including community structure, bipartite structure, degree correlations, stratification, and others, yet are exactly solvable for a wide range of properties in the limit of large graph size, including percolation properties, complete spectral density, and the behavior of homogeneous dynamical systems, such as coupled oscillators or epidemic models.

  6. Sampling Strategy

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Three locations to the right of the test dig area are identified for the first samples to be delivered to the Thermal and Evolved Gas Analyzer (TEGA), the Wet Chemistry Lab (WCL), and the Optical Microscope (OM) on NASA's Phoenix Mars Lander. These sampling areas are informally labeled 'Baby Bear', 'Mama Bear', and 'Papa Bear' respectively. This image was taken on the seventh day of the Mars mission, or Sol 7 (June 1, 2008) by the Surface Stereo Imager aboard NASA's Phoenix Mars Lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Randomized Algorithms for Matrices and Data

    NASA Astrophysics Data System (ADS)

    Mahoney, Michael W.

    2012-03-01

    This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
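
    Since the review's central quantity is the statistical leverage score, a short numerical sketch may help: the scores are the squared row norms of any orthonormal basis for the column space, i.e. the diagonal of the hat matrix, and they define a row-sampling distribution.

      # Leverage scores of a tall matrix and leverage-based row sampling.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(1000, 5))        # tall-and-skinny data matrix
      Q, _ = np.linalg.qr(A)                # orthonormal basis for range(A)
      leverage = (Q ** 2).sum(axis=1)       # diagonal of the hat matrix
      probs = leverage / leverage.sum()     # normalize into a distribution
      rows = rng.choice(A.shape[0], size=50, replace=True, p=probs)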

  8. Creating Ensembles of Decision Trees Through Sampling

    SciTech Connect

    Kamath,C; Cantu-Paz, E

    2001-07-26

    Recent work in classification indicates that significant improvements in accuracy can be obtained by growing an ensemble of classifiers and having them vote for the most popular class. This paper focuses on ensembles of decision trees that are created with a randomized procedure based on sampling. Randomization can be introduced by using random samples of the training data (as in bagging or boosting) and running a conventional tree-building algorithm, or by randomizing the induction algorithm itself. The objective of this paper is to describe the first experiences with a novel randomized tree induction method that uses a sub-sample of instances at a node to determine the split. The empirical results show that ensembles generated using this approach yield results that are competitive in accuracy and superior in computational cost to boosting and bagging.
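
    The paper's node-level sub-sampling method is not detailed in this record; as a generic illustration of sampling-based ensembles, the sketch below builds a bagged ensemble of scikit-learn decision trees and takes a majority vote (a bootstrap-per-tree design, not the authors' algorithm).

      # Bagging: one bootstrap sample per tree, majority vote to predict.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def bagged_trees(X, y, n_trees=25, seed=0):
          rng = np.random.default_rng(seed)
          trees = []
          for _ in range(n_trees):
              idx = rng.integers(0, len(X), size=len(X))  # bootstrap
              trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
          return trees

      def vote(trees, X):
          preds = np.stack([t.predict(X) for t in trees])
          return (preds.mean(axis=0) > 0.5).astype(int)   # binary labels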

  9. Sample size calculation: Basic principles

    PubMed Central

    Das, Sabyasachi; Mitra, Koel; Mandal, Mohanchandra

    2016-01-01

    Addressing sample size is a practical issue that has to be solved during the planning and design stage of the study. The aim of any clinical research is to detect the actual difference between two groups (power) and to provide an estimate of the difference with a reasonable accuracy (precision). Hence, researchers should make an a priori estimate of sample size well before conducting the study. Post hoc sample size computation is conventionally discouraged. An adequate sample size minimizes random error or, in other words, lessens the likelihood of findings arising by chance. Too small a sample may fail to answer the research question and can be of questionable validity or provide an imprecise answer, while too large a sample may answer the question but is resource-intensive and may be unethical. More transparency in the calculation of sample size is required so that it can be justified and replicated while reporting. PMID:27729692
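
    For the common case of comparing two means with a two-sided test, the a priori estimate discussed above reduces to a standard formula, n per group = 2(z₁₋α/₂ + z₁₋β)²σ²/Δ²; the sketch below evaluates it for illustrative inputs.

      # Sample size per group for a two-sample comparison of means.
      import math
      from statistics import NormalDist

      def n_per_group(sigma, delta, alpha=0.05, power=0.80):
          z_a = NormalDist().inv_cdf(1 - alpha / 2)
          z_b = NormalDist().inv_cdf(power)
          return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

      print(n_per_group(sigma=10, delta=5))   # 63 per group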

  10. Quantum Random Number Generation Using a Quanta Image Sensor.

    PubMed

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R

    2016-06-29

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed.

  11. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698

  12. Quantum Random Number Generation Using a Quanta Image Sensor.

    PubMed

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698

  13. Random quantum operations

    NASA Astrophysics Data System (ADS)

    Bruzda, Wojciech; Cappellini, Valerio; Sommers, Hans-Jürgen; Życzkowski, Karol

    2009-01-01

    We define a natural ensemble of trace preserving, completely positive quantum maps and present algorithms to generate them at random. Spectral properties of the superoperator Φ associated with a given quantum map are investigated and a quantum analogue of the Frobenius-Perron theorem is proved. We derive a general formula for the density of eigenvalues of Φ and show the connection with the Ginibre ensemble of real non-symmetric random matrices. Numerical investigations of the spectral gap imply that a generic state of the system iterated several times by a fixed generic map converges exponentially to an invariant state.

  14. Random walks on networks

    NASA Astrophysics Data System (ADS)

    Donnelly, Isaac

    Random walks on lattices are a well-used model for diffusion on a continuum. They have been used to model subdiffusive systems, systems with forcing, and reactions, as well as combinations of the three. We extend the traditional random walk framework to networks to obtain novel results. As an example, due to the small graph diameter, the early-time behaviour of subdiffusive dynamics dominates the observed system, which has implications for models of the brain or airline networks. I would like to thank the Australian American Fulbright Association.

  15. Randomness Of Amoeba Movements

    NASA Astrophysics Data System (ADS)

    Hashiguchi, S.; Khadijah, Siti; Kuwajima, T.; Ohki, M.; Tacano, M.; Sikula, J.

    2005-11-01

    Movements of amoebas were automatically traced using the difference between two successive frames of the microscopic movie. It was observed that the movements were almost random in that the directions and the magnitudes of the successive two steps are not correlated, and that the distance from the origin was proportional to the square root of the step number.

  16. Random lasers ensnared

    NASA Astrophysics Data System (ADS)

    Leonetti, Marco; López, Cefe

    2012-06-01

    A random laser is formed by a haphazard assembly of nondescript optical scatterers with optical gain. Multiple light scattering replaces the optical cavity of traditional lasers, and the interplay between gain, scattering and size determines its unique properties. Random lasers studied until recently consisted of irregularly shaped or polydisperse scatterers, with some average scattering strength constant across the gain frequency band. Photonic glasses can sustain scattering resonances that can be placed in the gain window, since they are formed by monodisperse spheres [1]. The unique resonant scattering of this novel material allows the lasing color to be controlled via the diameter of the particles and their refractive index. Thus a random laser with an a priori set lasing peak can be designed [2]. A special pumping scheme that enables selection of the number of activated modes in a random laser permits the preparation of RLs in two distinct regimes by controlling directionality through the shape of the pump [3]. When pumping is essentially unidirectional, few (barely interacting) modes are turned on, which show as sharp, uncorrelated peaks in the spectrum. By increasing the angular span of the pump beams, many resonances intervene, generating a smooth emission spectrum with a high degree of correlation and a shorter lifetime. These are signs of a phase-locking transition, in which phases are clamped together so that modes oscillate synchronously.

  17. Generating "Random" Integers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2011-01-01

    One of the author's undergraduate students recently asked him whether it was possible to generate a random positive integer. After some thought, the author realised that there were plenty of interesting mathematical ideas inherent in her question. So much so in fact, that the author decided to organise a workshop, open both to undergraduates and…

  18. Contouring randomly spaced data

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Morris, W. D.; Hamm, R. W.

    1977-01-01

    Computer program using triangulation contouring technique contours data points too numerous to fit into rectangular grid. Using random access procedures, program can handle up to 56,000 data points and provides up to 20 contour intervals for multiple number of parameters.

  19. Uniform random number generators

    NASA Technical Reports Server (NTRS)

    Farr, W. R.

    1971-01-01

    Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.

  20. On Random Numbers and Design

    ERIC Educational Resources Information Center

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  1. Randomization Does Not Help Much, Comparability Does

    PubMed Central

    Saint-Mont, Uwe

    2015-01-01

    According to R.A. Fisher, randomization “relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed.” Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable. PMID:26193621
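
    The paper's quantitative point is easy to reproduce in miniature: with small samples, randomly splitting units into two groups routinely leaves a visible baseline imbalance, shrinking only like 1/√n. A toy simulation:

      # Mean absolute difference in a baseline covariate between two
      # randomly allocated groups, as a function of sample size.
      import random
      import statistics

      def mean_imbalance(n, trials=5000):
          gaps = []
          for _ in range(trials):
              cov = [random.gauss(0, 1) for _ in range(n)]
              random.shuffle(cov)
              a, b = cov[: n // 2], cov[n // 2:]
              gaps.append(abs(statistics.mean(a) - statistics.mean(b)))
          return statistics.mean(gaps)

      for n in (10, 40, 160):
          print(n, round(mean_imbalance(n), 3))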

  2. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1988-08-01

    Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.

  3. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1987-01-01

    The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the 'bootstrap' in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their 'kriging variance,' provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.

  4. Sampling properties of directed networks.

    PubMed

    Son, S-W; Christensen, C; Bizhani, G; Foster, D V; Grassberger, P; Paczuski, M

    2012-10-01

    For many real-world networks only a small "sampled" version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks.

  5. Relativistic Weierstrass random walks.

    PubMed

    Saa, Alberto; Venegeroles, Roberto

    2010-08-01

    The Weierstrass random walk is a paradigmatic Markov chain giving rise to a Lévy-type superdiffusive behavior. It is well known that special relativity prevents the arbitrarily high velocities necessary to establish a superdiffusive behavior in any process occurring in Minkowski spacetime, implying, in particular, that any relativistic Markov chain describing spacetime phenomena must be essentially Gaussian. Here, we introduce a simple relativistic extension of the Weierstrass random walk and show that there must exist a transition time t{c} delimiting two qualitatively distinct dynamical regimes: the (nonrelativistic) superdiffusive Lévy flights, for t ≪ t{c}, and the usual (relativistic) Gaussian diffusion, for t ≫ t{c}. Implications of this crossover between different diffusion regimes are discussed for some explicit examples. The study of such an explicit and simple Markov chain can shed some light on several results obtained in much more involved contexts. PMID:20866862

  6. Relativistic Weierstrass random walks.

    PubMed

    Saa, Alberto; Venegeroles, Roberto

    2010-08-01

    The Weierstrass random walk is a paradigmatic Markov chain giving rise to a Lévy-type superdiffusive behavior. It is well known that special relativity prevents the arbitrarily high velocities necessary to establish a superdiffusive behavior in any process occurring in Minkowski spacetime, implying, in particular, that any relativistic Markov chain describing spacetime phenomena must be essentially Gaussian. Here, we introduce a simple relativistic extension of the Weierstrass random walk and show that there must exist a transition time t{c} delimiting two qualitatively distinct dynamical regimes: the (nonrelativistic) superdiffusive Lévy flights, for t ≪ t{c}, and the usual (relativistic) Gaussian diffusion, for t ≫ t{c}. Implications of this crossover between different diffusion regimes are discussed for some explicit examples. The study of such an explicit and simple Markov chain can shed some light on several results obtained in much more involved contexts.

  7. Interactions in random copolymers

    NASA Astrophysics Data System (ADS)

    Marinov, Toma; Luettmer-Strathmann, Jutta

    2002-04-01

    The description of thermodynamic properties of copolymers in terms of simple lattice models requires a value for the effective interaction strength between chain segments, in addition to parameters that can be derived from the properties of the corresponding homopolymers. If the monomers are chemically similar, Berthelot's geometric-mean combining rule provides a good first approximation for interactions between unlike segments. In earlier work on blends of polyolefins [1], we found that the small-scale architecture of the chains leads to corrections to the geometric-mean approximation that are important for the prediction of phase diagrams. In this work, we focus on the additional effects due to sequencing of the monomeric units. In order to estimate the effective interaction for random copolymers, the small-scale simulation approach developed in [1] is extended to allow for random sequencing of the monomeric units. The approach is applied here to random copolymers of ethylene and 1-butene. [1] J. Luettmer-Strathmann and J.E.G. Lipson. Phys. Rev. E 59, 2039 (1999) and Macromolecules 32, 1093 (1999).
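
    The geometric-mean rule itself is one line of arithmetic; the sketch below makes it concrete. The function name, the correction factor k_ab, and the numeric well depths are illustrative assumptions, not values from the paper.

      import numpy as np

      def combined_interaction(eps_aa, eps_bb, k_ab=0.0):
          """Berthelot geometric-mean estimate of the A-B interaction
          strength, with an optional correction k_ab for deviations of
          the kind induced by chain architecture or monomer sequencing."""
          return (1.0 - k_ab) * np.sqrt(eps_aa * eps_bb)

      # Hypothetical segment-segment well depths (arbitrary units).
      eps_ee, eps_bb = 1.00, 0.92   # ethylene and 1-butene segments
      print(combined_interaction(eps_ee, eps_bb))             # plain geometric mean
      print(combined_interaction(eps_ee, eps_bb, k_ab=0.02))  # 2% negative deviation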

  8. Sampling properties of directed networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Christensen, C.; Bizhani, G.; Foster, D. V.; Grassberger, P.; Paczuski, M.

    2012-10-01

    For many real-world networks only a small “sampled” version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks.

  9. Random lasing with spatially nonuniform gain

    NASA Astrophysics Data System (ADS)

    Fan, Ting; Lü, Jiantao

    2016-07-01

    Spatial and spectral properties of random lasing with spatially nonuniform gain were investigated in a two-dimensional (2D) disordered medium. The pumping light was described by an individual electric field and coupled into the rate equations through the polarization equation. The spatially nonuniform gain arises from multiple scattering of this pumping light. Numerical simulations of the random system with uniform and nonuniform gain were performed in both the weak and the strong scattering regimes. In the weak scattering sample, all lasing modes correspond to those of the passive system whether or not the nonuniform gain is considered. In the strong scattering regime, however, new lasing modes appear with nonuniform gain as the localization area changes. Our results show that random lasing behavior is described more accurately when the nonuniform gain originating from multiple light scattering is taken into account.

  10. A random number generator for continuous random variables

    NASA Technical Reports Server (NTRS)

    Guerra, V. M.; Tapia, R. A.; Thompson, J. R.

    1972-01-01

    A FORTRAN 4 routine is given which may be used to generate random observations of a continuous real-valued random variable. Normal distribution of F(x), X, E(akimas), and E(linear) is presented in tabular form.
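
    Routines of this kind are usually built on the inverse-transform method: feed uniform(0,1) deviates through the inverse CDF of the target distribution. A minimal Python sketch of that general technique (the exponential example is illustrative, not taken from the record):

      import numpy as np

      def inverse_transform_sample(inv_cdf, size, seed=None):
          """Generate observations of a continuous random variable by
          applying the inverse CDF to uniform(0,1) deviates."""
          rng = np.random.default_rng(seed)
          return inv_cdf(rng.uniform(size=size))

      # Exponential with rate lam: F^{-1}(u) = -ln(1-u)/lam.
      lam = 2.0
      x = inverse_transform_sample(lambda u: -np.log1p(-u) / lam, size=10_000)
      print(x.mean())   # should be close to 1/lam = 0.5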

  11. On the pertinence to Physics of random walks induced by random dynamical systems: a survey

    NASA Astrophysics Data System (ADS)

    Petritis, Dimitri

    2016-08-01

    Let X be an abstract space and A a denumerable (finite or infinite) alphabet. Suppose that (p_a)_{a in A} is a family of functions p_a : X → [0,1] such that for all x in X we have Σ_{a in A} p_a(x) = 1, and that (S_a)_{a in A} is a family of transformations S_a : X → X. The pair ((S_a)_a, (p_a)_a) is termed an iterated function system with place-dependent probabilities. Such systems can be thought of as generalisations of random dynamical systems. As a matter of fact, suppose we start from a given x in X; we then pick randomly, with probability p_a(x), the transformation S_a and evolve to S_a(x). We are interested in the behaviour of the system when the iteration continues indefinitely. Random walks of the above type are omnipresent in both classical and quantum Physics. To give a small sample of occurrences we mention: random walks on the affine group, random walks on Penrose lattices, random walks on partially directed lattices, evolution of density matrices induced by repeated quantum measurements, quantum channels, quantum random walks, etc. In this article, we review some basic properties of such systems and provide a pathfinder to the extensive bibliography (on both the mathematical and physical sides) where the main results have been originally published.
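
    Such systems are straightforward to simulate. A toy Python sketch of an iterated function system with place-dependent probabilities (the two affine maps and the probability functions are made-up examples, not from the survey):

      import numpy as np

      def iterate_ifs(x0, maps, probs, n_steps, seed=None):
          """At state x, apply transformation maps[a] with probability
          probs(x)[a]; the probabilities depend on the current place x."""
          rng = np.random.default_rng(seed)
          x, path = x0, [x0]
          for _ in range(n_steps):
              p = probs(x)                      # place-dependent probabilities
              a = rng.choice(len(maps), p=p)    # pick a letter of the alphabet
              x = maps[a](x)
              path.append(x)
          return np.array(path)

      # Toy example on [0,1]: two contractions, probabilities varying with x.
      maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]
      probs = lambda x: np.array([0.25 + 0.5 * x, 0.75 - 0.5 * x])
      trajectory = iterate_ifs(0.3, maps, probs, n_steps=1000, seed=42)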

  12. THE SAMPLING THEORY OF PIERRE GY COMPARISONS, IMPLEMENTATION, AND APPLICATIONS FOR ENVIRONMENTAL SAMPLING

    EPA Science Inventory

    The sampling theory developed and described by Pierre Gy is compared to design-based classical finite sampling methods for estimation of a ratio of random variables. For samples of materials that can be completely enumerated, the methods are asymptotically equivalent. Gy extends t...

  13. [Variance estimation considering multistage sampling design in multistage complex sample analysis].

    PubMed

    Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng

    2016-03-01

    Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering, or lack of independence between observations, often exists in samples generated by multistage sampling, which are therefore called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, as the sampling fraction of primary sampling units increases, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may therefore lead to invalid variance estimation. This paper summarizes a method of variance estimation that accounts for the multistage sampling design. Its performance is compared with that of the UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, the UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by the method that accounts for the multistage sampling design.
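
    For readers unfamiliar with the UCVE, a minimal Python sketch for an estimated mean under an equal-probability design (real surveys additionally need sampling weights and finite-population corrections, which are omitted here):

      import numpy as np

      def ucve_mean(psu_means):
          """Ultimate cluster variance estimate: treat the n PSU-level
          estimates as if the PSUs were drawn with replacement in a single
          stage, so only between-PSU variability enters the variance."""
          z = np.asarray(psu_means, dtype=float)
          n = len(z)
          return z.mean(), z.var(ddof=1) / n

      # Hypothetical PSU-level estimates (e.g., mean blood pressure per PSU).
      est, var = ucve_mean([120., 95., 150., 110., 130., 88., 142., 105.])
      print(est, var ** 0.5)   # estimate and its approximate standard error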

  14. Attenuation of species abundance distributions by sampling.

    PubMed

    Shimadzu, Hideyasu; Darnell, Ross

    2015-04-01

    Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge in answering scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort needed to investigate large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626

  15. Attenuation of species abundance distributions by sampling

    PubMed Central

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

    Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge in answering scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort needed to investigate large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626

  16. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  17. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) Sample items shall be selected in such a way as to comprise a simple random sample of each relevant...% Expected Error Rate—0% Maximum Tolerable Error Rate—10% (3) Option 3. The auditor may use some other...

  18. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) Sample items shall be selected in such a way as to comprise a simple random sample of each relevant...% Expected Error Rate—0% Maximum Tolerable Error Rate—10% (3) Option 3. The auditor may use some other...

  19. 40 CFR 80.127 - Sample size guidelines.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Sample items shall be selected in such a way as to comprise a simple random sample of each relevant...% Expected Error Rate—0% Maximum Tolerable Error Rate—10% (3) Option 3. The auditor may use some other...

  20. Sampling and Measurement Error in Faculty Activity and Effort Reporting

    ERIC Educational Resources Information Center

    Lee, Edgar; Kutina, Kenneth L.

    1974-01-01

    Evaluates a stratified sampling method developed at the Case Western Reserve University School of Medicine to estimate mean faculty effort devoted to programs from randomly selected faculty samples over three consecutive years. (Author/PG)

  1. Cluster Randomized Controlled Trial

    PubMed Central

    Young, John; Chapman, Katie; Nixon, Jane; Patel, Anita; Holloway, Ivana; Mellish, Kirste; Anwar, Shamaila; Breen, Rachel; Knapp, Martin; Murray, Jenni; Farrin, Amanda

    2015-01-01

    Background and Purpose— We developed a new postdischarge system of care comprising a structured assessment covering longer-term problems experienced by patients with stroke and their carers, linked to evidence-based treatment algorithms and reference guides (the longer-term stroke care system of care) to address the poor longer-term recovery experienced by many patients with stroke. Methods— A pragmatic, multicentre, cluster randomized controlled trial of this system of care. Eligible patients referred to community-based Stroke Care Coordinators were randomized to receive the new system of care or usual practice. The primary outcome was improved patient psychological well-being (General Health Questionnaire-12) at 6 months; secondary outcomes included functional outcomes for patients, carer outcomes, and cost-effectiveness. Follow-up was through self-completed postal questionnaires at 6 and 12 months. Results— Thirty-two stroke services were randomized (29 participated); 800 patients (399 control; 401 intervention) and 208 carers (100 control; 108 intervention) were recruited. In intention to treat analysis, the adjusted difference in patient General Health Questionnaire-12 mean scores at 6 months was −0.6 points (95% confidence interval, −1.8 to 0.7; P=0.394) indicating no evidence of statistically significant difference between the groups. Costs of Stroke Care Coordinator inputs, total health and social care costs, and quality-adjusted life year gains at 6 months, 12 months, and over the year were similar between the groups. Conclusions— This robust trial demonstrated no benefit in clinical or cost-effectiveness outcomes associated with the new system of care compared with usual Stroke Care Coordinator practice. Clinical Trial Registration— URL: http://www.controlled-trials.com. Unique identifier: ISRCTN 67932305. PMID:26152298

  2. Index statistical properties of sparse random graphs

    NASA Astrophysics Data System (ADS)

    Metz, F. L.; Stariolo, Daniel A.

    2015-10-01

    Using the replica method, we develop an analytical approach to compute the characteristic function for the probability P_N(K, λ) that a large N × N adjacency matrix of sparse random graphs has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of P_N(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdös-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.
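
    The quantity studied here is easy to probe numerically for modest N. A brute-force Python check (all parameters illustrative): sample Erdös-Rényi adjacency matrices, count the eigenvalues below λ, and inspect the sample-to-sample fluctuations of that index.

      import numpy as np

      def index_moments(N, c, lam, n_samples=200, seed=0):
          """Mean and variance of the index (number of adjacency eigenvalues
          below lam) over Erdos-Renyi graphs with mean degree c."""
          rng = np.random.default_rng(seed)
          counts = []
          for _ in range(n_samples):
              A = np.triu(rng.random((N, N)) < c / N, 1).astype(float)
              A = A + A.T                        # symmetric adjacency matrix
              counts.append(np.sum(np.linalg.eigvalsh(A) < lam))
          counts = np.array(counts)
          return counts.mean(), counts.var(ddof=1)

      print(index_moments(N=500, c=4.0, lam=0.5))   # variance should grow ~ N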

  3. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  4. Composite Random Fiber Networks

    NASA Astrophysics Data System (ADS)

    Picu, Catalin; Shahsavari, Ali

    2013-03-01

    Systems made from fibers are common in the biological and engineering worlds. In many instances, as for example in skin, where elastin and collagen fibers are present, the fiber network is composite, in the sense that it contains fibers of very different properties. The relationship between microstructural parameters and the elastic moduli of random fiber networks containing a single type of fiber is understood. In this work we address a similar target for the composite networks. We show that linear superposition of the contributions to stiffness of individual sub-networks does not apply and interesting non-linear effects are observed. A physical basis of these effects is proposed.

  5. Can randomization be informative?

    NASA Astrophysics Data System (ADS)

    Pereira, Carlos A. B.; Campos, Thiago F.; Silva, Gustavo M.; Wechsler, Sergio

    2012-10-01

    In this paper the Pair of Siblings Paradox introduced by Pereira [1] is extended by considering more than two children and more than one child observed for gender. We follow the same lines as Wechsler et al. [2], who generalize the three prisoners' dilemma introduced by Gardner [3]. This paper's conjecture is that the Pair of Siblings and the Three Prisoners dilemma are dual paradoxes. Looking at possible likelihoods, the sure (randomized) selection for the former is non-informative (informative), while the opposite holds for the latter. This situation is maintained for the generalizations. Non-informative likelihood here means that prior and posterior are equal.

  6. Creating ensembles of decision trees through sampling

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
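
    As a generic illustration of the idea (this is plain bagging with per-tree sampling, not the patented per-split sampling and module architecture), one can build such an ensemble in a few lines with scikit-learn's decision trees:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def bagged_trees(X, y, n_trees=25, seed=0):
          """Fit each tree to a bootstrap (random) sample of the data."""
          rng = np.random.default_rng(seed)
          trees = []
          for _ in range(n_trees):
              idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
              trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
          return trees

      def predict_majority(trees, X):
          """Combine the ensemble by majority vote (binary 0/1 labels)."""
          votes = np.stack([t.predict(X) for t in trees])
          return np.rint(votes.mean(axis=0)).astype(int)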

  7. Universal microbial diagnostics using random DNA probes

    PubMed Central

    Aghazadeh, Amirali; Lin, Adam Y.; Sheikh, Mona A.; Chen, Allen L.; Atkins, Lisa M.; Johnson, Coreen L.; Petrosino, Joseph F.; Drezek, Rebekah A.; Baraniuk, Richard G.

    2016-01-01

    Early identification of pathogens is essential for limiting development of therapy-resistant pathogens and mitigating infectious disease outbreaks. Most bacterial detection schemes use target-specific probes to differentiate pathogen species, creating time and cost inefficiencies in identifying newly discovered organisms. We present a novel universal microbial diagnostics (UMD) platform to screen for microbial organisms in an infectious sample, using a small number of random DNA probes that are agnostic to the target DNA sequences. Our platform leverages the theory of sparse signal recovery (compressive sensing) to identify the composition of a microbial sample that potentially contains novel or mutant species. We validated the UMD platform in vitro using five random probes to recover 11 pathogenic bacteria. We further demonstrated in silico that UMD can be generalized to screen for common human pathogens in different taxonomy levels. UMD’s unorthodox sensing approach opens the door to more efficient and universal molecular diagnostics. PMID:27704040
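
    The recovery step can be sketched with a standard greedy sparse solver. Below, a toy orthogonal matching pursuit in Python: the columns of Phi play the role of random-probe responses for each candidate organism, and the k-sparse solution indicates which organisms are present. All dimensions and data are fabricated for illustration.

      import numpy as np

      def omp(Phi, y, k):
          """Orthogonal matching pursuit: greedily recover a k-sparse x
          from measurements y = Phi @ x."""
          residual, support = y.copy(), []
          for _ in range(k):
              j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching column
              support.append(j)
              coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
              residual = y - Phi[:, support] @ coef
          x = np.zeros(Phi.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(1)
      Phi = rng.standard_normal((8, 40))        # 8 probes, 40 candidate organisms
      x_true = np.zeros(40); x_true[[3, 17]] = [0.7, 0.3]
      print(np.nonzero(omp(Phi, Phi @ x_true, k=2))[0])   # should recover 3 and 17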

  8. Random numbers from vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Shi, Yicheng; Chng, Brenda; Kurtsiefer, Christian

    2016-07-01

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
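
    The extraction stage can be sketched generically. The following Python toy XORs raw bits into a maximal-length 32-bit LFSR and outputs the feedback bit; the register width and tap positions are illustrative choices, not those of the generator described in the record.

      def lfsr_whiten(bits, taps=(32, 22, 2, 1), state=0xACE1):
          """Whiten a raw (possibly biased) bit stream with a linear
          feedback shift register: each input bit is XORed with the
          tapped register bits and shifted back in."""
          out, reg = [], state
          for b in bits:
              fb = b
              for t in taps:
                  fb ^= (reg >> (t - 1)) & 1        # XOR the tapped positions
              reg = ((reg << 1) | fb) & 0xFFFFFFFF  # shift in the feedback bit
              out.append(fb)
          return out

      raw = [1, 1, 1, 0, 1, 1, 0, 1] * 4            # slightly biased input
      print(lfsr_whiten(raw))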

  9. Random recursive trees and the elephant random walk

    NASA Astrophysics Data System (ADS)

    Kürsten, Rüdiger

    2016-03-01

    One class of random walks with infinite memory, so-called elephant random walks, are simple models describing anomalous diffusion. We present a surprising connection between these models and bond percolation on random recursive trees. We use a coupling between the two models to translate results from elephant random walks to the percolation process. We calculate, besides other quantities, exact expressions for the first and the second moment of the root cluster size and of the number of nodes in child clusters of the first generation. We further introduce another model, the skew elephant random walk, and calculate the first and second moment of this process.

  10. Random recursive trees and the elephant random walk.

    PubMed

    Kürsten, Rüdiger

    2016-03-01

    One class of random walks with infinite memory, so-called elephant random walks, are simple models describing anomalous diffusion. We present a surprising connection between these models and bond percolation on random recursive trees. We use a coupling between the two models to translate results from elephant random walks to the percolation process. We calculate, besides other quantities, exact expressions for the first and the second moment of the root cluster size and of the number of nodes in child clusters of the first generation. We further introduce another model, the skew elephant random walk, and calculate the first and second moment of this process. PMID:27078296
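
    The model is compact enough to simulate directly. A Python sketch following the standard definition of the elephant random walk (at each step the walker recalls a uniformly chosen past step and repeats it with probability p, otherwise reverses it):

      import numpy as np

      def elephant_random_walk(n_steps, p, q=0.5, seed=None):
          """Elephant random walk on the integers; the first step is +1
          with probability q. Superdiffusion sets in for p > 3/4."""
          rng = np.random.default_rng(seed)
          steps = np.empty(n_steps, dtype=int)
          steps[0] = 1 if rng.random() < q else -1
          for t in range(1, n_steps):
              remembered = steps[rng.integers(0, t)]   # recall a random past step
              steps[t] = remembered if rng.random() < p else -remembered
          return np.cumsum(steps)

      x = elephant_random_walk(10_000, p=0.9, seed=7)   # a superdiffusive sample path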

  11. Random-walk enzymes

    NASA Astrophysics Data System (ADS)

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.

  12. Random rough surface photofabrication

    NASA Astrophysics Data System (ADS)

    Brissonneau, Vincent; Escoubas, Ludovic; Flory, François; Berginc, Gérard

    2011-10-01

    Random rough surfaces are of primary interest for their optical properties: reducing reflection at the interface or obtaining a specific scattering diagram, for example. Thus controlling surface statistics during the fabrication process paves the way to original and specific behaviors of reflected optical waves. We detail an experimental method allowing the fabrication of random rough surfaces showing tuned statistical properties. A two-step photoresist exposure process was developed. In order to initiate photoresist polymerization, an energy threshold needs to be reached by light exposure. This energy is brought by uniform exposure equipment comprising UV-LEDs. This pre-exposure is studied by varying parameters such as optical power and exposure time. The second step consists in an exposure based on the Gray method [1]. The speckle pattern of an enlarged scattered laser beam is used to insolate the photoresist. A specific photofabrication bench using an argon ion laser was implemented. Parameters such as exposure time and distances between optical components are discussed. Then, we describe how we modify the speckle-based exposure bench to include a spatial light modulator (SLM). The SLM used is a micromirror matrix known as a Digital Micromirror Device (DMD), which allows spatial modulation by displaying binary images. Thus, the spatial beam shape can be tuned and so the speckle pattern on the photoresist is modified. As the photofabricated surface is correlated to the speckle pattern used to insolate it, the roughness parameters can be adjusted.

  13. Random-walk enzymes.

    PubMed

    Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.

  14. Random-walk enzymes

    PubMed Central

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-01-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C → U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics. PMID:26465508

  15. Random-walk enzymes.

    PubMed

    Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics. PMID:26465508

  16. Random Numbers and Quantum Computers

    ERIC Educational Resources Information Center

    McCartney, Mark; Glass, David

    2002-01-01

    The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
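
    Both classical generators named in the record fit in a few lines. A Python sketch (the LCG constants are the common Numerical Recipes choices and the logistic map is run at the fully chaotic parameter r = 4; neither is necessarily what the article uses):

      def lcg(seed, a=1664525, c=1013904223, m=2**32):
          """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
          x = seed
          while True:
              x = (a * x + c) % m
              yield x / m                       # rescaled to [0, 1)

      def logistic(x0, r=4.0):
          """Logistic map x_{n+1} = r*x_n*(1 - x_n), chaotic at r = 4."""
          x = x0
          while True:
              x = r * x * (1.0 - x)
              yield x

      g, h = lcg(seed=12345), logistic(x0=0.123)
      print([round(next(g), 3) for _ in range(5)])
      print([round(next(h), 3) for _ in range(5)])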

  17. Randomized Response Analysis in Mplus

    ERIC Educational Resources Information Center

    Hox, Joop; Lensvelt-Mulders, Gerty

    2004-01-01

    This article describes a technique to analyze randomized response data using available structural equation modeling (SEM) software. The randomized response technique was developed to obtain estimates that are more valid when studying sensitive topics. The basic feature of all randomized response methods is that the data are deliberately…
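
    The record is truncated, but the classic Warner design it builds on is easy to state in code: each respondent answers the sensitive question with probability p and its negation otherwise, and the true prevalence is recovered from the observed "yes" rate. A Python sketch with simulated data (all numbers illustrative):

      import numpy as np

      def warner_estimate(yes_answers, p):
          """Observed 'yes' rate lam = p*pi + (1-p)*(1-pi), so the
          prevalence estimate is pi_hat = (lam - (1-p)) / (2p - 1)."""
          lam = np.mean(yes_answers)
          return (lam - (1.0 - p)) / (2.0 * p - 1.0)

      rng = np.random.default_rng(3)
      truth = rng.random(5000) < 0.2            # true prevalence 20%
      sensitive = rng.random(5000) < 0.7        # design probability p = 0.7
      yes = np.where(sensitive, truth, ~truth)  # randomized responses
      print(warner_estimate(yes, p=0.7))        # close to 0.2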

  18. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate...

  19. Random cyclic matrices.

    PubMed

    Jain, Sudhir R; Srivastava, Shashi C L

    2008-09-01

    We present a Gaussian ensemble of random cyclic matrices on the real field and study their spectral fluctuations. These cyclic matrices are shown to be pseudosymmetric with respect to generalized parity. We calculate the joint probability distribution function of eigenvalues and the spacing distributions analytically and numerically. For small spacings, the level spacing distribution exhibits either a Gaussian or a linear form. Furthermore, for the general case of two arbitrary complex eigenvalues, leaving out the spacings among real eigenvalues, and, among complex conjugate pairs, we find that the spacing distribution agrees completely with the Wigner distribution for a Poisson process on a plane. The cyclic matrices occur in a wide variety of physical situations, including disordered linear atomic chains and the Ising model in two dimensions. These exact results are also relevant to two-dimensional statistical mechanics and ν-parametrized quantum chromodynamics. PMID:18851127

  20. Structure of random foam.

    SciTech Connect

    Reinelt, Douglas A.; van Swol, Frank B.; Kraynik, Andrew Michael

    2004-06-01

    The Surface Evolver was used to compute the equilibrium microstructure of dry soap foams with random structure and a wide range of cell-size distributions. Topological and geometric properties of foams and individual cells were evaluated. The theory for isotropic Plateau polyhedra describes the dependence of cell geometric properties on their volume and number of faces. The surface area of all cells is about 10% greater than a sphere of equal volume; this leads to a simple but accurate theory for the surface free energy density of foam. A novel parameter based on the surface-volume mean bubble radius R32 is used to characterize foam polydispersity. The foam energy, total cell edge length, and average number of faces per cell all decrease with increasing polydispersity. Pentagonal faces are the most common in monodisperse foam but quadrilaterals take over in highly polydisperse structures.

  1. CONTOURING RANDOMLY SPACED DATA

    NASA Technical Reports Server (NTRS)

    Hamm, R. W.

    1994-01-01

    This program prepares contour plots of three-dimensional randomly spaced data. The contouring techniques use a triangulation procedure developed by Dr. C. L. Lawson of the Jet Propulsion Laboratory which allows the contouring of randomly spaced input data without first fitting the data into a rectangular grid. The program also allows contour points to be fitted with a smooth curve using an interpolating spline under tension. The input data points to be contoured are read from a magnetic tape or disk file with one record for each data point. Each record contains the X and Y coordinates, the value to be contoured, and an alternate contour value (if applicable). The contour data is then partitioned by the program to reduce core storage requirements. Output consists of the contour plots and user messages. Several output options are available to the user, such as: controlling which value in the data record is to be contoured, whether contours are drawn by polygonal lines or by a spline under tension (smooth curves), and controlling the contour level labels, which may be suppressed if desired. The program can handle up to 56,000 data points and provide up to 20 contour intervals for multiple parameters. This program was written in FORTRAN IV for implementation on a CDC 6600 computer using CALCOMP plotting capabilities. The field length required is dependent upon the number of data points to be contoured. The program requires 42K octal storage locations plus the larger of: 24 times the maximum number of points in each data partition (defaults to a maximum of 1000 data points in each partition with 20 percent overlap) or 2K plus four times the total number of points to be plotted. This program was developed in 1975.

  2. How random are random numbers generated using photons?

    NASA Astrophysics Data System (ADS)

    Solis, Aldo; Angulo Martínez, Alí M.; Ramírez Alarcón, Roberto; Cruz Ramírez, Hector; U'Ren, Alfred B.; Hirsch, Jorge G.

    2015-06-01

    Randomness is fundamental in quantum theory, with many philosophical and practical implications. In this paper we discuss the concept of algorithmic randomness, which provides a quantitative method to assess the Borel normality of a given sequence of numbers, a necessary condition for it to be considered random. We use Borel normality as a tool to investigate the randomness of ten sequences of bits generated from the differences between detection times of photon pairs generated by spontaneous parametric downconversion. These sequences are shown to fulfil the randomness criteria without difficulties. As deviations from Borel normality for photon-generated random number sequences have been reported in previous work, a strategy to understand these diverging findings is outlined.
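
    A simplified version of the Borel normality criterion is easy to compute: for each block length m, every m-bit pattern among the non-overlapping blocks should occur with frequency close to 2^-m. The Python sketch below uses Calude's sqrt(log2(n)/n) tolerance; the exact procedure in the paper may differ in details.

      import numpy as np

      def borel_normal(bits, max_m=4):
          """Check that all m-bit patterns (m = 1..max_m) occur with
          frequency within sqrt(log2(n)/n) of 2^-m."""
          bits = np.asarray(bits, dtype=int)
          n = len(bits)
          tol = np.sqrt(np.log2(n) / n)
          for m in range(1, max_m + 1):
              blocks = bits[: n - n % m].reshape(-1, m)
              codes = blocks @ (1 << np.arange(m - 1, -1, -1))  # pattern -> integer
              freqs = np.bincount(codes, minlength=2**m) / len(codes)
              if np.any(np.abs(freqs - 2.0 ** -m) > tol):
                  return False
          return True

      rng = np.random.default_rng(0)
      print(borel_normal(rng.integers(0, 2, 100_000)))   # True for well-behaved bits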

  3. Quantum random walk polynomial and quantum random walk measure

    NASA Astrophysics Data System (ADS)

    Kang, Yuanbao; Wang, Caishi

    2014-05-01

    In this paper, we introduce quantum random walk polynomials (QRWPs), defined as polynomial sequences orthogonal with respect to a quantum random walk measure (QRWM) on the real line, with the parameters appearing in their recurrence relations satisfying the corresponding defining conditions. We first obtain some results on QRWPs and QRWMs, in which case the correspondence between measures and orthogonal polynomial sequences is one-to-one. It is shown that any measure with respect to which a quantum random walk polynomial sequence is orthogonal is a quantum random walk measure. We next collect some properties of QRWMs; moreover, we extend Karlin and McGregor's representation formula for the transition probabilities of a quantum random walk (QRW) in the interacting Fock space, a result parallel to the CGMV method. Using these findings, we finally obtain some applications of QRWMs that are of interest in the study of quantum random walks, highlighting the role played by QRWPs and QRWMs.

  4. Stratified Random Design.

    1988-02-19

    Version 00 STRADE generates matrices of experimental designs based on the Latin Hypercube Sampling technique, which can be applied to any kind of sensitivity analysis or system identification problem involving a large number of input variables. The program was developed for use in reactor safety probabilistic analyses.
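
    The Latin Hypercube idea fits in a few lines: cut each axis into n equal strata, hit every stratum exactly once per dimension, and jitter within strata. A minimal Python sketch of the technique (illustrative, not STRADE's own implementation):

      import numpy as np

      def latin_hypercube(n_samples, n_dims, seed=None):
          """Latin hypercube sample on the unit hypercube [0,1]^d."""
          rng = np.random.default_rng(seed)
          u = rng.uniform(size=(n_samples, n_dims))     # jitter inside strata
          design = np.empty_like(u)
          for j in range(n_dims):
              strata = rng.permutation(n_samples)       # one stratum per sample
              design[:, j] = (strata + u[:, j]) / n_samples
          return design

      X = latin_hypercube(10, 3, seed=0)   # a 10-run design in 3 input variables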

  5. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

    numbers generated by quantum real number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica. Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate them using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or

  6. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.

  7. Differential Cost Avoidance and Successful Criminal Careers: Random or Rational?

    ERIC Educational Resources Information Center

    Kazemian, Lila; Le Blanc, Marc

    2007-01-01

    Using a sample of adjudicated French Canadian males from the Montreal Two Samples Longitudinal Study, this article investigates individual and social characteristics associated with differential cost avoidance. The main objective of this study is to determine whether such traits are randomly distributed across differential degrees of cost…

  8. Random-phase metasurfaces at optical wavelengths

    NASA Astrophysics Data System (ADS)

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.

    2016-06-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector.

  9. Random-phase metasurfaces at optical wavelengths

    PubMed Central

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.

    2016-01-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635

  10. Random-phase metasurfaces at optical wavelengths.

    PubMed

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P; Bozhevolnyi, Sergey I

    2016-01-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635
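
    The statistical description can be illustrated numerically: a plane wave hitting an array of scatterers with uncorrelated uniform phases produces, in the far field, fully developed speckle whose normalized intensity follows negative-exponential statistics. A Python sketch (the array size and unit-free setup are illustrative):

      import numpy as np

      rng = np.random.default_rng(5)
      # Unit-amplitude scatterers with independent random phases.
      aperture = np.exp(1j * rng.uniform(0, 2 * np.pi, (256, 256)))
      far_field = np.fft.fftshift(np.fft.fft2(aperture))   # far field via FFT
      intensity = np.abs(far_field) ** 2
      I = intensity / intensity.mean()
      # Fully developed speckle: P(I) = exp(-I), so mean = variance = 1.
      print(I.mean(), I.var())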

  11. Topics in Randomized Algorithms for Numerical Linear Algebra

    NASA Astrophysics Data System (ADS)

    Holodnak, John T.

    In this dissertation, we present results for three topics in randomized algorithms. Each topic is related to random sampling. We begin by studying a randomized algorithm for matrix multiplication that randomly samples outer products. We show that if a set of deterministic conditions is satisfied, then the algorithm can compute the exact product. In addition, we show probabilistic bounds on the two-norm relative error of the algorithm. In the second part, we discuss the sensitivity of leverage scores to perturbations. Leverage scores are scalar quantities that give a notion of importance to the rows of a matrix. They are used as sampling probabilities in many randomized algorithms. We show bounds on the difference between the leverage scores of a matrix and a perturbation of the matrix. In the last part, we approximate functions over an active subspace of parameters. To identify the active subspace, we apply an algorithm that relies on a random sampling scheme. We show bounds on the accuracy of the active subspace identification algorithm and construct an approximation to a function with 3556 parameters using a ten-dimensional active subspace.
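
    The first topic can be made concrete: A @ B equals the sum over k of the outer products A[:, k] B[k, :], so one can sample a few outer products with probabilities proportional to the norm products and rescale for unbiasedness. A Python sketch of that well-known estimator (the dissertation's deterministic conditions and error bounds are more refined than this toy):

      import numpy as np

      def sampled_matmul(A, B, c, seed=None):
          """Approximate A @ B from c outer products sampled with
          probabilities p_k proportional to ||A[:, k]|| * ||B[k, :]||,
          each rescaled by 1/(c * p_k) so the estimate is unbiased."""
          rng = np.random.default_rng(seed)
          norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
          p = norms / norms.sum()
          ks = rng.choice(A.shape[1], size=c, p=p)
          return sum(np.outer(A[:, k], B[k, :]) / (c * p[k]) for k in ks)

      rng = np.random.default_rng(0)
      A, B = rng.standard_normal((50, 200)), rng.standard_normal((200, 30))
      approx = sampled_matmul(A, B, c=100, seed=1)
      err = np.linalg.norm(approx - A @ B, 2) / np.linalg.norm(A @ B, 2)
      print(err)   # two-norm relative error of the sampled product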

  12. The Variable Responding Scale for Detection of Random Responding on the Multidimensional Pain Inventory.

    ERIC Educational Resources Information Center

    Bruehl, Stephen; Lofland, Kenneth R.; Carlson, Charles R.; Sherman, Jeffrey J.

    1998-01-01

    Developed a scale for detecting random responses on the Multidimensional Pain Inventory using 95 undergraduates, 34 chronic pain patients, and 115 health-care professionals. A variable response scale was developed that discriminated accurately between valid and random profiles in two cross-validation samples, predicting random profiles with 90%…

  13. Nonvolatile random access memory

    NASA Technical Reports Server (NTRS)

    Wu, Jiin-Chuan (Inventor); Stadler, Henry L. (Inventor); Katti, Romney R. (Inventor)

    1994-01-01

    A nonvolatile magnetic random access memory can be achieved by an array of magnet-Hall effect (M-H) elements. The storage function is realized with a rectangular thin-film ferromagnetic material having an in-plane, uniaxial anisotropy and in-plane bipolar remanent magnetization states. The thin-film magnetic element is magnetized by a local applied field, whose direction is used to form either a 0 or 1 state. The element remains in the 0 or 1 state until a switching field is applied to change its state. The stored information is detected by a Hall-effect sensor which senses the fringing field from the magnetic storage element. The circuit design for addressing each cell includes transistor switches for providing a current of selected polarity to store a binary digit through a separate conductor overlying the magnetic element of the cell. To read out a stored binary digit, transistor switches are employed to provide a current through a row of Hall-effect sensors connected in series and enabling a differential voltage amplifier connected to all Hall-effect sensors of a column in series. To avoid read-out voltage errors due to shunt currents through resistive loads of the Hall-effect sensors of other cells in the same column, at least one transistor switch is provided between every pair of adjacent cells in every row which are not turned on except in the row of the selected cell.

  14. Spatially embedded random networks.

    PubMed

    Barnett, L; Di Paolo, E; Bullock, S

    2007-11-01

    Many real-world networks analyzed in modern network theory have a natural spatial element; e.g., the Internet, social networks, neural networks, etc. Yet, aside from a comparatively small number of somewhat specialized and domain-specific studies, the spatial element is mostly ignored and, in particular, its relation to network structure disregarded. In this paper we introduce a model framework to analyze the mediation of network structure by spatial embedding; specifically, we model connectivity as dependent on the distance between network nodes. Our spatially embedded random networks construction is not primarily intended as an accurate model of any specific class of real-world networks, but rather to gain intuition for the effects of spatial embedding on network structure; nevertheless we are able to demonstrate, in a quite general setting, some constraints of spatial embedding on connectivity such as the effects of spatial symmetry, conditions for scale free degree distributions and the existence of small-world spatial networks. We also derive some standard structural statistics for spatially embedded networks and illustrate the application of our model framework with concrete examples. PMID:18233726

  15. Spatially embedded random networks

    NASA Astrophysics Data System (ADS)

    Barnett, L.; di Paolo, E.; Bullock, S.

    2007-11-01

    Many real-world networks analyzed in modern network theory have a natural spatial element; e.g., the Internet, social networks, neural networks, etc. Yet, aside from a comparatively small number of somewhat specialized and domain-specific studies, the spatial element is mostly ignored and, in particular, its relation to network structure disregarded. In this paper we introduce a model framework to analyze the mediation of network structure by spatial embedding; specifically, we model connectivity as dependent on the distance between network nodes. Our spatially embedded random networks construction is not primarily intended as an accurate model of any specific class of real-world networks, but rather to gain intuition for the effects of spatial embedding on network structure; nevertheless we are able to demonstrate, in a quite general setting, some constraints of spatial embedding on connectivity such as the effects of spatial symmetry, conditions for scale free degree distributions and the existence of small-world spatial networks. We also derive some standard structural statistics for spatially embedded networks and illustrate the application of our model framework with concrete examples.
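
    A toy generator makes the construction concrete: scatter nodes in the unit square and connect each pair with a probability that decays with distance. The exponential kernel below is one illustrative choice; the papers treat the distance dependence in much greater generality.

      import numpy as np

      def spatial_random_network(n, beta=5.0, seed=None):
          """Spatially embedded random network: connection probability
          exp(-beta * distance) for nodes placed uniformly in [0,1]^2."""
          rng = np.random.default_rng(seed)
          pos = rng.random((n, 2))
          diff = pos[:, None, :] - pos[None, :, :]
          dist = np.sqrt((diff ** 2).sum(axis=-1))
          links = rng.random((n, n)) < np.exp(-beta * dist)
          links = np.triu(links, 1)             # keep each pair once
          return links | links.T, pos

      adj, coords = spatial_random_network(200, beta=5.0, seed=1)
      print(adj.sum() // 2, "edges")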

  16. Does Random Dispersion Help Survival?

    NASA Astrophysics Data System (ADS)

    Schinazi, Rinaldo B.

    2015-04-01

    Many species live in colonies that prosper for a while and then collapse. After the collapse the colony survivors disperse randomly and found new colonies that may or may not make it depending on the new environment they find. We use birth and death chains in random environments to model such a population and to argue that random dispersion is a superior strategy for survival.
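
    A minimal simulation in this spirit (the parameters and survival criterion are illustrative assumptions, not Schinazi's exact model): each colony draws its environment, here a birth probability p, once at founding, and its population performs a birth-and-death chain that dies on hitting zero.

    import random

    random.seed(1)

    def colony_survives(steps=10_000):
        """One colony in a random environment: population does +1 with
        probability p (drawn once for the colony) and -1 otherwise;
        surviving the whole run is treated as long-term survival."""
        p = random.random()          # random environment for this colony
        n = 1
        for _ in range(steps):
            n += 1 if random.random() < p else -1
            if n == 0:
                return False
        return True

    survival = sum(colony_survives() for _ in range(1000)) / 1000
    print("fraction of colonies surviving:", survival)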

  17. Leadership statistics in random structures

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Krapivsky, P. L.

    2004-01-01

    The largest component ("the leader") in evolving random structures often exhibits universal statistical properties. This phenomenon is demonstrated analytically for two ubiquitous structures: random trees and random graphs. In both cases, lead changes are rare as the average number of lead changes increases quadratically with logarithm of the system size. As a function of time, the number of lead changes is self-similar. Additionally, the probability that no lead change ever occurs decays exponentially with the average number of lead changes.
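
    The lead-change statistic is straightforward to measure numerically. A minimal sketch, assuming an Erdős–Rényi growth process in which uniformly random edges are added one at a time, tracks the largest component ("the leader") with union-find:

    import random

    random.seed(2)

    def lead_changes(n=2000, m=4000):
        """Count changes of the largest component's identity while m
        random edges are added to n initially isolated nodes."""
        parent = list(range(n))
        size = [1] * n

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        leader, changes = 0, 0
        for _ in range(m):
            a, b = find(random.randrange(n)), find(random.randrange(n))
            if a != b:
                if size[a] < size[b]:
                    a, b = b, a
                parent[b] = a                   # merge smaller into larger
                size[a] += size[b]
                if size[a] > size[find(leader)] and a != find(leader):
                    changes += 1                # a new component takes the lead
                    leader = a
        return changes

    print("lead changes:", lead_changes())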

  18. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
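
    NumPy provides equivalents of RANVAR's seven variates. A sketch follows; the original BASIC program's exact parametrizations are not stated in the record, so the parameters below are illustrative (Pascal is the negative binomial distribution):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 5

    print(rng.uniform(0.0, 1.0, n))                  # uniform on [0, 1)
    print(rng.exponential(scale=2.0, size=n))        # exponential, mean 2
    print(rng.normal(loc=0.0, scale=1.0, size=n))    # standard normal
    print(rng.binomial(n=10, p=0.3, size=n))         # binomial
    print(rng.poisson(lam=4.0, size=n))              # Poisson
    print(rng.negative_binomial(n=3, p=0.4, size=n)) # Pascal (negative binomial)
    print(rng.triangular(left=0, mode=2, right=5, size=n))  # triangular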

  19. Immunogenicity and safety of a cell culture-based quadrivalent influenza vaccine in adults: A Phase III, double-blind, multicenter, randomized, non-inferiority study

    PubMed Central

    Bart, Stephan; Cannon, Kevin; Herrington, Darrell; Mills, Richard; Forleo-Neto, Eduardo; Lindert, Kelly; Abdul Mateen, Ahmed

    2016-01-01

    Quadrivalent influenza vaccines (QIVs), which include both B lineage strains, are expected to provide broader protection than trivalent influenza vaccines (TIVs). The non-inferiority, immunogenicity, and safety of a cell culture-based investigational QIVc and 2 TIVs (TIV1c, TIV2c) were evaluated in adults (≥18 y) in this Phase III, double-blind, multicenter study. A total of 2680 age-stratified subjects were randomized (2:1:1) to receive 1 dose of QIVc (n = 1335), TIV1c (n = 676), or TIV2c (n = 669). TIV1c (B/Yamagata) and TIV2c (B/Victoria) differed only in B strain lineage. The primary objective was to demonstrate non-inferiority of the hemagglutinin-inhibition antibody responses of QIVc against TIVc, 22 d post-vaccination. Secondary objectives included the evaluation of immunogenicity of QIVc and TIVc in younger (≥18 – <65 y) and older (≥65 y) adults. Hemagglutinin inhibition assays were performed at days 1 and 22. Solicited local and systemic adverse events (AEs) were monitored for 7 d post-vaccination, and unsolicited AEs and serious AEs until day 181. QIVc met the non-inferiority criteria for all 4 vaccine strains and demonstrated superiority for both influenza B strains over the unmatched B strain included in TIV1c and TIV2c, when geometric mean titers and seroconversion rates with TIVc were compared at day 22. Between 48% and 52% of subjects experienced ≥1 solicited AE, the most common being injection-site pain and headache. Serious AEs were reported by ≤1% of subjects; none were vaccine-related. The results indicate that QIVc is immunogenic and well tolerated in both younger and older adults. The immunogenicity and safety profiles of QIVc and TIVc were comparable at all ages evaluated. PMID:27322354
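
    Age-stratified 2:1:1 randomization of the kind reported here is commonly implemented with permuted blocks within each stratum; the sketch below assumes that scheme (the trial's actual allocation procedure is not described in the abstract, so treat this as a generic illustration):

    import random

    random.seed(3)

    def stratified_randomize(subjects, ratio=("QIVc", "QIVc", "TIV1c", "TIV2c")):
        """Age-stratified permuted-block randomization in a 2:1:1 ratio:
        subjects are split into younger/older strata, then assigned arm
        labels from a freshly shuffled block of 4 within each stratum."""
        strata = {"younger": [], "older": []}
        for subj, age in subjects:
            strata["older" if age >= 65 else "younger"].append(subj)
        assignment = {}
        for members in strata.values():
            for i in range(0, len(members), len(ratio)):
                block = list(ratio)
                random.shuffle(block)        # one permuted block per 4 subjects
                for subj, arm in zip(members[i:i + len(ratio)], block):
                    assignment[subj] = arm
        return assignment

    subjects = [(f"S{i:03d}", random.randint(18, 90)) for i in range(12)]
    print(stratified_randomize(subjects))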

  20. Random diffusion model.

    PubMed

    Mazenko, Gene F

    2008-09-01

    We study the random diffusion model. This is a continuum model for a conserved scalar density field φ driven by diffusive dynamics. The interesting feature of the dynamics is that the bare diffusion coefficient D is density dependent. In the simplest case, D = D̄ + D₁δφ, where D̄ is the constant average diffusion coefficient. In the case where the driving effective Hamiltonian is quadratic, the model can be treated using perturbation theory in terms of the single nonlinear coupling D₁. We develop perturbation theory to fourth order in D₁. There are two ways of analyzing this perturbation theory. In one approach, developed by Kawasaki, at one-loop order one finds mode-coupling theory with an ergodic-nonergodic transition. An alternative, more direct interpretation at one-loop order leads to a slowing down as the nonlinear coupling increases. Eventually one hits a critical coupling where the time decay becomes algebraic. Near this critical coupling a weak peak develops at a wave number well above the peak at q = 0 associated with the conservation law. The width of this peak in Fourier space decreases with time and can be identified with a characteristic kinetic length which grows as a power law in time. For stronger coupling the system becomes metastable and then unstable. At two-loop order it is shown that the ergodic-nonergodic transition is not supported. It is demonstrated that the critical properties of the direct approach survive when going to higher order in perturbation theory.
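
    The model itself is simple to integrate numerically. A naive one-dimensional sketch (grid, time step, and couplings are illustrative choices, not the paper's), written in conservation-law form so that the total density is conserved exactly:

    import numpy as np

    rng = np.random.default_rng(4)

    Dbar, D1, dx, dt = 1.0, 0.5, 1.0, 0.1
    phi = rng.normal(0.0, 0.1, 256)               # initial density fluctuations
    total0 = phi.sum()

    for _ in range(1000):
        D = Dbar + D1 * (phi - phi.mean())        # density-dependent coefficient
        Dlink = 0.5 * (D + np.roll(D, -1))        # coefficient on each bond
        flux = -Dlink * (np.roll(phi, -1) - phi) / dx
        phi -= dt * (flux - np.roll(flux, 1)) / dx  # conservation-law update

    print("total density conserved:", np.isclose(phi.sum(), total0))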

  1. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (mouse macrophage cell-line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed by the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  2. Bell experiments with random destination sources

    SciTech Connect

    Sciarrino, Fabio; Mataloni, Paolo; Vallone, Giuseppe; Cabello, Adan

    2011-03-15

    It is generally assumed that sources randomly sending two particles to one or two different observers, random destination sources (RDSs), cannot be used for genuine quantum nonlocality tests because of the postselection loophole. We demonstrate that Bell experiments not affected by the postselection loophole may be performed with (i) an RDS and local postselection using perfect detectors, (ii) an RDS, local postselection, and fair sampling assumption with any detection efficiency, and (iii) an RDS and a threshold detection efficiency required to avoid the detection loophole. These results allow the adoption of RDS setups which are simpler and more efficient for long-distance free-space Bell tests, and extend the range of physical systems which can be used for loophole-free Bell tests.

  3. Globally, unrelated protein sequences appear random

    PubMed Central

    Lavelle, Daniel T.; Pearson, William R.

    2010-01-01

    Motivation: To test whether protein folding constraints and secondary structure sequence preferences significantly reduce the space of amino acid words in proteins, we compared the frequencies of four- and five-amino acid word clumps (independent words) in proteins to the frequencies predicted by four random sequence models. Results: While the human proteome has many overrepresented word clumps, these words come from large protein families with biased compositions (e.g. Zn-fingers). In contrast, in a non-redundant sample of Pfam-AB, only 1% of four-amino acid word clumps (4.7% of 5mer words) are 2-fold overrepresented compared with our simplest random model [MC(0)], and 0.1% (4mers) to 0.5% (5mers) are 2-fold overrepresented compared with a window-shuffled random model. Using a false discovery rate q-value analysis, the number of exceptional four- or five-letter words in real proteins is similar to the number found when comparing words from one random model to another. Consensus overrepresented words are not enriched in conserved regions of proteins, but four-letter words are enriched 1.18- to 1.56-fold in α-helical secondary structures (but not β-strands). Five-residue consensus exceptional words are enriched for α-helix 1.43- to 1.61-fold. Protein word preferences in regular secondary structure do not appear to significantly restrict the use of sequence words in unrelated proteins, although the consensus exceptional words have a secondary structure bias for α-helix. Globally, words in protein sequences appear to be under very few constraints; for the most part, they appear to be random. Contact: wrp@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19948773
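
    The underlying counting exercise is easy to reproduce in miniature. The sketch below uses synthetic sequences only (real proteomes and the paper's four random models are not reproduced here); it counts non-overlapping four-letter word clumps and compares them against a shuffled null, which is essentially the paper's comparison of one random model to another:

    from collections import Counter
    import random

    random.seed(5)

    def word_clumps(seq, k=4):
        """Count non-overlapping k-letter 'word clumps' (independent words)."""
        return Counter(seq[i:i + k] for i in range(0, len(seq) - k + 1, k))

    amino = "ACDEFGHIKLMNPQRSTVWY"
    protein = "".join(random.choice(amino) for _ in range(100_000))
    shuffled = "".join(random.sample(protein, len(protein)))

    real, null = word_clumps(protein), word_clumps(shuffled)
    over = {w: c for w, c in real.items() if c >= 2 * max(null[w], 1)}
    print(f"{len(over)} of {len(real)} 4-mer words are 2-fold overrepresented")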

  4. Subrandom methods for multidimensional nonuniform sampling.

    PubMed

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.

  5. Subrandom methods for multidimensional nonuniform sampling

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.
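
    A subrandom schedule of this kind can be built from an additive-recurrence (golden ratio) sequence, one common subrandom construction; the sketch below is deterministic, hence seed-independent, and omits the grid weighting used in practice:

    import numpy as np

    def subrandom_schedule(grid_size=256, n_points=64):
        """Pick n_points distinct grid indices from a golden-ratio
        additive-recurrence sequence; no random seed is involved."""
        alpha = (np.sqrt(5) - 1) / 2            # fractional part of the golden ratio
        u = np.mod(np.arange(1, 4 * n_points) * alpha, 1.0)
        seen, schedule = set(), []
        for i in (u * grid_size).astype(int):
            if i not in seen:                   # dedupe, preserving sequence order
                seen.add(i)
                schedule.append(int(i))
            if len(schedule) == n_points:
                break
        return sorted(schedule)

    print(subrandom_schedule())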

  6. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    PubMed

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
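
    Three of the probability methods listed above, in a minimal sketch (the population and strata are synthetic stand-ins):

    import numpy as np

    rng = np.random.default_rng(6)
    population = np.arange(1000)        # element IDs
    n = 50

    # Simple random sampling: every element has an equal, independent chance.
    srs = rng.choice(population, size=n, replace=False)

    # Systematic sampling: a random start, then every k-th element.
    k = len(population) // n
    start = rng.integers(k)
    systematic = population[start::k][:n]

    # Stratified sampling: simple random samples within each stratum,
    # here two strata of unequal size sampled proportionally.
    strata = {"A": population[:600], "B": population[600:]}
    stratified = np.concatenate([
        rng.choice(members, size=int(n * len(members) / len(population)),
                   replace=False)
        for members in strata.values()
    ])
    print(len(srs), len(systematic), len(stratified))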

  7. Ticks of a Random clock

    NASA Astrophysics Data System (ADS)

    Jung, P.; Talkner, P.

    2010-09-01

    A simple way to convert a purely random sequence of events into a signal with a strong periodic component is proposed. The signal consists of those instants of time at which the length of the random sequence exceeds an integer multiple of a given number. The larger this number the more pronounced the periodic behavior becomes.
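
    The construction is easily simulated: events occur at purely random (Poisson) times, and a tick is emitted whenever the running event count crosses another multiple of M. The larger M, the smaller the relative spread of the tick intervals (it shrinks like 1/sqrt(M)); the rate and M below are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(7)

    rate, M = 100.0, 50
    event_times = np.cumsum(rng.exponential(1.0 / rate, size=5000))
    ticks = event_times[M - 1::M]          # a tick at every M-th event

    intervals = np.diff(ticks)
    print(f"tick interval: mean {intervals.mean():.3f}, "
          f"std {intervals.std():.3f}; M/rate = {M / rate:.3f}")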

  8. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  9. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

    In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298

  10. Randomness versus Nonlocality and Entanglement

    NASA Astrophysics Data System (ADS)

    Acín, Antonio; Massar, Serge; Pironio, Stefano

    2012-03-01

    The outcomes obtained in Bell tests involving two-outcome measurements on two subsystems can, in principle, generate up to 2 bits of randomness. However, the maximal violation of the Clauser-Horne-Shimony-Holt inequality guarantees the generation of only 1.23 bits of randomness. We prove here that quantum correlations with arbitrarily little nonlocality and states with arbitrarily little entanglement can be used to certify that close to the maximum of 2 bits of randomness are produced. Our results show that nonlocality, entanglement, and randomness are inequivalent quantities. They also imply that device-independent quantum key distribution with an optimal key generation rate is possible by using almost-local correlations and that device-independent randomness generation with an optimal rate is possible with almost-local correlations and with almost-unentangled states.

  11. Sampling of illicit drugs for quantitative analysis--part III: sampling plans and sample preparations.

    PubMed

    Csesztregi, T; Bovens, M; Dujourdy, L; Franc, A; Nagy, J

    2014-08-01

    The findings in this paper are based on the results of our drug homogeneity studies and particle size investigations. Using that information, a general sampling plan (depicted in the form of a flow-chart) was devised that could be applied to the quantitative instrumental analysis of the most common illicit drugs: namely heroin, cocaine, amphetamine, cannabis resin, MDMA tablets and herbal cannabis in 'bud' form (type I). Other more heterogeneous forms of cannabis (type II) were found to require alternative, more traditional sampling methods. A table was constructed which shows the sampling uncertainty expected when a particular number of random increments are taken and combined to form a single primary sample. It also includes a recommended increment size; which is 1 g for powdered drugs and cannabis resin, 1 tablet for MDMA and 1 bud for herbal cannabis in bud form (type I). By referring to that table, individual laboratories can ensure that the sampling uncertainty for a particular drug seizure can be minimised, such that it lies in the same region as their analytical uncertainty for that drug. The table shows that assuming a laboratory wishes to quantitatively analyse a seizure of powdered drug or cannabis resin with a 'typical' heterogeneity, a primary sample of 15×1 g increments is generally appropriate. The appropriate primary sample for MDMA tablets is 20 tablets, while for herbal cannabis (in bud form) 50 buds were found to be appropriate. Our study also showed that, for a suitably homogenised primary sample of the most common powdered drugs, an analytical sample size of between 20 and 35 mg was appropriate and for herbal cannabis the appropriate amount was 200 mg. The need to ensure that the results from duplicate or multiple incremental sampling were compared, to demonstrate whether or not a particular seized material has a 'typical' heterogeneity and that the sampling procedure applied has resulted in a 'correct sample', was highlighted and the setting
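
    The logic behind combining increments can be checked with a toy simulation (the heterogeneity below is an arbitrary assumption, not the paper's measured data): the relative sampling uncertainty of the combined primary sample falls roughly as one over the square root of the number of increments.

    import numpy as np

    rng = np.random.default_rng(8)

    purity = rng.normal(0.30, 0.06, size=5000)   # purity of each 1 g increment

    for n in (1, 5, 15, 30):
        means = [rng.choice(purity, n, replace=False).mean()
                 for _ in range(2000)]
        rsd = 100 * np.std(means) / np.mean(means)
        print(f"{n:2d} x 1 g increments -> sampling RSD ~ {rsd:.1f}%")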

  13. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  14. On the stability of robotic systems with random communication rates

    NASA Technical Reports Server (NTRS)

    Kobayashi, H.; Yun, X.; Paul, R. P.

    1989-01-01

    Control problems of sampled data systems which are subject to random sample rate variations and delays are studied. Due to the rapid growth of the use of computers, more and more systems are controlled digitally. Complex systems such as space telerobotic systems require the integration of a number of subsystems at different hierarchical levels. While many subsystems may run on a single processor, some subsystems require their own processor or processors. The subsystems are integrated into functioning systems through communications. Communications between processes sharing a single processor are also subject to random delays due to memory management and interrupt latency. Communications between processors involve random delays due to network access and to data collisions. Furthermore, all control processes involve delays due to causal factors in measuring devices and to signal processing. Traditionally, sampling rates are chosen to meet the worst-case communication delay. Such a strategy is wasteful: the processors are then idle a great proportion of the time; sample rates are not as high as possible, resulting in poor performance or in the over-specification of control processors; and there remains the possibility of missing data no matter how low the sample rate is picked. Asymptotic stability with probability one for randomly sampled multi-dimensional linear systems is studied. A sufficient condition for stability is obtained. This condition is simple enough to be applied to practical systems. A design procedure is also shown.

  15. A program for contouring randomly spaced data

    NASA Technical Reports Server (NTRS)

    Hamm, R. W.; Kibler, J. F.; Morris, W. D.

    1975-01-01

    A description is given of a digital computer program which prepares contour plots of three dimensional data. The contouring technique uses a triangulation procedure. As presently configured, the program can accept up to 56,000 randomly spaced data points, although the required computer resources may be prohibitive. However, with relatively minor internal modifications, the program can handle essentially unlimited amounts of data. Up to 20 contouring intervals can be selected and contoured with either polygonal lines or smooth curves. Sample cases are illustrated. A general description of the main program and primary level subroutines is included to permit simple modifications of the program.
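
    The same triangulation-based approach is available off the shelf; the sketch below uses matplotlib's Delaunay triangulation as a stand-in for the program's own contouring procedure, applied to synthetic randomly spaced data:

    import numpy as np
    import matplotlib.pyplot as plt
    import matplotlib.tri as mtri

    rng = np.random.default_rng(9)

    x, y = rng.random(500), rng.random(500)      # randomly spaced points
    z = np.sin(4 * x) * np.cos(4 * y)            # synthetic surface values

    tri = mtri.Triangulation(x, y)               # Delaunay triangulation
    plt.tricontour(tri, z, levels=10)            # polygonal contour lines
    plt.tricontourf(tri, z, levels=10, alpha=0.3)
    plt.title("Contours from randomly spaced data")
    plt.show()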

  16. Spin models and boson sampling

    NASA Astrophysics Data System (ADS)

    Garcia Ripoll, Juan Jose; Peropadre, Borja; Aspuru-Guzik, Alan

    Aaronson & Arkhipov showed that predicting the measurement statistics of random linear optics circuits (i.e. boson sampling) is a classically hard problem for highly non-classical input states. A typical boson-sampling circuit requires N single photon emitters and M photodetectors, and it is a natural idea to rely on few-level systems for both tasks. Indeed, we show that 2M two-level emitters at the input and output ports of a general M-port interferometer interact via an XY-model with collective dissipation and a large number of dark states that could be used for quantum information storage. More important is the fact that, when we neglect dissipation, the resulting long-range XY spin-spin interaction is equivalent to boson sampling under the same conditions that make boson sampling efficient. This allows efficient implementations of boson sampling using quantum simulators & quantum computers. We acknowledge support from Spanish Mineco Project FIS2012-33022, CAM Research Network QUITEMAD+ and EU FP7 FET-Open Project PROMISCE.

  17. Replica trick for rare samples

    NASA Astrophysics Data System (ADS)

    Rizzo, Tommaso

    2014-05-01

    In the context of disordered systems with quenched Hamiltonians I address the problem of characterizing rare samples where the thermal average of a specific observable has a value different from the typical one. These rare samples can be selected through a variation of the replica trick which amounts to replicating the system and dividing the replicas intwo two groups containing, respectively, M and -M replicas. Replicas in the first (second) group experience a positive (negative) small field O (1/M) conjugate to the observable considered and the M →∞ limit is to be taken in the end. Applications to the random-field Ising model and to the Sherrington-Kirkpatrick model are discussed.

  18. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  19. Suicidality in a Sample of Arctic Households

    ERIC Educational Resources Information Center

    Haggarty, John M.; Cernovsky, Zack; Bedard, Michel; Merskey, Harold

    2008-01-01

    We investigated the association of suicidal ideation and behavior with depression, anxiety, and alcohol abuse in a Canadian Arctic Inuit community. Inuit (N = 111) from a random sample of households completed assessments of anxiety and depression, alcohol abuse, and suicidality. High rates of suicidal ideation within the past week (43.6%), and…

  20. A Mars Sample Return Sample Handling System

    NASA Technical Reports Server (NTRS)

    Wilson, David; Stroker, Carol

    2013-01-01

    We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable for the NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and, human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluidaltered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups each with a sample from a specific location. We considered two sample cups sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and, (2) a larger cup for 100 mm rock cores [4] and pebble sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping them frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference fitted heat activated memory

  1. Random sequential adsorption on fractals.

    PubMed

    Ciesla, Michal; Barbasz, Jakub

    2012-07-28

    Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski's triangle and carpet-like fractals (1 < d < 2), and on a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on measurement of fundamental properties of the coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as the RSA kinetics. The results allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, the simulations show that, in general, most of the known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions.
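
    The RSA algorithm itself is compact. A sketch for the simplest collector, the flat unit square (the paper's fractal collectors would replace it); boundary effects are ignored and the stopping rule, a fixed number of consecutive rejected trials, is a crude illustrative choice:

    import numpy as np

    rng = np.random.default_rng(10)

    def rsa_disks(radius=0.03, max_failures=5_000):
        """Random sequential adsorption of equal disks: trial centers are
        accepted only if they do not overlap previously adsorbed disks."""
        centers, failures = [], 0
        while failures < max_failures:
            trial = rng.random(2)
            if all(np.hypot(*(trial - c)) >= 2 * radius for c in centers):
                centers.append(trial)     # adsorbed irreversibly
                failures = 0
            else:
                failures += 1             # rejected: overlap
        return np.array(centers)

    disks = rsa_disks()
    print(f"{len(disks)} disks adsorbed, coverage ratio ~ "
          f"{len(disks) * np.pi * 0.03**2:.3f}")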

  2. Effect of noise correlations on randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.

    2016-02-01

    Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.

  3. Variational Infinite Hidden Conditional Random Fields.

    PubMed

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja; Ghahramani, Zoubin

    2015-09-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An Infinite hidden conditional random field is a hidden conditional random field with a countably infinite number of hidden states, which rids us not only of the necessity to specify a priori a fixed number of hidden states available but also of the problem of overfitting. Markov chain Monte Carlo (MCMC) sampling algorithms are often employed for inference in such models. However, convergence of such algorithms is rather difficult to verify, and as the complexity of the task at hand increases the computational cost of such algorithms often becomes prohibitive. These limitations can be overcome by variational techniques. In this paper, we present a generalized framework for infinite HCRF models, and a novel variational inference approach on a model based on coupled Dirichlet Process Mixtures, the HCRF-DPM. We show that the variational HCRF-DPM is able to converge to a correct number of represented hidden states, and performs as well as the best parametric HCRFs-chosen via cross-validation-for the difficult tasks of recognizing instances of agreement, disagreement, and pain in audiovisual sequences. PMID:26353136

  4. Exploring number space by random digit generation.

    PubMed

    Loetscher, Tobias; Brugger, Peter

    2007-07-01

    There is some evidence that human subjects preferentially select small numbers when asked to sample numbers from large intervals "at random". A retrospective analysis of single digit frequencies in 16 independent experiments with the Mental Dice Task (generation of digits 1-6 during 1 min) confirmed the occurrence of small-number biases (SNBs) in 488 healthy subjects. A subset of these experiments suggested a spatial nature of this bias in the sense of a "leftward" shift along the number line. First, individual SNBs were correlated with leftward deviations in a number line bisection task (but unrelated to the bisection of physical lines). Second, in 20 men, the magnitude of SNBs significantly correlated with leftward attentional biases in the judgment of chimeric faces. Finally, cognitive activation of the right hemisphere enhanced SNBs in 20 different men, while left hemisphere activation reduced them. Together, these findings provide support for a spatial component in random number generation. Specifically, they allow an interpretation of SNBs in terms of "pseudoneglect in number space." We recommend the use of random digit generation for future explorations of spatial-attentional asymmetries in numerical processing and discuss methodological issues relevant to prospective designs.

  5. Fast phase randomization via two-folds

    PubMed Central

    Jeffrey, M. R.

    2016-01-01

    A two-fold is a singular point on the discontinuity surface of a piecewise-smooth vector field, at which the vector field is tangent to the discontinuity surface on both sides. If an orbit passes through an invisible two-fold (also known as a Teixeira singularity) before settling to regular periodic motion, then the phase of that motion cannot be determined from initial conditions, and, in the presence of small noise, the asymptotic phase of a large number of sample solutions is highly random. In this paper, we show how the probability distribution of the asymptotic phase depends on the global nonlinear dynamics. We also show how the phase of a smooth oscillator can be randomized by applying a simple discontinuous control law that generates an invisible two-fold. We propose that such a control law can be used to desynchronize a collection of oscillators, and that this manner of phase randomization is fast compared with existing methods (which use fixed points as phase singularities), because there is no slowing of the dynamics near a two-fold. PMID:27118901

  6. Control theory for random systems

    NASA Technical Reports Server (NTRS)

    Bryson, A. E., Jr.

    1972-01-01

    A survey is presented of the current knowledge available for designing and predicting the effectiveness of controllers for dynamic systems which can be modeled by ordinary differential equations. A short discussion of feedback control is followed by a description of deterministic controller design and the concept of system state. The need for more realistic disturbance models led to the use of stochastic process concepts, in particular the Gauss-Markov process. A compensator controlled system, with random forcing functions, random errors in the measurements, and random initial conditions, is treated as constituting a Gauss-Markov random process; hence the mean-square behavior of the controlled system is readily predicted. As an example, a compensator is designed for a helicopter to maintain it in hover in a gusty wind over a point on the ground.

  7. Quantifying randomness in real networks

    NASA Astrophysics Data System (ADS)

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.

  8. Quantifying randomness in real networks.

    PubMed

    Orsini, Chiara; Dankulov, Marija M; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-20

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.

  9. Quantifying randomness in real networks

    PubMed Central

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-01-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs. PMID:26482121
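
    The first level of the dk-series, fixing only the degree distribution, corresponds to degree-preserving rewiring and can be reproduced with standard tools. A sketch using networkx double edge swaps follows; the higher dk levels, which also fix degree correlations and clustering, require the authors' released software or similar:

    import networkx as nx

    G = nx.karate_club_graph()
    R = G.copy()
    nx.double_edge_swap(R, nswap=10 * R.number_of_edges(),
                        max_tries=100_000, seed=11)

    # degrees are preserved exactly; higher-order structure is randomized
    assert sorted(d for _, d in G.degree()) == sorted(d for _, d in R.degree())
    print("clustering, real vs 1k-random:",
          round(nx.average_clustering(G), 3), round(nx.average_clustering(R), 3))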

  10. Diffraction by random Ronchi gratings.

    PubMed

    Torcal-Milla, Francisco Jose; Sanchez-Brea, Luis Miguel

    2016-08-01

    In this work, we obtain analytical expressions for the near- and far-field diffraction of random Ronchi diffraction gratings where the slits of the grating are randomly displaced around their periodic positions. We theoretically show that the effect of randomness in the position of the slits of the grating produces a decrease of the contrast and even disappearance of the self-images for high randomness levels at the near field. On the other hand, it cancels high-order harmonics in the far field, resulting in only a few central diffraction orders. Numerical simulations by means of the Rayleigh-Sommerfeld diffraction formula are performed in order to corroborate the analytical results. These results are of interest for industrial and technological applications where manufacture errors need to be considered. PMID:27505363

  11. Cluster randomization and political philosophy.

    PubMed

    Chwang, Eric

    2012-11-01

    In this paper, I will argue that, while the ethical issues raised by cluster randomization can be challenging, they are not new. My thesis divides neatly into two parts. In the first, easier part I argue that many of the ethical challenges posed by cluster randomized human subjects research are clearly present in other types of human subjects research, and so are not novel. In the second, more difficult part I discuss the thorniest ethical challenge for cluster randomized research--cases where consent is genuinely impractical to obtain. I argue that once again these cases require no new analytic insight; instead, we should look to political philosophy for guidance. In other words, the most serious ethical problem that arises in cluster randomized research also arises in political philosophy.

  12. Quantum entanglement from random measurements

    NASA Astrophysics Data System (ADS)

    Tran, Minh Cong; Dakić, Borivoje; Arnault, François; Laskowski, Wiesław; Paterek, Tomasz

    2015-11-01

    We show that the expectation value of squared correlations measured along random local directions is an identifier of quantum entanglement in pure states, which can be directly experimentally assessed if two copies of the state are available. Entanglement can therefore be detected by parties who do not share a common reference frame and whose local reference frames, such as polarizers or Stern-Gerlach magnets, remain unknown. Furthermore, we also show that in every experimental run, access to only one qubit from the macroscopic reference is sufficient to identify entanglement, violate a Bell inequality, and, in fact, observe all phenomena observable with macroscopic references. Finally, we provide a state-independent entanglement witness solely in terms of random correlations and emphasize how data gathered for a single random measurement setting per party reliably detects entanglement. This is only possible due to utilized randomness and should find practical applications in experimental confirmation of multiphoton entanglement or space experiments.

  14. 40 CFR 205.57-2 - Test vehicle sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sample selection. (a) Vehicles comprising the batch sample which are required to be tested pursuant to a... request from a batch of vehicles of the category or configuration specified in the test request. If the test request specifies that the vehicles comprising the batch sample must be selected randomly,...

  15. Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2007-01-01

    Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…
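
    The simulation idea is a few lines in practice. A minimal sketch with a skewed (exponential) parent population, showing the sample mean's spread shrinking as 1/sqrt(n), exactly the behavior the tool illustrates:

    import numpy as np

    rng = np.random.default_rng(12)

    parent = rng.exponential(1.0, size=1_000_000)   # skewed parent population

    for n in (2, 10, 50):
        means = rng.choice(parent, size=(10_000, n)).mean(axis=1)
        print(f"n={n:3d}: mean {means.mean():.3f}, std {means.std():.3f} "
              f"(theory: {1 / np.sqrt(n):.3f})")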

  16. On the Utilization of Sample Weights in Latent Variable Models.

    ERIC Educational Resources Information Center

    Kaplan, David; Ferguson, Aaron J.

    1999-01-01

    Examines the use of sample weights in latent variable models in the case where a simple random sample is drawn from a population containing a mixture of strata through a bootstrap simulation study. Results show that ignoring weights can lead to serious bias in latent variable model parameters and reveal the advantages of using sample weights. (SLD)

  17. P-Type Factor Analyses of Individuals' Thought Sampling Data.

    ERIC Educational Resources Information Center

    Hurlburt, Russell T.; Melancon, Susan M.

    Recently, interest in research measuring stream of consciousness or thought has increased. A study was conducted, based on a previous study by Hurlburt, Lech, and Saltman, in which subjects were randomly interrupted to rate their thoughts and moods on a Likert-type scale. Thought samples were collected from 27 subjects who carried random-tone…

  18. Quasi-Random Sequence Generators.

    1994-03-01

    Version 00 LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L=2**30 points in the N-dimensional unit cube: I**N=[0,1]. The sequences are used as nodes for multidimensional integration, as searching points in global optimization, as trial points in multicriteria decision making, as quasi-random points for quasi Monte Carlo algorithms.
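
    LP-tau sequences are Sobol-type quasi-random sequences; the sketch below uses SciPy's Sobol generator as a modern stand-in for LPTAU, applied to an illustrative test integrand over the unit cube I^N (exact integral 1):

    import numpy as np
    from scipy.stats import qmc

    sobol = qmc.Sobol(d=3, seed=13)
    pts = sobol.random_base2(m=12)        # 2**12 quasi-random points in [0,1]^3

    # each factor integrates to 1 over [0,1], so the true value is 1
    f = np.prod(np.sin(np.pi * pts) * np.pi / 2, axis=1)
    print("quasi-Monte Carlo estimate:", f.mean())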

  19. Staggered chiral random matrix theory

    SciTech Connect

    Osborn, James C.

    2011-02-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  20. On Pfaffian Random Point Fields

    NASA Astrophysics Data System (ADS)

    Kargin, V.

    2014-02-01

    We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.

  1. Randomness in the bouncing ball dynamics

    NASA Astrophysics Data System (ADS)

    Giusepponi, S.; Marchesoni, F.; Borromeo, M.

    2005-06-01

    The dynamics of a vibrated bouncing ball is studied numerically in the reduced impact representation, where the velocity of the bouncing ball is sampled at each impact with the plate (asynchronous sampling). Its random nature is thus fully revealed: (i) the chattering mechanism, through which the ball gets locked on the plate, is accomplished within a limited interval of the plate oscillation phase, and (ii) is well described in impact representation by a special structure of looped, nested bands and (iii) chattering trajectories and strange attractors may coexist for appropriate ranges of the parameter values. Structure and substructure of the chattering bands are well explained in terms of a simple impact map rule. These results are of potential application to the analysis of high-temperature vibrated granular gases.

  2. Randomness and degrees of irregularity.

    PubMed Central

    Pincus, S; Singer, B H

    1996-01-01

    The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness--e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity, ApEn (approximate entropy), which defines maximal randomness for sequences of arbitrary length and indicates applicability to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory, and the definition of normality in number theory, reduce to limit theorems without rates of convergence, from which we utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates. PMID:11607637
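
    ApEn is short enough to state in full. A sketch following the standard Pincus construction, with template length m and tolerance r set to common illustrative values:

    import numpy as np

    def apen(x, m=2, r=0.2):
        """Approximate entropy: how often templates of length m that match
        within tolerance r (Chebyshev distance) still match at length m+1."""
        x = np.asarray(x, dtype=float)

        def phi(m):
            n = len(x) - m + 1
            templates = np.array([x[i:i + m] for i in range(n)])
            d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
            c = (d <= r).mean(axis=1)      # match fraction for each template
            return np.log(c).mean()

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(14)
    print("random:  ", apen(rng.random(300)))            # near-maximal irregularity
    print("periodic:", apen(np.tile([0.1, 0.9], 150)))   # highly regular, near 0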

  3. Experimental evidence of quantum randomness incomputability

    SciTech Connect

    Calude, Cristian S.; Dinneen, Michael J.; Dumitrescu, Monica; Svozil, Karl

    2010-08-15

    In contrast with software-generated randomness (called pseudo-randomness), quantum randomness can be proven incomputable; that is, it is not exactly reproducible by any algorithm. We provide experimental evidence of incomputability--an asymptotic property--of quantum randomness by performing finite tests of randomness inspired by algorithmic information theory.

  4. Instrument sequentially samples ac signals from several accelerometers

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.

    1967-01-01

    Scanner circuit sequentially samples the ac signals from accelerometers used in conducting noise vibration tests, and provides a time-averaged output signal. The scanner is used in conjunction with other devices for random noise vibration tests.

  5. Phase Transitions on Random Lattices: How Random is Topological Disorder?

    NASA Astrophysics Data System (ADS)

    Barghathi, Hatem; Vojta, Thomas

    2015-03-01

    We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω = (d - 1) / (2 d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d + 1) ν > 2 rather than the usual Harris criterion dν > 2 , making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d > 1 . These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. This work was supported by the NSF under Grant Nos. DMR-1205803 and PHYS-1066293. We acknowledge the hospitality of the Aspen Center for Physics.

  6. Representative Sampling of Maritally Violent and Nonviolent Couples: A Feasibility Study

    ERIC Educational Resources Information Center

    Farris, Coreen; Holtzworth-Munroe, Amy

    2007-01-01

    Despite the methodological advantages of representative sampling, few researchers in the field of marital violence have employed random samples for laboratory assessments of couples. The current study tests the feasibility and sampling success of three recruitment methods: (a) random digit dialing, (b) directory-assisted recruitment, and (c) a…

  7. Maximum of the Characteristic Polynomial of Random Unitary Matrices

    NASA Astrophysics Data System (ADS)

    Arguin, Louis-Pierre; Belius, David; Bourgade, Paul

    2016-09-01

    It was recently conjectured by Fyodorov, Hiary and Keating that the maximum of the characteristic polynomial on the unit circle of an N×N random unitary matrix sampled from the Haar measure grows like C N/(log N)^{3/4} for some random variable C. In this paper, we verify the leading order of this conjecture, that is, we prove that with high probability the maximum lies in the range [N^{1-ɛ}, N^{1+ɛ}], for arbitrarily small ɛ. The method is based on identifying an approximate branching random walk in the Fourier decomposition of the characteristic polynomial, and uses techniques developed to describe the extremes of branching random walks and of other log-correlated random fields. A key technical input is the asymptotic analysis of Toeplitz determinants with dimension-dependent symbols. The original argument for these asymptotics followed the general idea that the statistical mechanics of 1/f-noise random energy models is governed by a freezing transition. We also prove the conjectured freezing of the free energy for random unitary matrices.
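
    The conjectured scale can be probed numerically. Below is a hedged sketch (grid resolution and matrix size are arbitrary choices) that samples a Haar-random unitary with scipy, evaluates |det(zI - U)| on the unit circle through the eigenvalues, and compares the maximum to N/(log N)^{3/4}.

      import numpy as np
      from scipy.stats import unitary_group

      def max_char_poly_on_circle(N, n_grid=4096, seed=None):
          U = unitary_group.rvs(N, random_state=seed)   # Haar-distributed unitary
          eig = np.linalg.eigvals(U)
          z = np.exp(1j * np.linspace(0, 2 * np.pi, n_grid, endpoint=False))
          # log|p(z)| = sum_k log|z - lambda_k|, computed in logs for stability
          logabs = np.sum(np.log(np.abs(z[:, None] - eig[None, :])), axis=1)
          return np.exp(logabs.max())

      N = 200
      print(max_char_poly_on_circle(N, seed=1), N / np.log(N) ** 0.75)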

  8. Sampling Motif-Constrained Ensembles of Networks

    NASA Astrophysics Data System (ADS)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework for generating such constrained ensembles, but they often fail in practice, either due to model inconsistency or due to the impossibility of sampling networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.
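
    The Letter's multicanonical network sampler is not reproduced here; as a toy stand-in, the sketch below applies the same Wang-Landau flat-histogram idea to the density of states of a small periodic Ising chain (all parameters illustrative).

      import numpy as np

      rng = np.random.default_rng(0)
      L = 20
      spins = rng.choice([-1, 1], size=L)

      def energy(s):
          return -int(np.sum(s * np.roll(s, 1)))  # periodic 1D Ising energy

      log_g = np.zeros(L + 1)            # log density of states, bin = (E + L) // 2
      hist = np.zeros(L + 1, dtype=int)
      visited = np.zeros(L + 1, dtype=bool)
      ln_f = 1.0
      idx = lambda E: (E + L) // 2

      E = energy(spins)
      while ln_f > 1e-4:
          for _ in range(20000):
              i = rng.integers(L)
              dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % L])
              # Wang-Landau acceptance favors rarely visited energies
              if np.log(rng.random()) < log_g[idx(E)] - log_g[idx(E + dE)]:
                  spins[i] *= -1
                  E += dE
              log_g[idx(E)] += ln_f
              hist[idx(E)] += 1
              visited[idx(E)] = True
          h = hist[visited]
          if h.min() > 0.8 * h.mean():   # flat-histogram check
              hist[:] = 0
              ln_f /= 2                  # refine the modification factor
      print(log_g[visited] - log_g[visited].min())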

  9. R. A. Fisher and his advocacy of randomization.

    PubMed

    Hall, Nancy S

    2007-01-01

    The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods. PMID:18175604

  10. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.

  11. Texture synthesis and transfer from multiple samples

    NASA Astrophysics Data System (ADS)

    Qi, Yue; Zhao, Qinping

    2003-09-01

    Texture mapping plays a very important role in computer graphics. Texture synthesis is one of the main methods of obtaining textures: it makes use of sample textures to generate new textures. Texture transfer is based on texture synthesis: it renders objects with textures taken from different objects. Currently, most texture synthesis and transfer methods use a single sample texture. A method for texture synthesis and transfer from multiple samples is presented. For texture synthesis, the L-shaped neighborhood searching approach is used. Users specify the proportion of each sample and the number of seed points, and these seed points are scattered randomly according to their samples in the horizontal and vertical directions synchronously to synthesize textures. The resulting textures are of good quality. For texture transfer, the luminance of the target image and the sample textures are analyzed. This procedure proceeds from coarse to fine and can produce a visually pleasing result.

  12. The single-channel regime of transport through random media

    PubMed Central

    Peña, A.; Girschik, A.; Libisch, F.; Rotter, S.; Chabanov, A. A.

    2014-01-01

    The propagation of light through samples with random inhomogeneities can be described by way of transmission eigenchannels, which connect incoming and outgoing external propagating modes. Although the detailed structure of a disordered sample can generally not be fully specified, these transmission eigenchannels can nonetheless be successfully controlled and used for focusing and imaging light through random media. Here we demonstrate that in deeply localized quasi-1D systems, the single dominant transmission eigenchannel is formed by an individual Anderson-localized mode or by a ‘necklace state’. In this single-channel regime, the disordered sample can be treated as an effective 1D system with a renormalized localization length, coupled through all the external modes to its surroundings. Using statistical criteria of the single-channel regime and pulsed excitations of the disordered samples allows us to identify long-lived localized modes and short-lived necklace states at long and short time delays, respectively. PMID:24663028

  13. Virial expansion for almost diagonal random matrices

    NASA Astrophysics Data System (ADS)

    Yevtushenko, Oleg; Kravtsov, Vladimir E.

    2003-08-01

    Energy level statistics of Hermitian random matrices Ĥ with Gaussian independent random entries H_{i≥j} is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_{ii}|²⟩ ~ 1 and ⟨|H_{i≠j}|²⟩ …

  14. Full randomness from arbitrarily deterministic events

    NASA Astrophysics Data System (ADS)

    Gallego, Rodrigo; Masanes, Lluis; de la Torre, Gonzalo; Dhara, Chirag; Aolita, Leandro; Acín, Antonio

    2013-10-01

    Do completely unpredictable events exist? Classical physics excludes fundamental randomness. Although quantum theory makes probabilistic predictions, this does not imply that nature is random, as randomness should be certified without relying on the complete structure of the theory being used. Bell tests approach the question from this perspective. However, they require prior perfect randomness, falling into circular reasoning. A Bell test that generates perfect random bits from bits possessing high--but less than perfect--randomness has recently been obtained. Yet the main question remained open: does any initial randomness suffice to certify perfect randomness? Here we show that this is indeed the case. We provide a Bell test that uses arbitrarily imperfect random bits to produce bits that are, under the non-signalling principle assumption, perfectly random. This provides the first protocol attaining full randomness amplification. Our results have strong implications for the debate over whether there exist events that are fully random.

  15. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrence relations is used to obtain the convergent probability of the random walk with different initial positions.
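
    The note's exact card-based walk is not described in this truncated record; as a generic illustration of the recurrence-relation method, the sketch below computes, for a walk on {0, ..., n} absorbed at both ends, the probability of reaching 0 before n from each start, by solving p_i = p*p_{i-1} + (1-p)*p_{i+1} as a linear system.

      import numpy as np

      def absorption_prob(n, p=0.5):
          A = np.zeros((n + 1, n + 1))
          b = np.zeros(n + 1)
          A[0, 0] = 1.0; b[0] = 1.0    # boundary condition: p_0 = 1
          A[n, n] = 1.0                # boundary condition: p_n = 0
          for i in range(1, n):
              A[i, i - 1], A[i, i], A[i, i + 1] = -p, 1.0, -(1 - p)
          return np.linalg.solve(A, b)

      print(absorption_prob(10))       # symmetric case: p_i = 1 - i/10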

  16. Wave propagation through a random medium - The random slab problem

    NASA Technical Reports Server (NTRS)

    Acquista, C.

    1978-01-01

    The first-order smoothing approximation yields integral equations for the mean and the two-point correlation function of a wave in a random medium. A method is presented for the approximate solution of these equations that combines features of the eikonal approximation and of the Born expansion. This method is applied to the problem of reflection and transmission of a plane wave by a slab of a random medium. Both the mean wave and the covariance are calculated to determine the reflected and transmitted amplitudes and intensities.

  17. EDITORIAL: Nano and random lasers

    NASA Astrophysics Data System (ADS)

    Wiersma, Diederik S.; Noginov, Mikhail A.

    2010-02-01

    The field of extreme miniature sources of stimulated emission represented by random lasers and nanolasers has gone through an enormous development in recent years. Random lasers are disordered optical structures in which light waves are both multiply scattered and amplified. Multiple scattering is a process that we all know very well from daily experience. Many familiar materials are actually disordered dielectrics and owe their optical appearance to multiple light scattering. Examples are white marble, white painted walls, paper, white flowers, etc. Light waves inside such materials perform random walks, that is they are scattered several times in random directions before they leave the material, and this gives it an opaque white appearance. This multiple scattering process does not destroy the coherence of the light. It just creates a very complex interference pattern (also known as speckle). Random lasers can be made of basically any disordered dielectric material by adding an optical gain mechanism to the structure. In practice this can be achieved with, for instance, laser dye that is dissolved in the material and optically excited by a pump laser. Alternative routes to incorporate gain are achieved using rare-earth or transition metal doped solid-state laser materials or direct band gap semiconductors. The latter can potentially be pumped electrically. After excitation, the material is capable of scattering light and amplifying it, and these two ingredients form the basis for a random laser. Random laser emission can be highly coherent, even in the absence of an optical cavity. The reason is that random structures can sustain optical modes that are spectrally narrow. This provides a spectral selection mechanism that, together with gain saturation, leads to coherent emission. A random laser can have a large number of (randomly distributed) modes that are usually strongly coupled. This means that many modes compete for the gain that is available in a random

  18. Cover times of random searches

    NASA Astrophysics Data System (ADS)

    Chupeau, Marie; Bénichou, Olivier; Voituriez, Raphaël

    2015-10-01

    How long must one undertake a random search to visit all sites of a given domain? This time, known as the cover time, is a key observable to quantify the efficiency of exhaustive searches, which require a complete exploration of an area and not only the discovery of a single target. Examples range from immune-system cells chasing pathogens to animals harvesting resources, from robotic exploration for cleaning or demining to the task of improving search algorithms. Despite its broad relevance, the cover time has remained elusive and so far explicit results have been scarce and mostly limited to regular random walks. Here we determine the full distribution of the cover time for a broad range of random search processes, including Lévy strategies, intermittent strategies, persistent random walks and random walks on complex networks, and reveal its universal features. We show that for all these examples the mean cover time can be minimized, and that the corresponding optimal strategies also minimize the mean search time for a single target, unambiguously pointing towards their robustness.
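
    As a minimal numerical companion (an assumption, not the paper's setup), the sketch below estimates the cover time of a nearest-neighbour random walk on a cycle of n sites, whose mean cover time is known in closed form, n(n-1)/2.

      import numpy as np

      def cover_time_cycle(n, rng):
          pos, seen, t = 0, {0}, 0
          while len(seen) < n:            # walk until every site has been visited
              pos = (pos + rng.choice((-1, 1))) % n
              seen.add(pos)
              t += 1
          return t

      rng = np.random.default_rng(0)
      samples = [cover_time_cycle(50, rng) for _ in range(200)]
      print(np.mean(samples), 50 * 49 / 2)   # simulated mean vs exact mean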

  19. Random walk through fractal environments.

    PubMed

    Isliker, H; Vlahos, L

    2003-02-01

    We analyze random walk through fractal environments, embedded in three-dimensional, permeable space. Particles travel freely and are scattered off into random directions when they hit the fractal. The statistical distribution of the flight increments (i.e., of the displacements between two consecutive hittings) is analytically derived from a common, practical definition of fractal dimension, and it turns out to approximate quite well a power law in the case where the dimension D(F) of the fractal is less than 2; there is, though, always a finite rate of unaffected escape. Random walks through fractal sets with D(F)< or =2 can thus be considered as defective Levy walks. The distribution of jump increments for D(F)>2 decays exponentially. The diffusive behavior of the random walk is analyzed in the frame of continuous time random walk, which we generalize to include the case of defective distributions of walk increments. It is shown that the particles undergo anomalous, enhanced diffusion for D(F)<2 unless the diffusion is dominated by the finite escape rate. Diffusion for D(F)>2 is normal for large times, though enhanced for small and intermediate times. In particular, it follows that fractals generated by a particular class of self-organized criticality models give rise to enhanced diffusion. The analytical results are illustrated by Monte Carlo simulations.

  1. Random root movements in weightlessness

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Karlsson, C.; Iversen, T. H.; Chapman, D. K.

    1996-01-01

    The dynamics of root growth was studied in weightlessness. In the absence of the gravitropic reference direction during weightlessness, root movements could be controlled by spontaneous growth processes, without any corrective growth induced by the gravitropic system. If truly random in nature, the bending behavior should follow so-called 'random walk' mathematics during weightlessness. Predictions from this hypothesis were critically tested. In a Spacelab ESA experiment, denoted RANDOM and carried out during the IML-2 Shuttle flight in July 1994, the growth of garden cress (Lepidium sativum) roots was followed by time-lapse photography at 1-h intervals. The growth pattern was recorded for about 20 h. Root growth was significantly smaller in weightlessness as compared to gravity (control) conditions. It was found that the roots performed spontaneous movements in weightlessness. The average direction of deviation of the plants consistently stayed equal to zero, despite these spontaneous movements. The average squared deviation increased linearly with time as predicted theoretically (but only for 8-10 h). Autocorrelation calculations showed that bendings of the roots, as determined from the 1-h photographs, were uncorrelated after about a 2-h interval. It is concluded that random processes play an important role in root growth. Predictions from a random walk hypothesis as to the growth dynamics could explain parts of the growth patterns recorded. This test of the hypothesis required microgravity conditions as provided for in a space experiment.

  2. On co-design of filter and fault estimator against randomly occurring nonlinearities and randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Liu, Steven; Ji, Donghai; Li, Shanqiang

    2016-07-01

    In this paper, the co-design problem of filter and fault estimator is studied for a class of time-varying non-linear stochastic systems subject to randomly occurring nonlinearities and randomly occurring deception attacks. Two mutually independent random variables obeying the Bernoulli distribution are employed to characterize the phenomena of the randomly occurring nonlinearities and randomly occurring deception attacks, respectively. By using the augmentation approach, the co-design problem of the robust filter and fault estimator is converted into the recursive filter design problem. A new compensation scheme is proposed such that, for both randomly occurring nonlinearities and randomly occurring deception attacks, an upper bound of the filtering error covariance is obtained and such an upper bound is minimized by properly designing the filter gain at each sampling instant. Moreover, the explicit form of the filter gain is given based on the solution to two Riccati-like difference equations. It is shown that the proposed co-design algorithm is of a recursive form that is suitable for online computation. Finally, a simulation example is given to illustrate the usefulness of the developed filtering approach.

  3. What Does a Random Line Look Like: An Experimental Study

    ERIC Educational Resources Information Center

    Turner, Nigel E.; Liu, Eleanor; Toneatto, Tony

    2011-01-01

    The study examined the perception of random lines by people with gambling problems compared to people without gambling problems. The sample consisted of 67 probable pathological gamblers and 46 people without gambling problems. Participants completed a number of questionnaires about their gambling and were then presented with a series of random…

  4. A Randomized Violence Prevention Trial with Comparison: Responses by Gender

    ERIC Educational Resources Information Center

    Griffin, James P., Jr.; Chen, Dungtsa; Eubanks, Adriane; Brantley, Katrina M.; Willis, Leigh A.

    2007-01-01

    Using random assignment of students to two intervention groups and a comparison school sample, the researchers evaluated a three-group school-based violence prevention program. The three groups were (1) a whole-school intervention, (2) whole-school, cognitive-behavioral and cultural enrichment training, and (3) no violence prevention. The…

  5. Visual Categorization with Random Projection.

    PubMed

    Arriaga, Rosa I; Rutter, David; Cakmak, Maya; Vempala, Santosh S

    2015-10-01

    Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how this relates to the robustness of categories. We find that (1) drastic reduction in stimulus complexity via random projection does not degrade performance in categorization tasks by either humans or simple neural networks, (2) human accuracy and neural network accuracy are remarkably correlated, even at the level of individual stimuli, and (3) the performance of both is strongly indicated by a natural notion of category robustness.
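
    The core operation of the study, random projection, is easy to sketch; the dimensions below are illustrative, and a Gaussian projection matrix is one standard choice. Pairwise distances, and hence category structure, are approximately preserved (the Johnson-Lindenstrauss property).

      import numpy as np

      rng = np.random.default_rng(0)
      n, d, k = 100, 1000, 50                    # stimuli, original dim, projected dim
      X = rng.normal(size=(n, d))                # high-dimensional stimuli

      R = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection matrix
      Y = X @ R                                  # drastically reduced representation

      i, j = 3, 17                               # distances are roughly preserved
      print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))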

  6. Fast Randomized STDMA Link Scheduling

    NASA Astrophysics Data System (ADS)

    Gomez, Sergio; Gras, Oriol; Friderikos, Vasilis

    In this paper a fast randomized parallel link-swap-based packing (RSP) algorithm for timeslot allocation in a spatial time division multiple access (STDMA) wireless mesh network is presented. The proposed randomized algorithm extends several greedy scheduling algorithms that utilize the physical interference model by applying a local search that leads to a substantial improvement in spatial timeslot reuse. Numerical simulations reveal that, compared to previously proposed scheduling schemes, the randomized algorithm can achieve a performance gain of up to 11%. A significant benefit of the proposed scheme is that the computations can be parallelized and therefore can efficiently utilize commoditized and emerging multi-core and/or multi-CPU processors.

  7. Selecting a Sample

    ERIC Educational Resources Information Center

    Ritter, Lois A., Ed.; Sue, Valerie M., Ed.

    2007-01-01

    This chapter provides an overview of sampling methods that are appropriate for conducting online surveys. The authors review some of the basic concepts relevant to online survey sampling, present some probability and nonprobability techniques for selecting a sample, and briefly discuss sample size determination and nonresponse bias. Although some…

  8. Fluid sampling tool

    DOEpatents

    Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.

    2000-01-01

    A fluid-sampling tool for obtaining a fluid sample from a container. When used in combination with a rotatable drill, the tool bores a hole into a container wall, withdraws a fluid sample from the container, and seals the borehole. The tool collects a fluid sample without exposing the operator or the environment to the fluid or to wall shavings from the container.

  9. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    SciTech Connect

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ_1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
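
    The paper's coherence-optimal sampler is not reproduced here; the sketch below shows only the generic ℓ_1-recovery step for a sparse 1-D Hermite PC expansion, sampling from the natural Gaussian measure and using sklearn's Lasso as a stand-in solver (sizes, support, and regularization weight are all illustrative).

      import math
      import numpy as np
      from numpy.polynomial.hermite_e import hermeval
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      P, n = 30, 25                              # basis size > number of samples
      c_true = np.zeros(P)
      c_true[[1, 3, 7]] = [1.0, -0.5, 0.25]      # sparse PC coefficients

      xi = rng.standard_normal(n)                # natural sampling for Hermite basis
      norms = np.sqrt([math.factorial(k) for k in range(P)])
      Phi = np.stack([hermeval(xi, np.eye(P)[k]) / norms[k] for k in range(P)], axis=1)
      y = Phi @ c_true                           # noiseless model evaluations

      c_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(Phi, y).coef_
      print(np.round(c_hat[:9], 3))              # typically recovers the sparse support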

  10. A Randomized Experiment Comparing Random and Cutoff-Based Assignment

    ERIC Educational Resources Information Center

    Shadish, William R.; Galindo, Rodolfo; Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.

    2011-01-01

    In this article, we review past studies comparing randomized experiments to regression discontinuity designs, mostly finding similar results, but with significant exceptions. The latter might be due to potential confounds of study characteristics with assignment method or with failure to estimate the same parameter over methods. In this study, we…

  11. Relatively Random: Context Effects on Perceived Randomness and Predicted Outcomes

    ERIC Educational Resources Information Center

    Matthews, William J.

    2013-01-01

    This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to…

  12. Relatively random: context effects on perceived randomness and predicted outcomes.

    PubMed

    Matthews, William J

    2013-09-01

    This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to result from human action. However, this effect was highly context-dependent: A moderate alternation rate was judged more likely to indicate a random physical process when encountered among sequences with lower alternation rates than when embedded among sequences with higher alternation rates. Experiment 2 found the same effect for predictions of the next outcome following a streak: A streak of 3 at the end of the sequence was judged less likely to continue by participants who had encountered shorter terminal streaks in previous trials than by those who had encountered longer ones. These contrast effects (a) help to explain variability in the types of sequences that are judged to be random and that elicit the gambler's fallacy, and urge caution about attempts to establish universal parameterizations of these effects; (b) are congruent with theories of sequence judgment that emphasize the importance of people's actual experiences with sequences of different kinds; (c) provide a link between models of sequence judgment and broader accounts of psychophysical/economic judgment; and (d) may offer new insight into individual differences in randomness judgments and sequence predictions.
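
    The alternation rate that drives these judgments is a one-line statistic; a small sketch for binary outcome sequences:

      import numpy as np

      def alternation_rate(seq):
          s = np.asarray(seq)
          return np.mean(s[1:] != s[:-1])   # fraction of adjacent outcomes that differ

      rng = np.random.default_rng(0)
      print(alternation_rate(rng.integers(0, 2, 1000)))  # fair random process: ~0.5
      print(alternation_rate([0, 1] * 500))              # maximally alternating: 1.0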

  13. Random photonic crystal optical memory

    NASA Astrophysics Data System (ADS)

    Wirth Lima, A., Jr.; Sombra, A. S. B.

    2012-10-01

    Currently, optical cross-connects working in wavelength division multiplexing systems are based on optical fiber delay-line buffering. We designed and analyzed a novel photonic crystal optical memory, which replaces the fiber delay lines of the current optical cross-connect buffer. Optical buffering systems based on random photonic crystal optical memory behave similarly to electronic buffering systems based on electronic RAM. In this paper, we show that OXCs with optical buffering based on random photonic crystal optical memories provide better performance than current optical cross-connects.

  14. Truncations of random orthogonal matrices.

    PubMed

    Khoruzhenko, Boris A; Sommers, Hans-Jürgen; Życzkowski, Karol

    2010-10-01

    Statistical properties of nonsymmetric real random matrices of size M, obtained as truncations of random orthogonal N×N matrices, are investigated. We derive an exact formula for the density of eigenvalues which consists of two components: a finite fraction of eigenvalues are real, while the remaining part of the spectrum is located inside the unit disk symmetrically with respect to the real axis. In the case of strong nonorthogonality, M/N=const, behavior typical of the real Ginibre ensemble is found. In the case M=N-L with fixed L, a universal distribution of resonance widths is recovered.

  15. Truncations of random orthogonal matrices

    NASA Astrophysics Data System (ADS)

    Khoruzhenko, Boris A.; Sommers, Hans-Jürgen; Życzkowski, Karol

    2010-10-01

    Statistical properties of nonsymmetric real random matrices of size M, obtained as truncations of random orthogonal N×N matrices, are investigated. We derive an exact formula for the density of eigenvalues which consists of two components: a finite fraction of eigenvalues are real, while the remaining part of the spectrum is located inside the unit disk symmetrically with respect to the real axis. In the case of strong nonorthogonality, M/N=const, behavior typical of the real Ginibre ensemble is found. In the case M=N-L with fixed L, a universal distribution of resonance widths is recovered.
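
    The ensemble is easy to sample numerically. A hedged sketch (sizes illustrative): draw a Haar-distributed orthogonal matrix with scipy, truncate, and inspect the eigenvalues, which split into a real fraction and complex-conjugate pairs inside the unit disk.

      import numpy as np
      from scipy.stats import ortho_group

      N, M = 60, 30
      O = ortho_group.rvs(N, random_state=0)   # Haar-distributed orthogonal matrix
      T = O[:M, :M]                            # M x M truncation
      eig = np.linalg.eigvals(T)

      print("fraction real:", np.mean(np.abs(eig.imag) < 1e-10))
      print("max |eig|:", np.abs(eig).max())   # spectrum lies inside the unit disk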

  16. Neutron transport in random media

    SciTech Connect

    Makai, M.

    1996-08-01

    The survey reviews the methods available in the literature which allow a discussion of corium recriticality after a severe accident and a characterization of the corium. It appears that to date no one has considered the eigenvalue problem, though for the source problem several approaches have been proposed. The mathematical formulation of a random medium may be approached in different ways. Based on the review of the literature, we can draw three basic conclusions. The problem of static, random perturbations has been solved. The static case is tractable by the Monte Carlo method. There is a specific time dependent case for which the average flux is given as a series expansion.

  17. Molecular random tilings as glasses

    PubMed Central

    Garrahan, Juan P.; Stannard, Andrew; Blunt, Matthew O.; Beton, Peter H.

    2009-01-01

    We have recently shown that p-terphenyl-3,5,3′,5′-tetracarboxylic acid adsorbed on graphite self-assembles into a two-dimensional rhombus random tiling. This tiling is close to ideal, displaying long-range correlations punctuated by sparse localized tiling defects. In this article we explore the analogy between dynamic arrest in this type of random tilings and that of structural glasses. We show that the structural relaxation of these systems is via the propagation–reaction of tiling defects, giving rise to dynamic heterogeneity. We study the scaling properties of the dynamics and discuss connections with kinetically constrained models of glasses. PMID:19720990

  18. Synchronizability of random rectangular graphs

    SciTech Connect

    Estrada, Ernesto Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded into hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
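
    A minimal numerical sketch of the setup (parameters illustrative; the sampled graph must be connected for the eigenratio to be meaningful): place nodes uniformly in an a-by-b rectangle, connect pairs within radius r, and compute the Laplacian eigenratio lambda_N/lambda_2, which grows with elongation.

      import numpy as np

      def laplacian_eigenratio(n, a, b, r, rng):
          pts = rng.random((n, 2)) * np.array([a, b])
          d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
          A = (d2 <= r * r).astype(float)
          np.fill_diagonal(A, 0.0)
          lap = np.diag(A.sum(axis=1)) - A
          ev = np.linalg.eigvalsh(lap)
          return ev[-1] / ev[1]          # requires a connected graph (ev[1] > 0)

      rng = np.random.default_rng(1)
      print(laplacian_eigenratio(300, 1.0, 1.0, 0.15, rng))    # unit square
      print(laplacian_eigenratio(300, 4.0, 0.25, 0.15, rng))   # elongated: larger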

  19. Digital adaptive sampling.

    NASA Technical Reports Server (NTRS)

    Breazeale, G. J.; Jones, L. E.

    1971-01-01

    Discussion of digital adaptive sampling, which is consistently better than fixed sampling in noise-free cases. Adaptive sampling is shown to be feasible and, it is suggested, merits further study. It should be noted that adaptive sampling is a class of variable-rate sampling in which the variability depends on system signals. Digital rather than analog laws should be studied, because cases can arise in which the analog signals are not even available. An extremely important problem is implementation.

  20. Sample design effects in landscape genetics

    USGS Publications Warehouse

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  1. Fast random bit generation with bandwidth-enhanced chaos in semiconductor lasers.

    PubMed

    Hirano, Kunihito; Yamazaki, Taiki; Morikatsu, Shinichiro; Okumura, Haruka; Aida, Hiroki; Uchida, Atsushi; Yoshimori, Shigeru; Yoshimura, Kazuyuki; Harayama, Takahisa; Davis, Peter

    2010-03-15

    We experimentally demonstrate random bit generation using multi-bit samples of bandwidth-enhanced chaos in semiconductor lasers. Chaotic fluctuation of laser output is generated in a semiconductor laser with optical feedback and the chaotic output is injected into a second semiconductor laser to obtain a chaotic intensity signal with bandwidth enhanced up to 16 GHz. The chaotic signal is converted to an 8-bit digital signal by sampling with a digital oscilloscope at 12.5 Giga samples per second (GS/s). Random bits are generated by bitwise exclusive-OR operation on corresponding bits in samples of the chaotic signal and its time-delayed signal. Statistical tests verify the randomness of bit sequences obtained using 1 to 6 bits per sample, corresponding to fast random bit generation rates from 12.5 to 75 Gigabit per second (Gb/s) ( = 6 bit x 12.5 GS/s).
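
    The post-processing step can be sketched in a few lines; here a logistic map stands in for the sampled chaotic laser intensity (an assumption purely for illustration), quantized to 8 bits and XORed with a delayed copy, keeping 6 bits per sample as in the record.

      import numpy as np

      x = np.empty(20000)
      x[0] = 0.3
      for i in range(1, len(x)):
          x[i] = 3.99 * x[i - 1] * (1 - x[i - 1])   # chaotic stand-in signal

      samples = np.floor(x * 256).astype(np.uint8)  # 8-bit "oscilloscope" quantization
      delay = 100
      xored = samples[delay:] ^ samples[:-delay]    # XOR with time-delayed samples
      bits6 = xored & 0x3F                          # keep 6 bits per sample

      ones = np.mean([(b >> k) & 1 for b in bits6[:2000] for k in range(6)])
      print(ones)                                   # near 0.5 for a usable bit stream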

  2. Self-averaging and ergodicity of subdiffusion in quenched random media.

    PubMed

    Dentz, Marco; Russian, Anna; Gouze, Philippe

    2016-01-01

    We study the self-averaging properties and ergodicity of the mean square displacement m(t) of particles diffusing in d dimensional quenched random environments which give rise to subdiffusive average motion. These properties are investigated in terms of the sample to sample fluctuations as measured by the variance of m(t). We find that m(t) is not self-averaging for d<2 due to the inefficient disorder sampling by random motion in a single realization. For d≥2 in contrast, the efficient sampling of heterogeneity by the space random walk renders m(t) self-averaging and thus ergodic. This is remarkable because the average particle motion in d>2 obeys a CTRW, which by itself displays weak ergodicity breaking. This paradox is resolved by the observation that the CTRW as an average model does not reflect the disorder sampling by random motion in a single medium realization. PMID:26871007

  3. Heuristic-biased stochastic sampling

    SciTech Connect

    Bresina, J.L.

    1996-12-31

    This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, it can be easy to outperform greedy search.
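
    A generic sketch of the HBSS selection step (the bias function and toy task data are illustrative): candidates are ranked by the heuristic, then one is drawn with probability proportional to a bias applied to its rank, so greedy search (all weight on rank 1) and uniform random search are the two extremes.

      import random

      def hbss_choose(candidates, heuristic, bias=lambda rank: 1.0 / rank):
          ranked = sorted(candidates, key=heuristic)            # best-first ranking
          weights = [bias(r) for r in range(1, len(ranked) + 1)]
          return random.choices(ranked, weights=weights, k=1)[0]

      random.seed(0)
      tasks = [5, 2, 9, 1, 7]   # e.g., processing times; heuristic: shortest first
      print([hbss_choose(tasks, heuristic=lambda t: t) for _ in range(5)])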

  4. Garnet Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1995-01-01

    Random-access memory (RAM) devices of proposed type exploit magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout used. Provides nonvolatile storage and resists damage by ionizing radiation. Because of basic architecture and pinout requirements, most likely useful as small-capacity memory devices.

  5. Common Randomness Principles of Secrecy

    ERIC Educational Resources Information Center

    Tyagi, Himanshu

    2013-01-01

    This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…

  6. Randomized Item Response Theory Models

    ERIC Educational Resources Information Center

    Fox, Jean-Paul

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…
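
    The article's IRT extension is not reproduced here; as background, a sketch of Warner's classic randomized response design (an assumption about the RR variant): with probability p the respondent answers the sensitive question, otherwise its complement, and the prevalence pi is recovered from the observed yes-rate.

      import numpy as np

      def warner_estimate(yes_rate, p):
          # invert lambda = p*pi + (1 - p)*(1 - pi) for pi
          return (yes_rate + p - 1) / (2 * p - 1)

      rng = np.random.default_rng(0)
      pi_true, p, n = 0.30, 0.7, 5000
      truthful = rng.random(n) < p           # which question each respondent answers
      trait = rng.random(n) < pi_true
      yes = np.where(truthful, trait, ~trait)
      print(warner_estimate(yes.mean(), p))  # close to 0.30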

  7. Universality in random quantum networks

    NASA Astrophysics Data System (ADS)

    Novotný, Jaroslav; Alber, Gernot; Jex, Igor

    2015-12-01

    Networks constitute efficient tools for assessing universal features of complex systems. In physical contexts, classical as well as quantum networks are used to describe a wide range of phenomena, such as phase transitions, intricate aspects of many-body quantum systems, or even characteristic features of a future quantum internet. Random quantum networks and their associated directed graphs are employed for capturing statistically dominant features of complex quantum systems. Here, we develop an efficient iterative method capable of evaluating the probability of a graph being strongly connected. It is proven that random directed graphs with constant edge-establishing probability are typically strongly connected, i.e., any ordered pair of vertices is connected by a directed path. This typical topological property of directed random graphs is exploited to demonstrate universal features of the asymptotic evolution of large random qubit networks. These results are independent of our knowledge of the details of the network topology. These findings suggest that other highly complex networks, such as a future quantum internet, may also exhibit similar universal properties.
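
    The "typically strongly connected" property is easy to check by simulation; a hedged sketch using networkx (assumed available), with n and p illustrative:

      import random
      import networkx as nx

      def prob_strongly_connected(n, p, trials=200, seed=0):
          random.seed(seed)
          hits = 0
          for _ in range(trials):
              G = nx.gnp_random_graph(n, p, seed=random.randrange(10 ** 9),
                                      directed=True)
              hits += nx.is_strongly_connected(G)
          return hits / trials

      for n in (20, 50, 100):
          print(n, prob_strongly_connected(n, 0.15))  # tends to 1 as n grows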

  8. Pseudo-Random Number Generators

    NASA Technical Reports Server (NTRS)

    Howell, L. W.; Rheinfurth, M. H.

    1984-01-01

    The package features a comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever the systems studied are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having certain specified distribution characteristics are an integral part of such simulations. The package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.

  9. Undecidability Theorem and Quantum Randomness

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2005-04-01

    As scientific folklore has it, Kurt Godel was once annoyed by the question of whether he saw any link between his Undecidability Theorem (UT) and the Uncertainty Relationship. His reaction, however, may indicate that he probably felt that such a hidden link could indeed exist but was unable to formulate it clearly. The informational version of UT (G.J.Chaitin) states the impossibility of ruling out algorithmic compressibility of an arbitrary digital string. Thus, (mathematical) randomness can only be disproven, not proven. Going from mathematical to physical (mainly quantum) randomness, we encounter seemingly random acts of radioactive decays of isotopes (such as C14), emission of excited atoms, tunneling effects, etc. However, our notion of quantum randomness (QR) may likely hit a similarly formidable wall of a physical version of UT, leading to seemingly bizarre ideas such as the Everett many-worlds model (D.Deutsch) or backward causation (J.A.Wheeler). Resolution may potentially lie in admitting some form of Aristotelean final causation (AFC) as an ultimate foundational principle (G.W.Leibniz) connecting purely mathematical (Platonic) grounding aspects with its physically observable consequences, such as the plethora of QR effects. Thus, what we interpret as QR may eventually be a manifestation of AFC in which UT serves as the delivery vehicle. Another example of the UT/QR/AFC connection is the question of identity (indistinguishability) of elementary particles (are all electrons exactly the same, or just approximately so to a very high degree?).

  10. Plated wire random access memories

    NASA Technical Reports Server (NTRS)

    Gouldin, L. D.

    1975-01-01

    A program was conducted to construct 4096-word by 18-bit random access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.

  11. Two approaches for ultrafast random bit generation based on the chaotic dynamics of a semiconductor laser.

    PubMed

    Li, Nianqiang; Kim, Byungchil; Chizhevsky, V N; Locquet, A; Bloch, M; Citrin, D S; Pan, Wei

    2014-03-24

    This paper reports the experimental investigation of two different approaches to random bit generation based on the chaotic dynamics of a semiconductor laser with optical feedback. By computing high-order finite differences of the chaotic laser intensity time series, we obtain time series with symmetric statistical distributions that are more conducive to ultrafast random bit generation. The first approach is guided by information-theoretic considerations and could potentially reach random bit generation rates as high as 160 Gb/s by extracting 4 bits per sample. The second approach is based on pragmatic considerations and could lead to rates of 2.2 Tb/s by extracting 55 bits per sample. The randomness of the bit sequences obtained from the two approaches is tested against three standard randomness tests (ENT, Diehard, and NIST tests), as well as by calculating the statistical bias and the serial correlation coefficients on longer sequences of random bits than those used in the standard tests.
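
    The finite-difference symmetrization at the heart of the first approach is easy to demonstrate; here a skewed surrogate series stands in for the sampled chaotic intensity (an assumption), and differencing drives the skewness of the amplitude distribution toward zero before bits are extracted.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.gamma(shape=2.0, scale=1.0, size=100000)   # skewed stand-in signal

      def skew(v):
          v = v - v.mean()
          return np.mean(v ** 3) / np.mean(v ** 2) ** 1.5

      for order in (0, 1, 2, 3):
          d = np.diff(x, n=order) if order else x
          print(order, round(skew(d), 4))      # differences become nearly symmetric

      d2 = np.diff(x, n=2)                     # quantize and keep low-order bits
      q = np.minimum(np.floor((d2 - d2.min()) / (d2.max() - d2.min()) * 256), 255)
      bits4 = q.astype(np.uint8) & 0x0F        # e.g., 4 bits per sample
      print(bits4[:10])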

  12. Permutation/randomization-based inference for environmental data.

    PubMed

    Spicer, R Christopher; Gangloff, Harry J

    2016-03-01

    Quantitative inference from environmental contaminant data is almost exclusively from within the classic Neyman/Pearson (N/P) hypothesis-testing model, by which the mean serves as the fundamental quantitative measure, but which is constrained by random sampling and the assumption of normality in the data. Permutation/randomization-based inference originally forwarded by R. A. Fisher derives probability directly from the proportion of the occurrences of interest and is not dependent upon the distribution of data or random sampling. Foundationally, the underlying logic and the interpretation of the significance of the two models vary, but inference using either model can often be successfully applied. However, data examples from airborne environmental fungi (mold), asbestos in settled dust, and 1,2,3,4-tetrachlorobenzene (TeCB) in soil demonstrate potentially misleading inference using traditional N/P hypothesis testing based upon means/variance compared to permutation/randomization inference using differences in frequency of detection (Δf_d). Bootstrapping and permutation testing, which are extensions of permutation/randomization, confirm calculated p values via Δf_d and should be utilized to verify the appropriateness of a given data analysis by either model.
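
    A minimal sketch of the permutation test on Δf_d (data hypothetical): group labels are shuffled many times, and the p-value is the proportion of permutations producing a difference in detection frequency at least as extreme as the observed one.

      import numpy as np

      def perm_test_detection(a, b, n_perm=10000, seed=0):
          rng = np.random.default_rng(seed)
          a, b = np.asarray(a), np.asarray(b)
          observed = a.mean() - b.mean()           # delta f_d
          pooled = np.concatenate([a, b])
          count = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
              count += abs(diff) >= abs(observed)
          return observed, count / n_perm

      area1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]      # detect (1) / non-detect (0)
      area2 = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
      print(perm_test_detection(area1, area2))     # (delta f_d, p-value)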

  13. Apollo 14 rock samples

    NASA Technical Reports Server (NTRS)

    Carlson, I. C.

    1978-01-01

    Petrographic descriptions of all Apollo 14 samples larger than 1 cm in any dimension are presented. The sample description format consists of: (1) an introductory section which includes information on lunar sample location, orientation, and return containers, (2) a section on physical characteristics, which contains the sample mass, dimensions, and a brief description; (3) surface features, including zap pits, cavities, and fractures as seen in binocular view; (4) petrographic description, consisting of a binocular description and, if possible, a thin section description; and (5) a discussion of literature relevant to sample petrology is included for samples which have previously been examined by the scientific community.

  14. Rain sampling device

    DOEpatents

    Nelson, Danny A.; Tomich, Stanley D.; Glover, Donald W.; Allen, Errol V.; Hales, Jeremy M.; Dana, Marshall T.

    1991-01-01

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  15. Stardust Sample: Investigator's Guidebook

    NASA Technical Reports Server (NTRS)

    Allen, Carl

    2006-01-01

    In January 2006, the Stardust spacecraft returned the first in situ collection of samples from a comet, and the first samples of contemporary interstellar dust. Stardust is the first US sample return mission from a planetary body since Apollo, and the first ever from beyond the moon. This handbook is a basic reference source for allocation procedures and policies for Stardust samples. These samples consist of particles and particle residues in aerogel collectors, in aluminum foil, and in spacecraft components. Contamination control samples and unflown collection media are also available for allocation.

  16. Rain sampling device

    DOEpatents

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  17. Geographic sampling of urban soils for contaminant mapping: how many samples and from where.

    PubMed

    Griffith, Daniel A

    2008-12-01

    Properly sampling soils and mapping soil contamination in urban environments requires that impacts of spatial autocorrelation be taken into account. As spatial autocorrelation increases in an urban landscape, the amount of duplicate information contained in georeferenced data also increases, whether an entire population or some type of random sample drawn from that population is being analyzed, resulting in conventional power and sample size calculation formulae yielding incorrect sample size numbers vis-à-vis model-based inference. Griffith (in Annals, Association of American Geographers, 95, 740-760, 2005) exploits spatial statistical model specifications to formulate equations for estimating the necessary sample size needed to obtain some predetermined level of precision for an analysis of georeferenced data when implementing a tessellation stratified random sampling design, labeling this approach model-informed, since a model of latent spatial autocorrelation is required. This paper addresses issues of efficiency associated with these model-based results. It summarizes findings from a data collection exercise (soil samples collected from across Syracuse, NY), as well as from a set of resampling and from a set of simulation experiments following experimental design principles spelled out by Overton and Stehman (in Communications in Statistics: Theory and Methods, 22, 2641-2660). Guidelines are suggested concerning appropriate sample size (i.e., how many) and sampling network (i.e., where).

  18. A maternal screening program for congenital toxoplasmosis in Quindio, Colombia and application of mathematical models to estimate incidences using age-stratified data.

    PubMed

    Gomez-Marin, J E; Montoya-de-Londono, M T; Castano-Osorio, J C

    1997-08-01

    We studied 937 pregnant women from Quindio, Colombia for the presence of specific anti-Toxoplasma gondii IgG antibodies using the indirect immunofluorescence antibody technique (IFAT-IgG). Specific anti-T. gondii IgM antibodies detected using the immunosorbent agglutination assay (ISAgA-IgM) were investigated in patients with high titers in the IFAT-IgG (dilutions > or = 1:1,024). We used mathematical models based on the age-prevalence results of the IFAT-IgG to estimate the number of seroconversions, and these were compared with the IgM-based incidence results. We found 15 positive cases by ISAgA-IgM and we were able to follow the children of six mothers from this group, in which we found one case of congenital toxoplasmosis with the development of a retinal scar despite prenatal and postnatal treatment. The estimation of new cases for the annual total of pregnancies (approximately 8,000) in the Quindio region was 30-120 according to the ISAgA-IgM results and 57-85 using mathematical models. Thus, mathematical models based on age prevalence can give useful estimations of the magnitude of the problem.
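
    The paper's specific models are not given in this record; a common choice for age-prevalence data (an assumption here) is the simple catalytic model P(a) = 1 - exp(-lambda*a) with a constant force of infection lambda, which can be fitted and used to project annual seroconversions. The data below are hypothetical.

      import numpy as np
      from scipy.optimize import curve_fit

      age = np.array([17.0, 22.0, 27.0, 32.0, 37.0])    # age-band midpoints (years)
      prev = np.array([0.45, 0.52, 0.58, 0.63, 0.67])   # fraction IgG-positive

      def catalytic(a, lam):
          return 1.0 - np.exp(-lam * a)   # constant per-year force of infection

      (lam,), _ = curve_fit(catalytic, age, prev, p0=[0.03])
      print("force of infection per year:", lam)

      n_preg = 8000                        # annual pregnancies in the region
      susceptible = 1.0 - prev.mean()
      print("projected seroconversions/yr:", n_preg * susceptible * lam)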

  19. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  1. Organic random lasers in the weak-scattering regime

    NASA Astrophysics Data System (ADS)

    Polson, R. C.; Vardeny, Z. V.

    2005-01-01

    We used the ensemble-averaged power Fourier transform (PFT) of random laser emission spectra over the illuminated area to study random lasers with coherent feedback in four different disordered organic gain media in the weak scattering regime, where the light mean free path, ℓ*, is much larger than the emission wavelength. The disordered gain media include a π-conjugated polymer film, an opal photonic crystal infiltrated with a laser dye (rhodamine 6G; R6G) having optical gain in the visible spectral range, a suspension of titania balls in R6G solution, and biological tissues such as chicken breast infiltrated with R6G. We show the existence of universality among the random resonators in each gain medium that we tested, in which at the same excitation intensity a dominant random cavity is excited in different parts of the sample. We show a second universality when scaling the average PFT of the four different media by ℓ*; we found that the dominant cavity in each disordered gain medium scales with ℓ*. The excellent agreement obtained with computer simulations using a distribution of random microdisks, each contributing a number of longitudinal whispering gallery modes within the gain spectrum, unambiguously shows that random lasers in the weak scattering regime cannot be described by gain amplification of localized photon states.

  2. Random trinomial tree models and vanilla options

    NASA Astrophysics Data System (ADS)

    Ganikhodjaev, Nasir; Bayram, Kamola

    2013-09-01

    In this paper we introduce and study a random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m). We call the triple (u, d, m) an environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.
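
    A sketch of the usual (fixed-environment) trinomial pricer that the random model generalizes; the environment (u, d, m = 1), the probabilities, and the contract are illustrative inputs, and the random version would redraw (u, d) at each level. For a European payoff, summing the multinomial terminal distribution is equivalent to backward induction.

      import numpy as np
      from math import comb

      def trinomial_call(S0, K, u, d, steps, r=0.0, pu=1/3, pm=1/3):
          pd_ = 1.0 - pu - pm
          price = 0.0
          for j in range(steps + 1):               # j up-moves, k down-moves
              for k in range(steps + 1 - j):
                  prob = (comb(steps, j) * comb(steps - j, k)
                          * pu ** j * pd_ ** k * pm ** (steps - j - k))
                  ST = S0 * u ** j * d ** k        # middle moves multiply by m = 1
                  price += prob * max(ST - K, 0.0)
          return price * np.exp(-r * steps)        # discount at per-step rate r

      print(trinomial_call(S0=100.0, K=100.0, u=1.1, d=1 / 1.1, steps=50))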

  3. Accurate Prediction of the Statistics of Repetitions in Random Sequences: A Case Study in Archaea Genomes

    PubMed Central

    Régnier, Mireille; Chassignet, Philippe

    2016-01-01

    Repetitive patterns in genomic sequences have great biological significance and also algorithmic implications. Analytic combinatorics allows one to derive formulas for the expected length of repetitions in a random sequence. Asymptotic results, which generalize previous work on a binary alphabet, are easily computable. Simulations on random sequences show their accuracy. As an application, the sample case of Archaea genomes illustrates how biological sequences may differ from random sequences. PMID:27376057

  5. Superposition Enhanced Nested Sampling

    NASA Astrophysics Data System (ADS)

    Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan

    2014-07-01

    The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: the probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of efficiently sampling the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  6. GROUND WATER SAMPLING ISSUES

    EPA Science Inventory

    Obtaining representative ground water samples is important for site assessment and
    remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...

  7. Improved Sampling Method Reduces Isokinetic Sampling Errors.

    ERIC Educational Resources Information Center

    Karels, Gale G.

    The particulate sampling system currently in use by the Bay Area Air Pollution Control District, San Francisco, California is described in this presentation for the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971. The method represents a practical, inexpensive tool that can…

  8. Parallelized nested sampling

    NASA Astrophysics Data System (ADS)

    Henderson, R. Wesley; Goggans, Paul M.

    2014-12-01

    One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall-clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: E[-log t] = (N_r - r + 1)^{-1} + (N_r - r + 2)^{-1} + ⋯ + N_r^{-1}, for shrinkage t with N_r live samples and r samples discarded at each iteration. The equation for the variance, Var(-log t) = (N_r - r + 1)^{-2} + (N_r - r + 2)^{-2} + ⋯ + N_r^{-2}, is used to find the appropriate number of live samples N_r to use with r > 1 to match the variance achieved with N_1 live samples and r = 1. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
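
    A small Python sketch of the bookkeeping these formulas imply: compute the moments of -log t for (N_r, r), then search for the smallest N_r whose accumulated variance per unit of mean log-compression matches the (N_1, r = 1) baseline. The Var/E matching criterion is one reasonable choice; the paper's normalization may differ.

        def shrinkage_moments(n_live, r):
            # E[-log t] and Var(-log t) for N_r live points with r discards
            # per iteration (the general formulas quoted above).
            terms = range(n_live - r + 1, n_live + 1)      # N_r-r+1 .. N_r
            mean = sum(1.0 / m for m in terms)
            var = sum(1.0 / m ** 2 for m in terms)
            return mean, var

        def matched_live_points(n1, r):
            # Smallest N_r whose variance per unit of mean shrinkage, Var/E,
            # does not exceed that of the serial (N_1, r = 1) scheme.
            m1, v1 = shrinkage_moments(n1, 1)
            n = n1
            while True:
                m, v = shrinkage_moments(n, r)
                if v / m <= v1 / m1:
                    return n
                n += 1

        print(matched_live_points(100, r=4))   # only a modest increase over 100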

  9. Random density matrices versus random evolution of open system

    NASA Astrophysics Data System (ADS)

    Pineda, Carlos; Seligman, Thomas H.

    2015-10-01

    We present and compare two families of ensembles of random density matrices. The first, static ensemble is obtained by foliating an unbiased ensemble of density matrices; as the foliation criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble is inspired by random matrix models for decoherence, in which one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is reached. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes at the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion opens by recalling similar considerations from scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix and exemplify the situation with the von Neumann entropy.
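
    A hedged sketch of the static side of this construction: draw unbiased (Hilbert-Schmidt) random density matrices via the Ginibre recipe ρ = GG†/Tr(GG†) and foliate the ensemble by purity Tr(ρ²). The binning is illustrative; the paper's exact measure and foliation may differ.

        import numpy as np

        rng = np.random.default_rng(1)

        def hs_random_density_matrix(dim):
            # Ginibre construction: rho = G G^dagger / Tr(G G^dagger).
            g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
            rho = g @ g.conj().T
            return rho / np.trace(rho).real

        purities = []
        for _ in range(10_000):
            rho = hs_random_density_matrix(4)          # two-qubit central system
            purities.append(np.trace(rho @ rho).real)

        # Foliate by purity: histogram bins play the role of fixed-purity leaves.
        counts, edges = np.histogram(purities, bins=10, range=(0.25, 1.0))
        print(counts)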

  10. Random walk with random resetting to the maximum position

    NASA Astrophysics Data System (ADS)

    Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory

    2015-11-01

    We study analytically a simple random walk model on a one-dimensional lattice, where at each time step the walker resets to the maximum of the already visited positions (to the rightmost visited site) with probability r, and with probability (1 - r) it undergoes a symmetric random walk, i.e., it hops to one of its neighboring sites, with equal probability (1 - r)/2. For r = 0, it reduces to a standard random walk whose typical distance grows as √n for large n. In the presence of a nonzero resetting rate 0 < r < 1, …
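
    A direct simulation of the model, handy for checking the √n behavior at r = 0 against the much faster growth once resetting to the maximum is switched on; the parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        def walk_with_reset_to_max(n_steps, r):
            # With probability r, jump to the rightmost visited site;
            # otherwise hop -1 or +1 with equal probability.
            x = x_max = 0
            for _ in range(n_steps):
                if rng.random() < r:
                    x = x_max
                else:
                    x += 1 if rng.random() < 0.5 else -1
                x_max = max(x_max, x)
            return x

        for r in (0.0, 0.1):
            finals = [walk_with_reset_to_max(10_000, r) for _ in range(200)]
            print(r, np.mean(finals))   # resetting ratchets the walker forward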

  11. Random walk with random resetting to the maximum position.

    PubMed

    Majumdar, Satya N; Sabhapandit, Sanjib; Schehr, Grégory

    2015-11-01

    We study analytically a simple random walk model on a one-dimensional lattice, where at each time step the walker resets to the maximum of the already visited positions (to the rightmost visited site) with probability r, and with probability (1-r) it undergoes a symmetric random walk, i.e., it hops to one of its neighboring sites, with equal probability (1-r)/2. For r=0, it reduces to a standard random walk whose typical distance grows as √n for large n. In the presence of a nonzero resetting rate 0<r<1, …

  12. Non-random patterns in viral diversity

    PubMed Central

    Anthony, Simon J.; Islam, Ariful; Johnson, Christine; Navarrete-Macias, Isamara; Liang, Eliza; Jain, Komal; Hitchens, Peta L.; Che, Xiaoyu; Soloyvov, Alexander; Hicks, Allison L.; Ojeda-Flores, Rafael; Zambrana-Torrelio, Carlos; Ulrich, Werner; Rostal, Melinda K.; Petrosov, Alexandra; Garcia, Joel; Haider, Najmul; Wolfe, Nathan; Goldstein, Tracey; Morse, Stephen S.; Rahman, Mahmudur; Epstein, Jonathan H.; Mazet, Jonna K.; Daszak, Peter; Lipkin, W. Ian

    2015-01-01

    It is currently unclear whether changes in viral communities will ever be predictable. Here we investigate whether viral communities in wildlife are inherently structured (implying predictability) by looking at whether communities are assembled through deterministic (often predictable) or stochastic (not predictable) processes. We sample macaque faeces across nine sites in Bangladesh and use consensus PCR and sequencing to discover 184 viruses from 14 viral families. We then use network modelling and statistical null-hypothesis testing to show the presence of non-random deterministic patterns at different scales, between sites and within individuals. The effects of determinism are not absolute, however, as stochastic patterns are also observed. In showing that determinism is an important process in viral community assembly, we conclude that it should be possible to forecast changes to some portion of a viral community; for the remainder, however, prediction will remain unlikely. PMID:26391192

  13. Ancestry assessment using random forest modeling.

    PubMed

    Hefner, Joseph T; Spradley, M Kate; Anderson, Bruce

    2014-05-01

    A skeletal assessment of ancestry relies on morphoscopic traits and skeletal measurements. Using a sample of American Black (n = 38), American White (n = 39), and Southwest Hispanic (n = 72) individuals, the present study investigates whether these data provide similar biological information and combines both data types into a single classification using a random forest model (RFM). Our results indicate that both data types provide similar information concerning the relationships among population groups. Moreover, combining both data types in an RFM increases the rate of correct ancestry allocation for an unknown cranium. The distribution of cross-validated grouped cases correctly classified using discriminant analyses and RFMs ranges between 75.4% (discriminant function analysis, morphoscopic data only) and 89.6% (RFM). Unlike the traditional, experience-based approach using morphoscopic traits, the inclusion of both data types in a single analysis is a quantifiable approach that accounts for more variation within and between groups, reduces misclassification rates, and captures aspects of cranial shape, size, and morphology.
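
    A sketch of the combined-data approach using scikit-learn's RandomForestClassifier. The arrays are randomly generated stand-ins for ordinal morphoscopic scores and continuous craniometric measurements (so cross-validated accuracy will sit at chance); only the mechanics of mixing the two data types are shown.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n = 149                                   # 38 + 39 + 72, as in the abstract
        morphoscopic = rng.integers(0, 4, size=(n, 6)).astype(float)  # ordinal scores
        metric = rng.normal(size=(n, 10))                             # measurements
        X = np.hstack([morphoscopic, metric])     # RFs handle mixed scales natively
        y = rng.choice(["Black", "White", "SW Hispanic"], size=n)     # placeholder labels

        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        print(cross_val_score(rf, X, y, cv=5).mean())  # cross-validated allocation rate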

  14. Random amplified polymorphic DNA analysis of genetically modified organisms.

    PubMed

    Yoke-Kqueen, Cheah; Radu, Son

    2006-12-15

    Randomly amplified polymorphic DNA (RAPD) was used to analyze 78 samples comprising certified reference materials (soya and maize powder), raw seeds (soybean and maize), processed food, and animal feed. A combination assay of two arbitrary primers in the RAPD analysis made it possible to distinguish genetically modified organism (GMO) reference materials from the samples tested. Dendrogram analysis of the RAPD profiles revealed 13 clusters at 45% similarity. The RAPD analysis showed that the maize and soybean samples clustered separately, as did the GMO and non-GMO products.
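
    A sketch of how such a dendrogram is commonly produced from 0/1 RAPD band profiles, using Dice distance and UPGMA linkage; the band matrix is random stand-in data, and the choice of similarity coefficient is an assumption, not taken from the paper.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(4)
        bands = rng.integers(0, 2, size=(78, 40)).astype(bool)  # sample x scored band

        dist = pdist(bands, metric="dice")                   # 1 - Dice similarity
        tree = linkage(dist, method="average")               # UPGMA dendrogram
        clusters = fcluster(tree, t=0.55, criterion="distance")  # cut at 45% similarity
        print(len(np.unique(clusters)), "clusters")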

  15. Developing Water Sampling Standards

    ERIC Educational Resources Information Center

    Environmental Science and Technology, 1974

    1974-01-01

    Participants in the D-19 symposium on aquatic sampling and measurement for water pollution assessment were informed that determining the extent of waste water stream pollution is not a cut-and-dried procedure. Topics discussed include field sampling, representative sampling from storm sewers, suggested sampler features and application of improved…

  16. SAMPLING OF CONTAMINATED SITES

    EPA Science Inventory

    A critical aspect of characterization of the amount and species of contamination of a hazardous waste site is the sampling plan developed for that site. If the sampling plan is not thoroughly conceptualized before sampling takes place, then certain critical aspects of the limits o...

  17. ORGANIC SPECIATION SAMPLING ARTIFACTS

    EPA Science Inventory

    Sampling artifacts for molecular markers from organic speciation of particulate matter were investigated by analyzing forty-one samples collected in Philadelphia as a part of the Northeast Oxidant and Particulate Study (NEOPS). Samples were collected using a high volume sampler ...

  18. Decision by Sampling

    ERIC Educational Resources Information Center

    Stewart, Neil; Chater, Nick; Brown, Gordon D. A.

    2006-01-01

    We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute's subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is given by its rank within the sample. We…
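
    A minimal sketch of the DbS computation: subjective value as the fraction of binary, ordinal comparisons the target wins against a sample retrieved from memory. The Gaussian "memory" distributions are illustrative only.

        import random

        def dbs_subjective_value(target, memory_sample):
            # Rank of the target within the sampled comparison values.
            wins = sum(target > x for x in memory_sample)
            return wins / len(memory_sample)

        random.seed(5)
        small_amounts = [random.gauss(100, 50) for _ in range(20)]
        large_amounts = [random.gauss(2000, 500) for _ in range(20)]
        print(dbs_subjective_value(500, small_amounts))   # near 1: 500 feels large
        print(dbs_subjective_value(500, large_amounts))   # near 0: 500 feels small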

  1. Toward Digital Staining using Imaging Mass Spectrometry and Random Forests

    PubMed Central

    Hanselmann, Michael; Köthe, Ullrich; Kirchner, Marc; Renard, Bernhard Y.; Amstalden, Erika R.; Glunde, Kristine; Heeren, Ron M. A.; Hamprecht, Fred A.

    2009-01-01

    We show on Imaging Mass Spectrometry (IMS) data that the Random Forest classifier can be used for automated tissue classification and that it results in predictions with high sensitivities and positive predictive values, even when inter-sample variability is present in the data. We further demonstrate how Markov Random Fields and vector-valued median filtering can be applied to reduce noise effects to further improve the classification results in a post-hoc smoothing step. Our study gives clear evidence that digital staining by means of IMS constitutes a promising complement to chemical staining techniques. PMID:19469555
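
    A hedged sketch of the post-hoc smoothing step: median-filter each class-probability channel of a pixel-wise classifier's output, then re-take the argmax. Channel-wise filtering is a simple stand-in for the vector-valued median filter named in the abstract.

        import numpy as np
        from scipy.ndimage import median_filter

        def smooth_label_map(prob_map, size=3):
            # prob_map: (height, width, n_classes) per-pixel class probabilities,
            # e.g. rf.predict_proba(spectra).reshape(h, w, n_classes).
            smoothed = np.stack(
                [median_filter(prob_map[..., c], size=size)
                 for c in range(prob_map.shape[-1])],
                axis=-1,
            )
            return smoothed.argmax(axis=-1)    # smoothed tissue-class labels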

  2. Sampling in epidemiological research: issues, hazards and pitfalls

    PubMed Central

    Tyrer, Stephen; Heyman, Bob

    2016-01-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples, as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficiently high proportion of those randomly selected actually completes the survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research. PMID:27087985
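
    A small simulation of the editorial's central point: when the trait of interest is correlated with being easy to reach, a convenience sample is biased while a simple random sample is not. All numbers are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)

        n = 100_000                                   # population size
        reachable = rng.random(n) < 0.3               # e.g. responds to email/text
        trait = rng.random(n) < np.where(reachable, 0.50, 0.20)

        print("true prevalence:       ", trait.mean())

        convenience = rng.choice(np.flatnonzero(reachable), size=1_000, replace=False)
        print("convenience estimate:  ", trait[convenience].mean())   # biased upward

        srs = rng.choice(n, size=1_000, replace=False)
        print("random-sample estimate:", trait[srs].mean())           # unbiased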

  3. Sampling in epidemiological research: issues, hazards and pitfalls.

    PubMed

    Tyrer, Stephen; Heyman, Bob

    2016-04-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples, as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficiently high proportion of those randomly selected actually completes the survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research.

  4. Accuracy Sampling Design Bias on Coarse Spatial Resolution Land Cover Data in the Great Lakes Region (United States and Canada)

    EPA Science Inventory

    A number of articles have investigated the impact of sampling design on remotely sensed landcover accuracy estimates. Gong and Howarth (1990) found significant differences for Kappa accuracy values when comparing purepixel sampling, stratified random sampling, and stratified sys...

  5. Aerosol sampling system

    DOEpatents

    Masquelier, Donald A.

    2004-02-10

    A system for sampling air and collecting particulate of a predetermined particle size range. A low pass section has an opening of a preselected size for gathering the air but excluding particles larger than the sample particles. An impactor section is connected to the low pass section and separates the air flow into a bypass air flow that does not contain the sample particles and a product air flow that does contain the sample particles. A wetted-wall cyclone collector, connected to the impactor section, receives the product air flow and traps the sample particles in a liquid.

  6. Rockballer Sample Acquisition Tool

    NASA Technical Reports Server (NTRS)

    Giersch, Louis R.; Cook, Brant T.

    2013-01-01

    It would be desirable to acquire rock and/or ice samples that extend below the surface of the parent rock or ice in extraterrestrial environments such as the Moon, Mars, comets, and asteroids. Such samples would allow measurements to be made further back into the geologic history of the rock, providing critical insight into the history of the local environment and the solar system. Such samples could also be necessary for sample return mission architectures that would acquire samples from extraterrestrial environments for return to Earth for more detailed scientific investigation.

  7. Soil Sampling Techniques For Alabama Grain Fields

    NASA Technical Reports Server (NTRS)

    Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.

    2003-01-01

    Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and 2) six composited cores collected randomly from a 3 x 3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested that grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (%CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using the zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
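
    A sketch of the %CV comparison used here: pool hypothetical soil-test values across zones and compare the whole-field %CV with the within-zone values. All numbers are invented for illustration.

        import numpy as np

        def percent_cv(x):
            # Coefficient of variation, %CV = 100 * sd / mean.
            x = np.asarray(x, dtype=float)
            return 100.0 * x.std(ddof=1) / x.mean()

        rng = np.random.default_rng(9)
        zone_means = {"A": 40.0, "B": 70.0, "C": 100.0}   # hypothetical soil P, ppm
        samples = {z: rng.normal(m, 8.0, size=25) for z, m in zone_means.items()}

        field = np.concatenate(list(samples.values()))
        print("field %CV:", round(percent_cv(field), 1))
        for zone, values in samples.items():
            print(f"zone {zone} %CV:", round(percent_cv(values), 1))  # lower within zones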

  8. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
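
    A sketch of the mixed design suggested at the end: select land units with probability proportional to animal abundance and estimate the infected total with a Hansen-Hurwitz-type estimator, which stays design-unbiased even though the disease clusters spatially. All numbers are invented, and complete observation within sampled units is assumed for simplicity.

        import numpy as np

        rng = np.random.default_rng(7)

        n_units = 200
        abundance = rng.integers(10, 200, size=n_units)          # animals per unit
        p_disease = np.clip(0.05 * abundance / abundance.mean(), 0.0, 0.5)
        infected = rng.binomial(abundance, p_disease)            # non-random spread
        true_prev = infected.sum() / abundance.sum()

        m = 30                                                   # units sampled, PPS
        draw_p = abundance / abundance.sum()
        picks = rng.choice(n_units, size=m, p=draw_p)            # with replacement
        est_total = np.mean(infected[picks] / draw_p[picks])     # Hansen-Hurwitz
        print(true_prev, est_total / abundance.sum())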

  9. Comparison of Web and Mail Surveys in Collecting Illicit Drug Use Data: A Randomized Experiment

    ERIC Educational Resources Information Center

    McCabe, Sean Esteban

    2004-01-01

    This randomized experiment examined survey mode effects for self-reporting illicit drug use by comparing prevalence estimates between a Web-based survey and a mail-based survey. A random sample of 7,000 traditional-aged undergraduate students attending a large public university in the United States was selected to participate in the spring of…

  10. Randomized Controlled Trial of a Preventive Intervention for Perinatal Depression in High-Risk Latinas

    ERIC Educational Resources Information Center

    Le, Huynh-Nhu; Perry, Deborah F.; Stuart, Elizabeth A.

    2011-01-01

    Objective: A randomized controlled trial was conducted to evaluate the efficacy of a cognitive-behavioral (CBT) intervention to prevent perinatal depression in high-risk Latinas. Method: A sample of 217 participants, predominantly low-income Central American immigrants who met demographic and depression risk criteria, were randomized into usual…

  11. Fernique-type inequalities and moduli of continuity for anisotropic Gaussian random fields.

    PubMed

    Meerschaert, Mark M; Wang, Wensheng; Xiao, Yimin

    2012-08-01

    This paper is concerned with sample path properties of anisotropic Gaussian random fields. We establish Fernique-type inequalities and utilize them to study the global and local moduli of continuity for anisotropic Gaussian random fields. Applications to fractional Brownian sheets and to the solutions of stochastic partial differential equations are investigated.

  12. Fernique-type inequalities and moduli of continuity for anisotropic Gaussian random fields

    PubMed Central

    Meerschaert, Mark M.; Wang, Wensheng; Xiao, Yimin

    2013-01-01

    This paper is concerned with sample path properties of anisotropic Gaussian random fields. We establish Fernique-type inequalities and utilize them to study the global and local moduli of continuity for anisotropic Gaussian random fields. Applications to fractional Brownian sheets and to the solutions of stochastic partial differential equations are investigated. PMID:24825922

  13. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choices between fixed-interval and random-interval schedules of reinforcement were investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
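
    A sketch of the constant-probability method the abstract describes: a Bernoulli(p) draw at every tick arms the reinforcer, so setups occur at random multiples of the tick with a mean scheduled interval of tick/p. Parameters are illustrative.

        import random

        def random_interval_schedule(p, tick_s, session_s):
            # Sample the probability p once per tick of length tick_s.
            t, setups = 0.0, []
            while t < session_s:
                if random.random() < p:
                    setups.append(t)
                t += tick_s
            return setups

        random.seed(8)
        setups = random_interval_schedule(p=1 / 30, tick_s=1.0, session_s=3600.0)
        print(len(setups))   # about 120 setups per hour for an RI 30-s schedule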

  14. The Probability of Small Schedule Values and Preference for Random-Interval Schedules

    ERIC Educational Resources Information Center

    Soreth, Michelle Ennis; Hineline, Philip N.

    2009-01-01

    Preference for working on variable schedules and temporal discrimination were simultaneously examined in two experiments using a discrete-trial, concurrent-chains arrangement with fixed interval (FI) and random interval (RI) terminal links. The random schedule was generated by first sampling a probability distribution after the programmed delay to…

  15. Sample Caching Subsystem

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Collins, Curtis L.

    2007-01-01

    A paper describes the Sample Caching Subsystem (SCS), a method for storing planetary core and soil samples in a container that seals the samples away from the environment to protect the integrity of the samples and any organics they might contain. This process places samples in individual sleeves that are sealed within a container for use by either the current mission or by following missions. A sample container is stored with its sleeves partially inserted. When a sample is ready to be contained, a transfer arm rotates over and grasps a sleeve, pulls it out of the container from below, rotates over and inserts the sleeve into a funnel where it is passively locked into place and then released from the arm. An external sampling tool deposits the sample into the sleeve, which is aligned with the tool via passive compliance of the funnel. After the sampling tool leaves the funnel, the arm retrieves the sleeve and inserts it all the way into the sample container. This action engages the seal. Full containers can be left behind for pick-up by subsequent science missions, and container dimensions are compatible for placement in a Mars Ascent Vehicle for later return to Earth.

  16. The Lunar Sample Compendium

    NASA Technical Reports Server (NTRS)

    Meyer, Charles

    2009-01-01

    The Lunar Sample Compendium is a succinct summary of the data obtained from 40 years of study of Apollo and Luna samples of the Moon. Basic petrographic, chemical and age information is compiled, sample-by-sample, in the form of an advanced catalog in order to provide a basic description of each sample. The LSC can be found online using Google. The initial allocation of lunar samples was done sparingly, because it was realized that scientific techniques would improve over the years and new questions would be formulated. The LSC is important because it enables scientists to select samples within the context of the work that has already been done and facilitates better review of proposed allocations. It also provides backup material for public displays, captures information found only in abstracts, grey literature and curatorial databases, and serves as ready access to the now-vast scientific literature.

  17. Phylogenetic effective sample size.

    PubMed

    Bartoszek, Krzysztof

    2016-10-21

    In this paper I address the question: how large is a phylogenetic sample? I propose a definition of a phylogenetic effective sample size for Brownian motion and Ornstein-Uhlenbeck processes: the regression effective sample size. I discuss how mutual information can be used to define an effective sample size in the non-normal process case and compare these two definitions to an existing concept of effective sample size (the mean effective sample size). Through a simulation study I find that the AICc is robust if one corrects for the number of species or the effective number of species. Lastly, I discuss how the concept of the phylogenetic effective sample size can be useful for biodiversity quantification, identification of interesting clades, and deciding on the importance of phylogenetic correlations. PMID:27343033
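
    One natural formalization, offered as an assumption rather than the paper's exact definition: for GLS estimation of a mean under a correlation matrix V with unit diagonal (e.g., from an ultrametric tree under Brownian motion), Var(μ̂) = σ²/(1ᵀV⁻¹1), so the tips carry as much information about the mean as n_e = 1ᵀV⁻¹1 independent observations.

        import numpy as np

        def regression_ess(V):
            # Effective sample size 1' V^{-1} 1 for a correlation matrix V.
            ones = np.ones(V.shape[0])
            return ones @ np.linalg.solve(V, ones)

        n = 10
        print(regression_ess(np.eye(n)))            # independent tips: n_e = 10
        star_like = np.full((n, n), 0.9) + 0.1 * np.eye(n)
        print(regression_ess(star_like))            # highly correlated: n_e ~ 1.1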

  18. Sample Proficiency Test exercise

    SciTech Connect

    Alcaraz, A; Gregg, H; Koester, C

    2006-02-05

    The current format of the OPCW proficiency tests sends multiple sets of two samples to an analysis laboratory. In each sample set, one is identified as a sample, the other as a blank. This method of conducting proficiency tests differs from how an OPCW designated laboratory would receive authentic samples (a set of three containers, each unidentified, consisting of the authentic sample, a control sample, and a blank sample). This exercise was designed to test how reporting would proceed if the proficiency tests were conducted under that more realistic scheme. As such, this is not an official OPCW proficiency test, and the attached report is one method by which LLNL might report its analyses under such a scheme. Therefore, the title on the report, ''Report of the Umpteenth Official OPCW Proficiency Test'', is meaningless and provides a bit of whimsy for the analysts and readers of the report.

  19. Curation of Frozen Samples

    NASA Technical Reports Server (NTRS)

    Fletcher, L. A.; Allen, C. C.; Bastien, R.

    2008-01-01

    NASA's Johnson Space Center (JSC) and the Astromaterials Curator are charged by NPD 7100.10D with the curation of all of NASA s extraterrestrial samples, including those from future missions. This responsibility includes the development of new sample handling and preparation techniques; therefore, the Astromaterials Curator must begin developing procedures to preserve, prepare and ship samples at sub-freezing temperatures in order to enable future sample return missions. Such missions might include the return of future frozen samples from permanently-shadowed lunar craters, the nuclei of comets, the surface of Mars, etc. We are demonstrating the ability to curate samples under cold conditions by designing, installing and testing a cold curation glovebox. This glovebox will allow us to store, document, manipulate and subdivide frozen samples while quantifying and minimizing contamination throughout the curation process.

  20. Lunar Sample Compendium

    NASA Technical Reports Server (NTRS)

    Meyer, C.

    2009-01-01

    The Lunar Sample Compendium is a succinct summary of what has been learned from the study of Apollo and Luna samples of the Moon. Basic information is compiled, sample-by-sample, in the form of an advanced catalog in order to provide a basic description of each sample. Information presented is carefully attributed to the original source publication; thus the Compendium also serves as ready access to the now-vast scientific literature pertaining to lunar samples. The Lunar Sample Compendium is a work in progress (and may always be). Future plans include: adding sections on additional samples, adding new thin section photomicrographs, replacing the faded photographs with newly digitized photos from the original negatives, attempting to correct the age data using modern decay constants, adding references to each section, and adding an internal search engine.