Science.gov

Sample records for age-stratified random sample

  1. Randomization and sampling issues

    USGS Publications Warehouse

    Geissler, P.H.

    1996-01-01

    The need for randomly selected routes and other sampling issues have been debated by the Amphibian electronic discussion group. Many excellent comments have been made, pro and con, but we have not reached consensus yet. This paper brings those comments together and attempts a synthesis. I hope that the resulting discussion will bring us closer to a consensus.

  2. Parametric models for samples of random functions

    SciTech Connect

    Grigoriu, M.

    2015-09-15

    A new class of parametric models, referred to as sample parametric models, is developed for random elements; these models match samples, rather than the first two moments and/or other global properties, of these elements. The models can be used to characterize, e.g., material properties at small scale, in which case their samples represent microstructures of material specimens selected at random from a population. The samples of the proposed models are elements of finite-dimensional vector spaces spanned by samples, eigenfunctions of Karhunen–Loève (KL) representations, or modes of singular value decompositions (SVDs). The implementation of sample parametric models requires knowledge of the probability laws of target random elements. Numerical examples including stochastic processes and random fields are used to demonstrate the construction of sample parametric models, assess their accuracy, and illustrate how these models can be used to efficiently solve stochastic equations.
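
    A minimal sketch of the kind of finite-dimensional construction the abstract mentions: drawing realizations of a Gaussian random field from a truncated Karhunen–Loève expansion. This is illustrative only (the function names and the exponential covariance are assumptions), not the paper's sample parametric model.

      import numpy as np

      def kl_samples(mean, cov, n_terms, n_samples, rng=None):
          # Truncated Karhunen-Loeve expansion: X = mean + sum_i sqrt(lam_i)*xi_i*phi_i,
          # with (lam_i, phi_i) the leading eigenpairs of the covariance matrix.
          rng = np.random.default_rng() if rng is None else rng
          lam, phi = np.linalg.eigh(cov)           # eigenvalues in ascending order
          lam, phi = lam[::-1], phi[:, ::-1]       # reorder to descending
          lam = np.clip(lam[:n_terms], 0.0, None)  # guard against tiny negatives
          xi = rng.standard_normal((n_samples, n_terms))
          return mean + (xi * np.sqrt(lam)) @ phi[:, :n_terms].T

      # Example: exponential covariance on a 1-D grid
      x = np.linspace(0, 1, 100)
      cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
      fields = kl_samples(np.zeros_like(x), cov, n_terms=20, n_samples=5)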

  3. Reduction of display artifacts by random sampling

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.

    1983-01-01

    The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.

  4. Multiband signal reconstruction for random equivalent sampling

    NASA Astrophysics Data System (ADS)

    Zhao, Y. J.; Liu, C. J.

    2014-10-01

    Random equivalent sampling (RES) is a sampling approach that can be applied to capture high-speed repetitive signals with a sampling rate that is much lower than the Nyquist rate. However, the uneven random distribution of the time interval between the excitation pulse and the signal degrades the signal reconstruction performance. For sparse multiband signal sampling, the compressed sensing (CS) based signal reconstruction algorithm can tease out the band supports with overwhelming probability and reduce the impact of the uneven random distribution in RES. In this paper, the mathematical model of RES behavior is constructed in the frequency domain. Based on the constructed mathematical model, the band supports of the signal can be determined. Experimental results demonstrate that, for a signal with unknown sparse multiband structure, the proposed CS-based signal reconstruction algorithm is feasible, and the CS reconstruction algorithm outperforms the traditional RES signal reconstruction method.

  5. Multiband signal reconstruction for random equivalent sampling.

    PubMed

    Zhao, Y J; Liu, C J

    2014-10-01

    Random equivalent sampling (RES) is a sampling approach that can be applied to capture high-speed repetitive signals with a sampling rate that is much lower than the Nyquist rate. However, the uneven random distribution of the time interval between the excitation pulse and the signal degrades the signal reconstruction performance. For sparse multiband signal sampling, the compressed sensing (CS) based signal reconstruction algorithm can tease out the band supports with overwhelming probability and reduce the impact of the uneven random distribution in RES. In this paper, the mathematical model of RES behavior is constructed in the frequency domain. Based on the constructed mathematical model, the band supports of the signal can be determined. Experimental results demonstrate that, for a signal with unknown sparse multiband structure, the proposed CS-based signal reconstruction algorithm is feasible, and the CS reconstruction algorithm outperforms the traditional RES signal reconstruction method. PMID:25362458

  6. Sampled-Data Consensus Over Random Networks

    NASA Astrophysics Data System (ADS)

    Wu, Junfeng; Meng, Ziyang; Yang, Tao; Shi, Guodong; Johansson, Karl Henrik

    2016-09-01

    This paper considers the consensus problem for a network of nodes with random interactions and sampled-data control actions. We first show that consensus in expectation, in mean square, and almost surely are equivalent for a general random network model when the inter-sampling interval and network size satisfy a simple relation. The three types of consensus are shown to be simultaneously achieved over an independent or a Markovian random network defined on an underlying graph with a directed spanning tree. For both independent and Markovian random network models, necessary and sufficient conditions for mean-square consensus are derived in terms of the spectral radius of the corresponding state transition matrix. These conditions are then interpreted as the existence of a critical value of the inter-sampling interval, below which global mean-square consensus is achieved and above which the system diverges in the mean-square sense for some initial states. Finally, we establish an upper bound on the inter-sampling interval below which almost sure consensus is reached, and a lower bound on the inter-sampling interval above which almost sure divergence is reached. Some numerical simulations are given to validate the theoretical results, and some discussion of the critical value of the inter-sampling interval for mean-square consensus is provided.

  7. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the usually favoured contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparsely observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales, as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling for constraining the overall galaxy power spectrum and the cosmological parameters.

  8. Acceptance sampling using judgmental and randomly selected samples

    SciTech Connect

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
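
    The paper's two-group model is not reproduced here, but the core "all observed samples acceptable" inference can be illustrated with a single-group Beta-binomial sketch. The prior parameters, threshold, and example counts below are assumptions for illustration, not the paper's values.

      import numpy as np

      def prob_unsampled_acceptable(n_sampled, n_unsampled, frac=0.99,
                                    a=1.0, b=1.0, n_draws=100_000, rng=None):
          # Posterior probability that at least `frac` of the unsampled items
          # are acceptable, given that all n_sampled observed items were
          # acceptable, under a Beta(a, b) prior on per-item acceptability.
          rng = np.random.default_rng() if rng is None else rng
          p = rng.beta(a + n_sampled, b, size=n_draws)   # posterior draws of p
          k = rng.binomial(n_unsampled, p)               # acceptable among unsampled
          return np.mean(k >= frac * n_unsampled)

      print(prob_unsampled_acceptable(n_sampled=59, n_unsampled=941))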

  9. Sample controllability of impulsive differential systems with random coefficients

    NASA Astrophysics Data System (ADS)

    Zhang, Shuorui; Sun, Jitao

    2016-07-01

    In this paper, we investigate the controllability of impulsive differential systems with random coefficients. Impulsive differential systems with random coefficients are a different stochastic model from stochastic differential equations. Sufficient conditions for sample controllability of impulsive differential systems with random coefficients are obtained by using a random version of Sadovskii's fixed-point theorem. Finally, an example is given to illustrate our results.

  10. Age-Stratified Risk of Unexpected Uterine Sarcoma Following Surgery for Presumed Benign Leiomyoma

    PubMed Central

    Li, Li; Andikyan, Vaagn; Običan, Sarah G.; Cioffi, Angela; Hao, Ke; Dudley, Joel T.; Ascher-Walsh, Charles; Kasarskis, Andrew; Maki, Robert G.

    2015-01-01

    Background. Estimates of unexpected uterine sarcoma following surgery for presumed benign leiomyoma that use age-stratification are lacking. Patients and Methods. A retrospective cohort of 2,075 patients that had undergone myomectomy was evaluated to determine the case incidence of unexpected uterine sarcoma. An aggregate risk estimate was generated using a meta-analysis of similar studies plus our data. Database-derived age distributions of the incidence rates of uterine sarcoma and uterine leiomyoma surgery were used to stratify risk by age. Results. Of 2,075 patients in our retrospective cohort, 6 were diagnosed with uterine sarcoma. Our meta-analysis revealed 8 studies from 1980 to 2014. Combined with our study, 18 cases of leiomyosarcoma are reported in 10,120 patients, for an aggregate risk of 1.78 per 1,000 (95% confidence interval [CI]: 1.1–2.8) or 1 in 562. Eight cases of other uterine sarcomas were reported in 6,889 patients, for an aggregate risk of 1.16 per 1,000 (95% CI: 0.5–4.9) or 1 in 861. The summation of these risks gives an overall risk of uterine sarcoma of 2.94 per 1,000 (95% CI: 1.8–4.1) or 1 in 340. After stratification by age, we predict the risk of uterine sarcoma to range from a peak of 10.1 cases per 1,000, or 1 in 98, for patients aged 75–79 years to <1 case per 500 for patients aged <30 years. Conclusion. The risk of unexpected uterine sarcoma varies significantly across age groups. Our age-stratified predictive model should be incorporated to more accurately counsel patients and to assist in providing guidelines for the surgical technique for leiomyoma. PMID:25765878
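
    The headline point estimates can be reproduced directly from the counts reported above (the confidence intervals come from the meta-analysis and are not recomputed here):

      # Point estimates from the abstract, per 1,000 patients and as 1-in-N.
      lms = 18 / 10_120        # leiomyosarcoma: 18 cases in 10,120 patients
      other = 8 / 6_889        # other uterine sarcomas: 8 cases in 6,889 patients
      overall = lms + other
      for name, r in [("LMS", lms), ("other sarcomas", other), ("overall", overall)]:
          print(f"{name}: {1000 * r:.2f} per 1,000  (~1 in {round(1 / r)})")
      # -> 1.78 per 1,000 (1 in 562), 1.16 per 1,000 (1 in 861), 2.94 per 1,000 (1 in 340)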

  11. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
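
    A toy sketch of the sampling scheme described above: pick a random center pixel, then include nearby pixels with a probability that decays with distance from the center. The Gaussian falloff, its width, and all parameter values are assumptions; the paper's exact probability rule may differ.

      import numpy as np

      def localized_random_mask(shape, n_centers, radius, p_peak=1.0, rng=None):
          # Binary sampling mask built from randomly chosen centers; each
          # nearby pixel is measured with probability decaying in distance.
          rng = np.random.default_rng() if rng is None else rng
          h, w = shape
          yy, xx = np.mgrid[0:h, 0:w]
          mask = np.zeros(shape, dtype=bool)
          for _ in range(n_centers):
              cy, cx = rng.integers(0, h), rng.integers(0, w)
              d2 = (yy - cy) ** 2 + (xx - cx) ** 2
              p = p_peak * np.exp(-d2 / (2 * radius ** 2))  # Gaussian falloff
              mask |= rng.random(shape) < p
          return mask

      mask = localized_random_mask((64, 64), n_centers=40, radius=2.0)
      print(mask.mean())  # fraction of pixels measured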

  12. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling.

    PubMed

    Barranca, Victor J; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464

  13. Code System to Generate Latin Hypercube and Random Samples.

    SciTech Connect

    IMAN, RONALD L.

    1999-02-25

    Version: 00 LHS was written for the generation of multivariate samples either completely at random or by a constrained randomization termed Latin hypercube sampling (LHS). The generation of these samples is based on user-specified parameters which dictate the characteristics of the generated samples, such as type of sample (LHS or random), sample size, number of samples desired, correlation structure on input variables, and type of distribution specified on each variable. The following distributions are built into the program: normal, lognormal, uniform, loguniform, triangular, and beta. In addition, the samples from the uniform and loguniform distributions may be modified by changing the frequency of the sampling within subintervals, and a subroutine which can be modified by the user to generate samples from other distributions (including empirical data) is provided.
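
    A minimal sketch of basic Latin hypercube sampling in the spirit of the description above; it is not the LHS code system itself, and only uniform and normal marginals are shown.

      import numpy as np
      from scipy.stats import norm

      def latin_hypercube(n_samples, n_vars, rng=None):
          # One point per equal-probability stratum in each dimension,
          # with the strata randomly paired across dimensions.
          rng = np.random.default_rng() if rng is None else rng
          u = (np.arange(n_samples)[:, None]
               + rng.random((n_samples, n_vars))) / n_samples
          for j in range(n_vars):                       # decouple the columns
              u[:, j] = u[rng.permutation(n_samples), j]
          return u

      u = latin_hypercube(10, 2)   # LHS on [0,1]^2
      z = norm.ppf(u)              # map to normal marginals via the inverse CDF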

  14. Code System to Generate Latin Hypercube and Random Samples.

    1999-02-25

    Version: 00 LHS was written for the generation of multivariate samples either completely at random or by a constrained randomization termed Latin hypercube sampling (LHS). The generation of these samples is based on user-specified parameters which dictate the characteristics of the generated samples, such as type of sample (LHS or random), sample size, number of samples desired, correlation structure on input variables, and type of distribution specified on each variable. The following distributions are built into the program: normal, lognormal, uniform, loguniform, triangular, and beta. In addition, the samples from the uniform and loguniform distributions may be modified by changing the frequency of the sampling within subintervals, and a subroutine which can be modified by the user to generate samples from other distributions (including empirical data) is provided.

  15. Recent research (N = 9,305) underscores the importance of using age-stratified actuarial tables in sex offender risk assessments.

    PubMed

    Wollert, Richard; Cramer, Elliot; Waggoner, Jacqueline; Skelton, Alex; Vess, James

    2010-12-01

    A useful understanding of the relationship between age, actuarial scores, and sexual recidivism can be obtained by comparing the entries in equivalent cells from "age-stratified" actuarial tables. This article reports the compilation of the first multisample age-stratified table of sexual recidivism rates, referred to as the "multisample age-stratified table of sexual recidivism rates (MATS-1)," from recent research on Static-99 and another actuarial known as the Automated Sexual Recidivism Scale. The MATS-1 validates the "age invariance effect" that the risk of sexual recidivism declines with advancing age and shows that age-restricted tables underestimate risk for younger offenders and overestimate risk for older offenders. Based on data from more than 9,000 sex offenders, our conclusion is that evaluators should report recidivism estimates from age-stratified tables when they are assessing sexual recidivism risk, particularly when evaluating the aging sex offender. PMID:21098823

  16. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
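
    For the simplest case the abstract describes, the clustered sample size is the individually randomized sample size inflated by the standard design effect DE = 1 + (m - 1) * ICC for clusters of equal size m. A worked sketch with illustrative numbers:

      def cluster_sample_size(n_individual, cluster_size, icc):
          # Inflate an individually randomized sample size by the design
          # effect DE = 1 + (m - 1) * ICC, assuming equal cluster sizes m.
          de = 1 + (cluster_size - 1) * icc
          return de, n_individual * de

      de, n = cluster_sample_size(n_individual=128, cluster_size=20, icc=0.05)
      print(de, n)   # DE = 1.95, inflated n = 249.6; round up to whole clusters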

  17. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  18. A random spatial sampling method in a rural developing nation

    PubMed Central

    2014-01-01

    Background Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. Methods We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. Results This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Conclusions Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available. PMID:24716473
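
    A toy sketch of the bounding-box step in such a design: uniform random survey points in a latitude/longitude rectangle. The coordinates are illustrative, and a real application would clip points to the study-area polygon and handle projection distortion, as the abstract cautions.

      import numpy as np

      def random_survey_points(lat_min, lat_max, lon_min, lon_max, n, rng=None):
          # Uniform random points in a lat/lon bounding box; points falling
          # outside the study-area boundary would be rejected in practice.
          rng = np.random.default_rng() if rng is None else rng
          lats = rng.uniform(lat_min, lat_max, n)
          lons = rng.uniform(lon_min, lon_max, n)
          return np.column_stack([lats, lons])

      pts = random_survey_points(14.5, 15.5, -91.5, -90.5, n=30)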

  19. Adaptive importance sampling of random walks on continuous state spaces

    SciTech Connect

    Baggerly, K.; Cox, D.; Picard, R.

    1998-11-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  20. Describing Typical Capstone Course Experiences from a National Random Sample

    ERIC Educational Resources Information Center

    Grahe, Jon E.; Hauhart, Robert C.

    2013-01-01

    The pedagogical value of capstones has been regularly discussed within psychology. This study presents results from an examination of a national random sample of department webpages and an online survey that characterized the typical capstone course in terms of classroom activities and course administration. The department webpages provide an…

  1. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  2. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, a structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338
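
    A 1-D sketch of the core idea (randomly subsampling the NLM search window). The paper works on 3-D MRI volumes and adds a structure-tensor sampling pattern, neither of which is reproduced here, and all parameter values are assumptions.

      import numpy as np

      def snlm_denoise_voxel(img, idx, patch=1, search=5, frac=0.05,
                             h=0.1, rng=None):
          # Denoise one sample of a 1-D signal by non-local means, comparing
          # its patch against a random fraction `frac` of the search window
          # instead of the full window. Requires patch <= idx < len(img)-patch.
          rng = np.random.default_rng() if rng is None else rng
          lo, hi = patch, len(img) - patch           # valid patch centers
          window = np.arange(max(lo, idx - search), min(hi, idx + search + 1))
          k = max(1, int(frac * len(window)))
          cand = rng.choice(window, size=k, replace=False)   # random subset
          ref = img[idx - patch: idx + patch + 1]
          w = np.array([np.exp(-np.sum((img[c - patch: c + patch + 1] - ref) ** 2)
                               / h ** 2) for c in cand])
          return np.sum(w * img[cand]) / np.sum(w)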

  3. Selecting a Sample for Your Experiment: A Non-Random Stratified Sampling Approach

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2012-01-01

    The purpose of this paper is to develop a more general method for sample recruitment in experiments that is purposive (not random) and that results in a sample that is compositionally similar to the generalization population. This work builds on Tipton et al. (2011) by offering solutions to a larger class of problems than the non-overlapping…

  4. Age-stratified Bayesian analysis to estimate sensitivity and specificity of four diagnostic tests for detection of Cryptosporidium oocysts in neonatal calves.

    PubMed

    De Waele, Valerie; Berzano, Marco; Berkvens, Dirk; Speybroeck, Niko; Lowery, Colm; Mulcahy, Grace M; Murphy, Thomas M

    2011-01-01

    There is no gold standard diagnostic test for the detection of bovine cryptosporidiosis. Infection is usually highest in 2-week-old calves, and these calves also excrete high numbers of oocysts. These factors may give rise to variations in the sensitivity and specificity of the various diagnostic tests used to detect infection in calves of various ages. An age-stratified Bayesian analysis was carried out to determine the optimum diagnostic test to identify asymptomatic and clinical Cryptosporidium sp. infection in neonatal calves. Fecal samples collected from 82 calves at 1 week, 2 weeks, 3 weeks, and 4 weeks of age were subjected to the following tests: microscopic examination of smears stained with either phenol-auramine O or fluorescein isothiocyanate (FITC)-conjugated anti-Cryptosporidium monoclonal antibody, nested-PCR, and quantitative real-time PCR. The results confirmed a high prevalence of Cryptosporidium sp. infection, as well as a high level of oocyst excretion, in 2-week-old calves. The sensitivities of all the tests varied with the age of the calves. Quantitative real-time PCR proved to be the most sensitive and specific test for detecting infection irrespective of the age of the calf. The microscopic techniques were the least sensitive and exhibited only moderate efficiency with 2-week-old calves, which excreted large numbers of oocysts and were mostly diarrheic. It was concluded that, when interpreting the results of routine tests for bovine cryptosporidiosis, cognizance should be taken of the sensitivity of the tests in relation to the age of the calves and stage of infection. PMID:21048012

  5. Sampling Polymorphs of Ionic Solids using Random Superlattices

    NASA Astrophysics Data System (ADS)

    Stevanović, Vladan

    2016-02-01

    Polymorphism offers a rich and virtually unexplored space for discovering novel functional materials. To harness this potential, approaches capable of both exploring the space of polymorphs and assessing their realizability are needed. One such approach, devised for partially ionic solids, is presented. The structure prediction part is carried out by performing local density functional theory relaxations on a large set of random superlattices (RSLs) with atoms distributed randomly over different planes in a way that favors cation-anion coordination. Applying the RSL sampling to MgO, ZnO, and SnO2 reveals that the resulting probability of occurrence of a given structure offers a measure of its realizability, fully explaining the experimentally observed metastable polymorphs in these three systems.

  6. Randomized Sampling for Large Data Applications of SVM

    SciTech Connect

    Ferragut, Erik M; Laska, Jason A

    2012-01-01

    A trend in machine learning is the application of existing algorithms to ever-larger datasets. Support Vector Machines (SVM) have been shown to be very effective, but have been difficult to scale to large-data problems. Some approaches have sought to scale SVM training by approximating and parallelizing the underlying quadratic optimization problem. This paper pursues a different approach. Our algorithm, which we call Sampled SVM, uses an existing SVM training algorithm to create a new SVM training algorithm. It uses randomized data sampling to better extend SVMs to large data applications. Experiments on several datasets show that our method is faster than and comparably accurate to both the original SVM algorithm it is based on and the Cascade SVM, the leading data organization approach for SVMs in the literature. Further, we show that our approach is more amenable to parallelization than Cascade SVM.

  7. An environmental sampling model for combining judgment and randomly placed samples

    SciTech Connect

    Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.

    2007-08-23

    In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.

  8. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  9. Simple-random-sampling-based multiclass text classification algorithm.

    PubMed

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  10. Simple-Random-Sampling-Based Multiclass Text Classification Algorithm

    PubMed Central

    Liu, Wuying; Wang, Lin; Yi, Mianzhu

    2014-01-01

    Multiclass text classification (MTC) is a challenging issue and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory to store labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements. PMID:24778587

  11. Asymptomatic carriage of Neisseria meningitidis in a randomly sampled population.

    PubMed Central

    Caugant, D A; Høiby, E A; Magnus, P; Scheel, O; Hoel, T; Bjune, G; Wedege, E; Eng, J; Frøholm, L O

    1994-01-01

    To estimate the extent of meningococcal carriage in the Norwegian population and to investigate the relationship of several characteristics of the population to the carrier state, 1,500 individuals living in rural and small-town areas near Oslo were selected at random from the Norwegian National Population Registry. These persons were asked to complete a questionnaire and to volunteer for a bacteriological tonsillopharyngeal swab sampling. Sixty-three percent of the selected persons participated in the survey. Ninety-one (9.6%) of the volunteers harbored Neisseria meningitidis. The isolates were serogrouped, serotyped, tested for antibiotic resistance, and analyzed by multilocus enzyme electrophoresis. Eight (8.8%) of the 91 isolates represented clones of the two clone complexes that have been responsible for most of the systemic meningococcal disease in Norway in the 1980s. Age between 15 and 24, male sex, and active and passive smoking were found to be independently associated with meningococcal carriage in logistic regression analyses. Working outside the home and having an occupation in transportation or industry also increased the risk for meningococcal carriage in individuals older than 17, when corrections for gender and smoking were made. Assuming that our sample is representative of the Norwegian population, we estimated that about 40,000 individuals in Norway are asymptomatic carriers of isolates with epidemic potential. Thus, carriage eradication among close contacts of persons with systemic disease is unlikely to have a significant impact on the overall epidemiological situation. PMID:8150942

  12. A comparison of methods for representing sparsely sampled random quantities.

    SciTech Connect

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative, that is, that it only minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.

  13. USAC: a universal framework for random sample consensus.

    PubMed

    Raguram, Rahul; Chum, Ondrej; Pollefeys, Marc; Matas, Jirí; Frahm, Jan-Michael

    2013-08-01

    A computational problem that arises frequently in computer vision is that of estimating the parameters of a model from data that have been contaminated by noise and outliers. More generally, any practical system that seeks to estimate quantities from noisy data measurements must have at its core some means of dealing with data contamination. The random sample consensus (RANSAC) algorithm is one of the most popular tools for robust estimation. Recent years have seen an explosion of activity in this area, leading to the development of a number of techniques that improve upon the efficiency and robustness of the basic RANSAC algorithm. In this paper, we present a comprehensive overview of recent research in RANSAC-based robust estimation by analyzing and comparing various approaches that have been explored over the years. We provide a common context for this analysis by introducing a new framework for robust estimation, which we call Universal RANSAC (USAC). USAC extends the simple hypothesize-and-verify structure of standard RANSAC to incorporate a number of important practical and computational considerations. In addition, we provide a general-purpose C++ software library that implements the USAC framework by leveraging state-of-the-art algorithms for the various modules. This implementation thus addresses many of the limitations of standard RANSAC within a single unified package. We benchmark the performance of the algorithm on a large collection of estimation problems. The implementation we provide can be used by researchers either as a stand-alone tool for robust estimation or as a benchmark for evaluating new techniques. PMID:23787350
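
    For reference, the basic hypothesize-and-verify loop that USAC extends, shown as a minimal RANSAC line-fitting sketch (not the USAC library itself; threshold and iteration count are illustrative):

      import numpy as np

      def ransac_line(pts, n_iter=500, thresh=0.05, rng=None):
          # Plain RANSAC for 2-D line fitting: hypothesize from a minimal
          # 2-point sample, verify by counting inliers, keep the best model.
          rng = np.random.default_rng() if rng is None else rng
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              i, j = rng.choice(len(pts), size=2, replace=False)
              p, q = pts[i], pts[j]
              d = q - p
              norm = np.hypot(*d)
              if norm == 0:
                  continue
              n = np.array([-d[1], d[0]]) / norm      # unit normal to the line
              dist = np.abs((pts - p) @ n)            # point-to-line distances
              inliers = dist < thresh
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (p, d / norm)
          return best_model, best_inliers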

  14. Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry

    ERIC Educational Resources Information Center

    Stier, Sam

    2010-01-01

    Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of that population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…

  15. Stratified random sampling plan for an irrigation customer telephone survey

    SciTech Connect

    Johnston, J.W.; Davis, L.J.

    1986-05-01

    This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.

  16. Use of randomized sampling for analysis of metabolic networks.

    PubMed

    Schellenberger, Jan; Palsson, Bernhard Ø

    2009-02-27

    Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology. PMID:18940807

  17. A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn

    2006-01-01

    A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…

  18. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  19. Prevalence and Severity of College Student Bereavement Examined in a Randomly Selected Sample

    ERIC Educational Resources Information Center

    Balk, David E.; Walker, Andrea C.; Baker, Ardith

    2010-01-01

    The authors used stratified random sampling to assess the prevalence and severity of bereavement in college undergraduates, providing an advance over findings that emerge from convenience sampling methods or from anecdotal observations. Prior research using convenience sampling indicated that 22% to 30% of college students are within 12 months of…

  20. Conflict-cost based random sampling design for parallel MRI with low rank constraints

    NASA Astrophysics Data System (ADS)

    Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie

    2015-05-01

    In compressed sensing MRI, it is very important to design the sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high density region, so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields a lower normalized mean square error (NMSE) than VD Poisson discs.
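
    A dart-throwing sketch of the variable-density Poisson-disc baseline that the paper improves on. The radius rule and constants are assumptions, and the paper's conflict-cost/probability-model refinement is not implemented here.

      import numpy as np

      def vd_poisson_disc_mask(n, r_center=1.0, r_edge=6.0, n_darts=20_000,
                               rng=None):
          # Variable-density Poisson-disc-like k-space sampling by dart
          # throwing: accept a candidate if no previous sample lies within
          # a minimum distance that grows from k-space center to edge.
          rng = np.random.default_rng() if rng is None else rng
          pts = []
          for _ in range(n_darts):
              c = rng.uniform(-1, 1, size=2)                    # normalized k-space
              r_min = r_center + (r_edge - r_center) * np.hypot(*c)
              r_min /= n                                        # grid-relative units
              if all(np.hypot(*(c - p)) >= r_min for p in pts):
                  pts.append(c)
          return np.array(pts)

      samples = vd_poisson_disc_mask(n=64)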

  1. Microwave spectral analysis based on photonic compressive sampling with random demodulation.

    PubMed

    Chi, Hao; Mei, Yuan; Chen, Ying; Wang, Donghui; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2012-11-15

    In this Letter, we present a photonic compressive sampling scheme based on optical sampling and random demodulation for microwave spectral analysis. A novel (to our knowledge) approach to realizing the multiplication of a pseudorandom binary sequence and the input microwave signal of interest in the optical domain is proposed, which largely simplifies the implementation of the compressive sampling. A spectrally sparse signal can be successfully captured by an electrical digitizer with a sampling rate much lower than the Nyquist rate with the help of random demodulation and the sparse reconstruction algorithm. Identification of the signals with multiple frequency components is successfully demonstrated. PMID:23164863

  2. Reconstruction of seismic data with missing traces based on local random sampling and curvelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Cao, Siyuan; Li, Guofa; He, Yuan

    2015-04-01

    Field data acquisition is likely to yield seismic data with missing traces, yet subsequent processing requires complete data, so the missing traces must be reconstructed. Based on the sparsity of seismic data in the curvelet transform domain, the reconstruction of seismic data becomes a sparse optimization problem, and the gradient projection algorithm is employed to solve it. To overcome the limitations of uncontrolled random sampling, local random sampling is presented in this paper; it can not only control the size of the sampling gaps effectively, but also keep the randomness of the sampling. Numerical modeling shows that the reconstructed result of local random sampling is better than that of traditional random sampling and jitter sampling. In addition, the proposed approach is also applied to a pre-stack shot gather and a stacked section; the field examples again indicate that this method is effective and applicable for the reconstruction of seismic data with missing traces. Furthermore, this approach can provide satisfactory results for subsequent processing of the seismic data.

  3. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    SciTech Connect

    Žerovnik, G.; Trkov, A.; Kodeli, I.A.; Capote, R.; Smith, D.L.

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
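
    A sketch of what a transformation of correlation coefficients can look like in the bivariate lognormal case, using the exact moment relation between lognormal-space and normal-space correlations. Whether this matches the paper's exact procedure is an assumption; it illustrates the consistent-sampling idea.

      import numpy as np

      def lognormal_corr_to_normal(rho_x, s1, s2):
          # Exact relation for lognormals X_i = exp(Y_i), Y_i normal with
          # std dev s_i: invert rho_X for the normal-space correlation.
          num = np.log(1 + rho_x * np.sqrt((np.exp(s1**2) - 1)
                                           * (np.exp(s2**2) - 1)))
          return num / (s1 * s2)

      def correlated_lognormal(mu, sigma, rho_x, n, rng=None):
          # Sample two lognormals with target lognormal-space correlation.
          rng = np.random.default_rng() if rng is None else rng
          rho_z = lognormal_corr_to_normal(rho_x, sigma[0], sigma[1])
          cov = np.array([[sigma[0]**2, rho_z * sigma[0] * sigma[1]],
                          [rho_z * sigma[0] * sigma[1], sigma[1]**2]])
          z = rng.multivariate_normal(mu, cov, size=n)
          return np.exp(z)

      x = correlated_lognormal(mu=[0, 0], sigma=[0.5, 0.8], rho_x=0.6, n=10_000)
      print(np.corrcoef(x.T)[0, 1])   # should be close to the target 0.6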

  4. Age-stratified cut-off points for the nocturnal penile tumescence measurement using Nocturnal Electrobioimpedance Volumetric Assessment (NEVA(®) ) in sexually active healthy men.

    PubMed

    Tok, A; Eminaga, O; Burghaus, L; Herden, J; Akbarov, I; Engelmann, U; Wille, S

    2016-08-01

    The current nocturnal penile tumescence (NPT) measurement is based on standard cut-off levels defined regardless of age. This study was conducted to provide age-stratified cut-off points for NPT measurement. Forty sexually active healthy men between 20 and 60 years old were enrolled and divided equally into four groups defined by age (20-29, 30-39, 40-49 and 50-60 years). None of the candidates had sexual dysfunction or sleep disturbance or used supportive medication to enhance sexual function. Erectile function was evaluated by using the 5-item version of the international index of erectile function (IIEF-5). NPT was observed using the nocturnal electrobioimpedance volumetric assessment (NEVA(®) ). The NPT values of healthy men aged 20-60 years varied from 268.7% to 202.3%. The NPT differed significantly between age groups (P < 0.0009); however, no significant differences between men aged 30-39 and 40-49 (P = 0.593) were observed. Age was weakly associated with IIEF-5 scores (P = 0.004), whereas a strong and negative correlation between age and NPT (P < 0.0001) was found. IIEF-5 scores were not significantly associated with NPT (P = 0.95). Therefore, the standard values for NPT testing should be considered in the evaluation of the nocturnal penile activity of men of all ages. PMID:26498135

  5. Generating Random Samples of a Given Size Using Social Security Numbers.

    ERIC Educational Resources Information Center

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  6. Random Sampling of Quantum States: a Survey of Methods. And Some Issues Regarding the Overparametrized Method

    NASA Astrophysics Data System (ADS)

    Maziero, Jonas

    2015-12-01

    The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state for getting RQS from random DPDs and random unitary matrices. In the sequence, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Subsequently, we note an overly rapid concentration of measure in the quantum state space that appears in this parametrization.
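
    The Ginibre construction mentioned above is short enough to sketch: draw a complex Gaussian matrix G and form rho = G G† / tr(G G†), which is positive semidefinite with unit trace by construction.

      import numpy as np

      def ginibre_density_matrix(d, rng=None):
          # Random density matrix from a d x d complex Gaussian (Ginibre)
          # matrix: rho = G G^dagger / tr(G G^dagger).
          rng = np.random.default_rng() if rng is None else rng
          g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
          m = g @ g.conj().T
          return m / np.trace(m)

      rho = ginibre_density_matrix(4)
      # Unit trace; eigenvalues nonnegative up to numerical rounding.
      print(np.trace(rho).real, np.linalg.eigvalsh(rho).min())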

  7. Output-only modal identification by compressed sensing: Non-uniform low-rate random sampling

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2015-05-01

    Modal identification or testing of structures consists of two phases, namely, data acquisition and data analysis. Some structures, such as aircraft, high-speed machines, and plate-like civil structures, have active modes in the high-frequency range when subjected to high-speed or broadband excitation in their operational conditions. In the data acquisition stage, the Shannon-Nyquist sampling theorem indicates that capturing the high-frequency modes (signals) requires uniform high-rate sampling, resulting in sensing too many samples, which potentially imposes burdens on the data transfer (especially in wireless platforms) and the data analysis stage. This paper explores a newly emerging alternative signal sampling and analysis technique, compressed sensing, and investigates the feasibility of a new method for output-only modal identification of structures in a non-uniform low-rate random sensing framework based on a combination of compressed sensing (CS) and blind source separation (BSS). Specifically, in the data acquisition stage, CS sensors sample few non-uniform low-rate random measurements of the structural response signals, which turn out to be sufficient to capture the underlying mode information. Then, in the data analysis stage, the proposed method uses the BSS technique complexity pursuit (CP), recently explored by the authors, to directly decouple the non-uniform low-rate random samples of the structural responses, simultaneously yielding the mode shape matrix as well as the non-uniform low-rate random samples of the modal responses. Finally, CS with ℓ1-minimization recovers the uniform high-rate modal response from the CP-decoupled non-uniform low-rate random samples of the modal response, thereby enabling estimation of the frequency and damping ratio. Because CS sensors are currently in laboratory prototypes and not yet commercially available, their functionality (randomly sensing few non-uniform samples) is simulated in this study, which is performed on the

  8. Random Sampling Process Leads to Overestimation of β-Diversity of Microbial Communities

    PubMed Central

    Zhou, Jizhong; Jiang, Yi-Huei; Deng, Ye; Shi, Zhou; Zhou, Benjamin Yamin; Xue, Kai; Wu, Liyou; He, Zhili; Yang, Yunfeng

    2013-01-01

    The site-to-site variability in species composition, known as β-diversity, is crucial to understanding spatiotemporal patterns of species diversity and the mechanisms controlling community composition and structure. However, quantifying β-diversity in microbial ecology using sequencing-based technologies is a great challenge because of a high number of sequencing errors, bias, and poor reproducibility and quantification. Herein, based on general sampling theory, a mathematical framework is first developed for simulating the effects of random sampling processes on quantifying β-diversity when the community size is known or unknown. Also, using an analogous ball example under Poisson sampling with limited sampling efforts, the developed mathematical framework can exactly predict the low reproducibility among technically replicate samples from the same community of a certain species abundance distribution, which provides explicit evidence of random sampling processes as the main factor causing high percentages of technical variations. In addition, the predicted values under Poisson random sampling were highly consistent with the observed low percentages of operational taxonomic unit (OTU) overlap (<30% and <20% for two and three tags, respectively, based on both Jaccard and Bray-Curtis dissimilarity indexes), further supporting the hypothesis that the poor reproducibility among technical replicates is due to the artifacts associated with random sampling processes. Finally, a mathematical framework was developed for predicting sampling efforts to achieve a desired overlap among replicate samples. Our modeling simulations predict that several orders of magnitude more sequencing efforts are needed to achieve desired high technical reproducibility. These results suggest that great caution needs to be taken in quantifying and interpreting β-diversity for microbial community analysis using next-generation sequencing technologies. PMID:23760464

  9. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    PubMed Central

    Tian, Jiayong

    2015-01-01

    This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion. PMID:26089958
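
    The abstract does not specify which randomized response design is used; the classic Warner scheme illustrates the idea. Each respondent answers the sensitive statement with probability p and its complement otherwise, so P(yes) = p*pi + (1-p)*(1-pi), which can be inverted for the sensitive proportion pi (example numbers are made up):

      def warner_estimate(n_yes, n, p):
          # Warner randomized-response estimator; requires p != 0.5.
          lam = n_yes / n                            # observed "yes" proportion
          pi_hat = (lam - (1 - p)) / (2 * p - 1)     # invert P(yes)
          var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
          return pi_hat, var

      pi_hat, var = warner_estimate(n_yes=320, n=1000, p=0.7)
      print(pi_hat, var ** 0.5)   # estimate and its standard error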

  10. An Upper Bound for the Expected Range of a Random Sample

    ERIC Educational Resources Information Center

    Marengo, James; Lopez, Manuel

    2010-01-01

    We consider the expected range of a random sample of points chosen from the interval [0, 1] according to some probability distribution. We then use the notion of convexity to derive an upper bound for this expected range which is valid for all possible choices of this distribution. Finally we show that there is only one distribution for which this…

  11. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    ERIC Educational Resources Information Center

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  12. THE SELECTION OF A NATIONAL RANDOM SAMPLE OF TEACHERS FOR EXPERIMENTAL CURRICULUM EVALUATION.

    ERIC Educational Resources Information Center

    WELCH, WAYNE W.; AND OTHERS

    Members of the evaluation section of Harvard Project Physics, describing what is said to be the first attempt to select a national random sample of (high school physics) teachers, list the steps as (1) purchase of a list of physics teachers from the National Science Teachers Association (most complete available), (2) selection of 136 names by a…

  13. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    PubMed

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. The formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure, proportional to the residual variance of the outcome after removing the effect of the exposure, and inversely proportional to the square of the effect size. PMID:23934314
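
    The proportionalities spelled out in the last sentence can be turned into a back-of-the-envelope calculator. The sketch below assumes standardized variables and a two-sided Wald test, and follows the stated proportionalities rather than the authors' exact formulas.

```python
from scipy.stats import norm

def mr_sample_size(beta, rho_gx, var_residual=1.0, alpha=0.05, power=0.8):
    """n grows with var_residual and shrinks with (beta * rho_gx)**2,
    matching the proportionalities stated in the abstract."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z**2 * var_residual / (beta**2 * rho_gx**2)

# A weak instrument (rho_gx = 0.2) and a small effect need a very large n.
print(round(mr_sample_size(beta=0.1, rho_gx=0.2)))   # ~19,600
```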

  14. A Model for Predicting Behavioural Sleep Problems in a Random Sample of Australian Pre-Schoolers

    ERIC Educational Resources Information Center

    Hall, Wendy A.; Zubrick, Stephen R.; Silburn, Sven R.; Parsons, Deborah E.; Kurinczuk, Jennifer J.

    2007-01-01

    Behavioural sleep problems (childhood insomnias) can cause distress for both parents and children. This paper reports a model describing predictors of high sleep problem scores in a representative population-based random sample survey of non-Aboriginal singleton children born in 1995 and 1996 (1085 girls and 1129 boys) in Western Australia.…

  15. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    EPA Science Inventory

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  16. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    ERIC Educational Resources Information Center

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  17. Recidivism among Child Sexual Abusers: Initial Results of a 13-Year Longitudinal Random Sample

    ERIC Educational Resources Information Center

    Patrick, Steven; Marsh, Robert

    2009-01-01

    In the initial analysis of data from a random sample of all those charged with child sexual abuse in Idaho over a 13-year period, only one predictive variable was found that related to recidivism of those convicted. Variables such as ethnicity, relationship, gender, and age differences did not show a significant or even large association with…

  18. Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2011-01-01

    Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…

  19. A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan

    ERIC Educational Resources Information Center

    Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu

    2012-01-01

    To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…

  20. Sample size calculations for micro-randomized trials in mHealth.

    PubMed

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

    2016-05-30

    The use and development of mobile interventions are experiencing rapid growth. In 'just-in-time' mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near-future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment, as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26707831

  1. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    NASA Astrophysics Data System (ADS)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain measurement using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant spacing of the sensing pulse train in the time domain during dynamic strain measurement. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive, non-uniform, randomly distributed 2 kHz optical sensing pulse train from a triangular scanning frequency that varies periodically about a 500 Hz mean.

  2. On the comparison of the interval estimation of the Pareto parameter under simple random sampling and ranked set sampling techniques

    NASA Astrophysics Data System (ADS)

    Aissa, Aissa Omar; Ibrahim, Kamarulzaman; Dayyeh, Walid Abu; Zin, Wan Zawiah Wan

    2015-02-01

    Ranked set sampling (RSS) is recognized as a useful sampling scheme for improving the precision of parameter estimates and increasing the efficiency of estimation. This type of scheme is appropriate when the variable of interest is expensive or time consuming to quantify, but easy and cheap to rank. In this study, estimation of the shape parameter of the Pareto distribution of the first type, when the scale is known, is studied for data gathered under simple random sampling (SRS), RSS, and selective order statistics based on the maximum (SORSS(max)). The confidence intervals for the shape parameter of the Pareto distribution under the sampling techniques considered are determined. A simulation study is carried out to compare the confidence intervals in terms of coverage probabilities (CPs) and expected lengths (ELs). When the coverage probabilities and expected lengths of the confidence intervals determined under the different sampling methods are compared, the intervals are found to be more precise under RSS than under SRS. In particular, it is found that the coverage probability under SORSS(max) is closest to the nominal value of 0.95.
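
    The RSS precision gain reported here is easy to demonstrate by simulation. The sketch below draws balanced ranked set samples from a Pareto(α) with unit scale, applies the same naive maximum-likelihood shape estimator to the SRS and RSS samples, and compares variances; the set size, number of cycles, and α are invented, and perfect ranking within sets is assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true, m, cycles, reps = 2.0, 3, 20, 4000   # n = m * cycles per sample

def pareto(size):                                 # Pareto(alpha), scale 1
    return (1 - rng.random(size)) ** (-1 / alpha_true)

def shape_mle(x):                                 # n / sum(log x), known scale
    return x.size / np.log(x).sum()

def srs_sample():
    return pareto(m * cycles)

def rss_sample():
    sets = np.sort(pareto((cycles, m, m)), axis=2)   # rank each set of m
    return np.array([sets[c, i, i]                   # keep i-th order statistic
                     for c in range(cycles) for i in range(m)])

srs = [shape_mle(srs_sample()) for _ in range(reps)]
rss = [shape_mle(rss_sample()) for _ in range(reps)]
print("var(SRS) / var(RSS) =", np.var(srs) / np.var(rss))   # > 1 favours RSS
```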

  3. Nonuniform sampling of hypercomplex multidimensional NMR experiments: Dimensionality, quadrature phase and randomization

    PubMed Central

    Schuyler, Adam D; Maciejewski, Mark W; Stern, Alan S; Hoch, Jeffrey C

    2015-01-01

    Nonuniform sampling (NUS) in multidimensional NMR permits the exploration of higher dimensional experiments and longer evolution times than the Nyquist Theorem practically allows for uniformly sampled experiments. However, the spectra of NUS data include sampling-induced artifacts and may be subject to distortions imposed by sparse data reconstruction techniques, issues not encountered with the discrete Fourier transform (DFT) applied to uniformly sampled data. The characterization of these NUS-induced artifacts allows for more informed sample schedule design and improved spectral quality. The DFT–Convolution Theorem, via the point-spread function (PSF) for a given sampling scheme, provides a useful framework for exploring the nature of NUS sampling artifacts. In this work, we analyze the PSFs for a set of specially constructed NUS schemes to quantify the interplay between randomization and dimensionality for reducing artifacts relative to uniformly undersampled controls. In particular, we find a synergistic relationship between the indirect time dimensions and the “quadrature phase dimension” (i.e. the hypercomplex components collected for quadrature detection). The quadrature phase dimension provides additional degrees of freedom that enable partial-component NUS (collecting a subset of quadrature components) to further reduce sampling-induced aliases relative to traditional full-component NUS (collecting all quadrature components). The efficacy of artifact reduction is exponentially related to the dimensionality of the sample space. Our results quantify the utility of partial-component NUS as an additional means for introducing decoherence into sampling schemes and reducing sampling artifacts in high dimensional experiments. PMID:25899289
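
    The PSF view of sampling artifacts is simple to compute: the point-spread function is the DFT of the 0/1 sampling mask, and its largest sidelobe measures the worst alias. A one-dimensional sketch (grid size and retained-point count invented) contrasts uniform undersampling with random NUS:

```python
import numpy as np

rng = np.random.default_rng(3)
N, kept = 256, 64                                   # grid size, points kept

uniform = np.zeros(N); uniform[::N // kept] = 1     # every 4th grid point
random_ = np.zeros(N); random_[rng.choice(N, kept, replace=False)] = 1

for name, mask in (("uniform undersampling", uniform), ("random NUS", random_)):
    psf = np.abs(np.fft.fft(mask)) / kept           # PSF = DFT of the mask
    sidelobe = np.sort(psf)[-2]                     # largest alias/artifact
    print(f"{name:22s} largest sidelobe = {sidelobe:.2f} of the main lobe")
```

    Uniform undersampling produces coherent aliases as tall as the main lobe, while randomization smears the same energy into a low-level noise floor, the decoherence effect the paper quantifies across dimensions.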

  4. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    SciTech Connect

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H; Barber, Samuel K; Bouet, Nathalie; McKinney, Wayne R; Takacs, Peter Z; Voronov, Dmitriy L

    2010-09-17

    Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
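
    The reason a binary pseudo-random pattern can calibrate an instrument's MTF is that its power spectrum is flat, so any measured roll-off belongs to the instrument. A one-dimensional sketch using a maximum-length sequence, a standard BPR construction (the paper's BPRML samples realize the idea in a multilayer):

```python
import numpy as np
from scipy.signal import max_len_seq

seq = max_len_seq(10)[0].astype(float)        # 1023-element BPR sequence
x = seq - seq.mean()                          # remove the DC component
power = np.abs(np.fft.rfft(x))**2
ripple = power[1:].std() / power[1:].mean()   # 0 would be perfectly flat
print(f"relative spectral ripple: {ripple:.2e}")  # ~1e-13: a white test pattern
```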

  5. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    NASA Astrophysics Data System (ADS)

    Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2011-09-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  6. Characterization of Electron Microscopes with Binary Pseudo-random Multilayer Test Samples

    SciTech Connect

    V Yashchuk; R Conley; E Anderson; S Barber; N Bouet; W McKinney; P Takacs; D Voronov

    2011-12-31

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  7. On analysis-based two-step interpolation methods for randomly sampled seismic data

    NASA Astrophysics Data System (ADS)

    Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao

    2013-02-01

    Interpolating the missing traces of regularly or irregularly sampled seismic records is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform-domain sparsity of the few randomly recorded but informative seismic traces using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms: the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm, derived from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm, respectively. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
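
    A bare-bones version of the POCS-type iteration that these accelerated methods build on: alternate a sparsity projection (thresholding in the Fourier domain) with a data-consistency projection (re-inserting the known samples). One-dimensional, with an invented two-tone signal and an exponentially decaying threshold schedule; real seismic use would apply 2-D or curvelet-type transforms.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 512
t = np.arange(N)
signal = np.sin(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 23 * t / N)
mask = rng.random(N) < 0.4                 # keep ~40% of samples at random
observed = signal * mask

x, lam = observed.copy(), 0.9
for k in range(100):
    X = np.fft.fft(x)
    thresh = lam ** k * np.abs(X).max()    # exponentially decaying threshold
    X[np.abs(X) < thresh] = 0              # sparsity projection
    x = np.fft.ifft(X).real
    x[mask] = observed[mask]               # data-consistency projection
print("relative error:",
      np.linalg.norm(x - signal) / np.linalg.norm(signal))
```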

  8. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We first compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with

  9. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    SciTech Connect

    Yashchuk, V.V.; Conley, R.; Anderson, E.H.; Barber, S.K.; Bouet, N.; McKinney, W.R.; Takacs, P.Z. and Voronov, D.L.

    2010-12-08

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  10. The effects of spatial sampling on random noise for gyrokinetic PIC simulations in real space

    NASA Astrophysics Data System (ADS)

    Kiviniemi, T. P.; Sauerwein, U.

    2016-06-01

    We study the effects of cloud-in-cell sampling and gyroaveraging on random noise in real space (as opposed to the common Fourier space presentation), and show that together these can reduce the noise by a factor of 3 compared to nearest-grid-point sampling without gyroaveraging. Hence, an order of magnitude fewer test particles are needed for a given noise level. We derive equations for the noise level as a function of Larmor radius and also investigate the effect of gyroaveraging on noise in local gradients. The effect of the number of gyropoints on noise is also discussed.

  11. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136

  12. Sample size determination for testing equality in a cluster randomized trial with noncompliance.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2011-01-01

    For administrative convenience or cost efficiency, we may often employ a cluster randomized trial (CRT), in which the randomized units are clusters of patients rather than individual patients. Furthermore, for ethical reasons or by patients' decisions, it is not uncommon to encounter data in which some patients do not comply with their assigned treatments. Thus, the development of a sample size calculation procedure for a CRT with noncompliance is important and useful in practice. Under the exclusion restriction model, we have developed an asymptotic test procedure using a tanh⁻¹(x) transformation for testing equality between two treatments among compliers in a CRT with noncompliance. We have further derived a sample size formula accounting for both noncompliance and the intraclass correlation for a desired power 1 − β at a nominal α level. We have employed Monte Carlo simulation to evaluate the finite-sample performance of the proposed test procedure with respect to type I error, and the accuracy of the derived sample size formula with respect to power, in a variety of situations. Finally, we use data taken from a CRT studying vitamin A supplementation to reduce mortality among preschool children to illustrate the sample size calculation proposed here. PMID:21191850

  13. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Title 40 (Protection of Environment), § 761.308: Sample selection by random number generation on any two-dimensional square grid; applied under § 761.79(b)(3).

  14. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 40 (Protection of Environment), § 761.308: Sample selection by random number generation on any two-dimensional square grid; applied under § 761.79(b)(3).

  15. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 40 (Protection of Environment), § 761.308: Sample selection by random number generation on any two-dimensional square grid; applied under § 761.79(b)(3).

  16. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Title 40 (Protection of Environment), § 761.308: Sample selection by random number generation on any two-dimensional square grid; applied under § 761.79(b)(3).

  17. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Title 40 (Protection of Environment), § 761.308: Sample selection by random number generation on any two-dimensional square grid; applied under § 761.79(b)(3).
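
    Read across its five CFR editions above, the rule's core procedure is random selection of cells on a square grid. An illustrative (non-regulatory) sketch:

```python
import random

random.seed(42)
rows = cols = 10                 # a 10 x 10 grid over the surface to sample
n_samples = 5
for c in sorted(random.sample(range(rows * cols), n_samples)):
    print(f"sample at grid cell (row={c // cols}, col={c % cols})")
```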

  18. Sample size requirements to detect a two- or three-way interaction in longitudinal cluster randomized clinical trials with second level randomization

    PubMed Central

    Heo, Moonseong; Xue, Xiaonan; Kim, Mimi Y.

    2014-01-01

    Background: When randomizations are assigned at the cluster level for longitudinal cluster randomized trials (longitudinal-CRTs) with a continuous outcome, formulae for determining the required sample size to detect a two-way interaction effect between time and intervention are available. Purpose: To show that (1) those same formulae can also be applied to longitudinal trials when randomizations are assigned at the subject level within clusters; and (2) this property can be extended to 2-by-2 factorial longitudinal-CRTs with two treatments and different levels of randomization, for which testing a three-way interaction between time and the two interventions is of primary interest. Methods: We show that slope estimates from different treatment arms are uncorrelated regardless of whether randomization occurs at the third or second level, and also regardless of whether slopes are considered fixed or random in the mixed-effects model for testing two-way or three-way interactions. Sample size formulae are extended to unbalanced designs. Simulation studies were applied to verify the findings. Results: Sample size formulae for testing two-way and three-way interactions in longitudinal-CRTs with second-level randomization are identical to those for trials with third-level randomization. In addition, the total number of observations required for testing a three-way interaction is demonstrated to be four times as large as that required for testing a two-way interaction, regardless of level of randomization, for both fixed and random slope models. Limitations: The findings may only be applicable to longitudinal-CRTs with a normally distributed continuous outcome. Conclusions: All of the findings are validated by simulation studies and enable the design of longitudinal clinical trials to be more flexible in regard to level of randomization and allocation of clusters and subjects. PMID:24837325

  19. Acute stress symptoms during the second Lebanon war in a random sample of Israeli citizens.

    PubMed

    Cohen, Miri; Yahav, Rivka

    2008-02-01

    The aims of this study were to assess prevalence of acute stress disorder (ASD) and acute stress symptoms (ASS) in Israel during the second Lebanon war. A telephone survey was conducted in July 2006 of a random sample of 235 residents of northern Israel, who were subjected to missile attacks, and of central Israel, who were not subjected to missile attacks. Results indicate that ASS scores were higher in the northern respondents; 6.8% of the northern sample and 3.9% of the central sample met ASD criteria. Appearance of each symptom ranged from 15.4% for dissociative to 88.4% for reexperiencing, with significant differences between northern and central respondents only for reexperiencing and arousal. A low ASD rate and a moderate difference between areas subjected and not subjected to attack were found. PMID:18302184

  20. Expected value of sample information for multi-arm cluster randomized trials with binary outcomes.

    PubMed

    Welton, Nicky J; Madan, Jason J; Caldwell, Deborah M; Peters, Tim J; Ades, Anthony E

    2014-04-01

    Expected value of sample information (EVSI) measures the anticipated net benefit gained from conducting new research with a specific design to add to the evidence on which reimbursement decisions are made. Cluster randomized trials raise specific issues for EVSI calculations because 1) a hierarchical model is necessary to account for between-cluster variability when incorporating new evidence and 2) heterogeneity between clusters needs to be carefully characterized in the cost-effectiveness analysis model. Multi-arm trials provide parameter estimates that are correlated, which needs to be accounted for in EVSI calculations. Furthermore, EVSI is computationally intensive when the net benefit function is nonlinear, due to the need for an inner-simulation step. We develop a method for the computation of EVSI that avoids the inner simulation step for cluster randomized multi-arm trials with a binary outcome, where the net benefit function is linear in the probability of an event but nonlinear in the log-odds ratio parameters. We motivate and illustrate the method with an example of a cluster randomized 2 × 2 factorial trial for interventions to increase attendance at breast screening in the UK, using a previously reported cost-effectiveness model. We highlight assumptions made in our approach, extensions to individually randomized trials and inclusion of covariates, and areas for further developments. We discuss computation time, the research-design space, and the ethical implications of an EVSI approach. We suggest that EVSI is a practical and appropriate tool for the design of cluster randomized trials. PMID:24085289

  1. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    NASA Technical Reports Server (NTRS)

    Sree, David

    1992-01-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual-realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques were applied to improve the spectral estimates from randomly sampled data. Studies show that the frequency range of reliable spectral estimates can be extended up to about five times the mean sampling rate.
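
    The correlation-based slotting technique described here (implemented by the report in FORTRAN) is compact enough to sketch: products of sample pairs are accumulated into discrete lag "slots", yielding an autocorrelation estimate from randomly timed samples whose FFT gives a power spectral density. The record length, rates, and test tone below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
T, rate, f = 10.0, 200.0, 8.0                  # seconds, mean rate (Hz), tone
t = np.sort(rng.uniform(0, T, int(rate * T)))  # randomly timed samples
u = np.cos(2 * np.pi * f * t) + 0.2 * rng.standard_normal(t.size)
u -= u.mean()

dt, n_slots = 1 / (5 * rate), 400              # slot width, max lag
acc, cnt = np.zeros(n_slots), np.zeros(n_slots)
for i in range(t.size):                        # accumulate pair products
    lag_bins = ((t[i:] - t[i]) / dt).astype(int)
    ok = lag_bins < n_slots
    np.add.at(acc, lag_bins[ok], u[i] * u[i:][ok])
    np.add.at(cnt, lag_bins[ok], 1)
R = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)  # slotted autocorrelation
print("R(0) ~ signal power:", round(R[0], 3))
```

    Because the slot width here is one-fifth of the mean sampling interval, the estimate resolves correlations, and hence spectra, well above the mean rate, which is the property the report exploits.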

  2. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    NASA Astrophysics Data System (ADS)

    Sree, David

    1992-09-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual-realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of the correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order-spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques were applied to improve the spectral estimates from randomly sampled data. Studies show that the frequency range of reliable spectral estimates can be extended up to about five times the mean sampling rate.

  3. On the Creation of Representative Samples of Random Quasi-Orders

    PubMed Central

    Schrepp, Martin; Ünlü, Ali

    2015-01-01

    Dependencies between educational test items can be represented as quasi-orders on the item set of a knowledge domain and used for an efficient adaptive assessment of knowledge. One approach to uncovering such dependencies is by exploratory algorithms of item tree analysis (ITA), of which several variants are available. The basic tool for comparing the quality of such algorithms is the large-scale simulation study, which is crucially set up on a large collection of quasi-orders. A serious problem is that all known ITA algorithms are sensitive to the structure of the underlying quasi-order. Thus, it is crucial to base any simulation study that compares the algorithms upon samples of quasi-orders that are representative, meaning that each quasi-order is included in a sample with the same probability. Up to now, no method to create representative quasi-orders on larger item sets has been known. Non-optimal algorithms for quasi-order generation were used in previous studies, which caused misinterpretations and erroneous conclusions. In this paper, we present a method for creating representative random samples of quasi-orders. The basic idea is to consider random extensions of quasi-orders from lower to higher dimension and to discard extensions that do not satisfy the transitivity property. PMID:26640450
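
    For small item sets, the representativeness requirement, every quasi-order equally likely, can even be met by naive rejection sampling, which makes the idea concrete (the paper's extension-based construction is what makes larger item sets feasible). A sketch for four items:

```python
import itertools
import random

random.seed(7)
n = 4
items = range(n)

def is_transitive(rel):
    return all((i, k) in rel
               for (i, j) in rel for (j2, k) in rel if j == j2)

def random_quasi_order():
    """Uniform over quasi-orders: draw a uniformly random reflexive relation
    and reject it unless it is transitive."""
    while True:
        rel = {(i, i) for i in items}                      # reflexivity
        rel |= {p for p in itertools.permutations(items, 2)
                if random.random() < 0.5}
        if is_transitive(rel):
            return rel

print(sorted(random_quasi_order()))
```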

  4. Random sampling of skewed distributions implies Taylor's power law of fluctuation scaling.

    PubMed

    Cohen, Joel E; Xu, Meng

    2015-06-23

    Taylor's law (TL), a widely verified quantitative pattern in ecology and other sciences, describes the variance in a species' population density (or other nonnegative quantity) as a power-law function of the mean density (or other nonnegative quantity): Approximately, variance = a(mean)^b, a > 0. Multiple mechanisms have been proposed to explain and interpret TL. Here, we show analytically that observations randomly sampled in blocks from any skewed frequency distribution with four finite moments give rise to TL. We do not claim this is the only way TL arises. We give approximate formulae for the TL parameters and their uncertainty. In computer simulations and an empirical example using basal area densities of red oak trees from Black Rock Forest, our formulae agree with the estimates obtained by least-squares regression. Our results show that the correlated sampling variation of the mean and variance of skewed distributions is statistically sufficient to explain TL under random sampling, without the intervention of any biological or behavioral mechanisms. This finding connects TL with the underlying distribution of population density (or other nonnegative quantity) and provides a baseline against which more complex mechanisms of TL can be compared. PMID:25852144
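
    The paper's headline result is easy to see in simulation: block samples from a single skewed distribution already trace out a variance-mean power law. A sketch with an assumed lognormal "density" distribution and invented block sizes:

```python
import numpy as np

rng = np.random.default_rng(8)
blocks, per_block = 200, 50
x = rng.lognormal(0.0, 1.0, (blocks, per_block))  # skewed "densities"
m = x.mean(axis=1)
v = x.var(axis=1, ddof=1)
b, log_a = np.polyfit(np.log(m), np.log(v), 1)    # log v = log a + b log m
print(f"Taylor's law fit: variance = {np.exp(log_a):.2f} * mean^{b:.2f}")
```

    No ecological mechanism enters the simulation; the power law is produced entirely by the correlated sampling variation of the block means and variances, exactly as the paper's analysis predicts.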

  5. Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach

    SciTech Connect

    Eramo, R.; Bellini, M.; Corsi, C.; Liontos, I.; Cavalieri, S.

    2011-04-15

    Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.

  6. Large sample randomization inference of causal effects in the presence of interference

    PubMed Central

    Liu, Lan; Hudgens, Michael G.

    2013-01-01

    Recently, increasing attention has focused on making causal inference when interference is possible. In the presence of interference, treatment may have several types of effects. In this paper, we consider inference about such effects when the population consists of groups of individuals where interference is possible within groups but not between groups. A two stage randomization design is assumed where in the first stage groups are randomized to different treatment allocation strategies and in the second stage individuals are randomized to treatment or control conditional on the strategy assigned to their group in the first stage. For this design, the asymptotic distributions of estimators of the causal effects are derived when either the number of individuals per group or the number of groups grows large. Under certain homogeneity assumptions, the asymptotic distributions provide justification for Wald-type confidence intervals (CIs) and tests. Empirical results demonstrate the Wald CIs have good coverage in finite samples and are narrower than CIs based on either the Chebyshev or Hoeffding inequalities provided the number of groups is not too small. The methods are illustrated by two examples which consider the effects of cholera vaccination and an intervention to encourage voting. PMID:24659836

  7. Convergence Properties of Crystal Structure Prediction by Quasi-Random Sampling

    PubMed Central

    2015-01-01

    Generating sets of trial structures that sample the configurational space of crystal packing possibilities is an essential step in the process of ab initio crystal structure prediction (CSP). One effective methodology for performing such a search relies on low-discrepancy, quasi-random sampling, and our implementation of such a search for molecular crystals is described in this paper. Herein we restrict ourselves to rigid organic molecules and, by considering their geometric properties, build trial crystal packings as starting points for local lattice energy minimization. We also describe a method to match instances of the same structure, which we use to measure the convergence of our packing search toward completeness. The use of these tools is demonstrated for a set of molecules with diverse molecular characteristics, representative of areas of application where CSP has been applied. An important finding is that the lowest energy crystal structures are typically located early and frequently during a quasi-random search of phase space. It is usually the complete sampling of higher energy structures that requires extended sampling. We show how the procedure can first be refined, by targeting the volume of the generated crystal structures, and then extended across a range of space groups to make a full CSP search that locates experimentally observed structures and produces lists of hypothetical polymorphs. As the described method has also been created to lie at the base of more involved approaches to CSP, which are being developed within the Global Lattice Energy Explorer (Glee) software, a few of these extensions are briefly discussed. PMID:26716361
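
    The quasi-random ingredient can be illustrated with SciPy's quasi-Monte Carlo module: low-discrepancy points cover a packing-parameter hypercube far more evenly than pseudo-random ones, which is why low-energy regions tend to be visited early. The three "packing parameters" and their bounds below are invented stand-ins, not Glee's actual variables.

```python
import numpy as np
from scipy.stats import qmc   # SciPy >= 1.7

sob = qmc.Sobol(d=3, seed=9).random(256)          # points in [0, 1)^3
trial = qmc.scale(sob, l_bounds=[3.0, 3.0, 60.0],
                  u_bounds=[15.0, 15.0, 120.0])   # e.g. a, b (A), angle (deg)
print("first trial packing:", np.round(trial[0], 2))

prand = np.random.default_rng(9).random((256, 3))
print("Sobol discrepancy  :", qmc.discrepancy(sob))    # lower = more even
print("pseudo-random      :", qmc.discrepancy(prand))
```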

  8. Insulation workers in Belfast. 1. Comparison of a random sample with a control population1

    PubMed Central

    Wallace, William F. M.; Langlands, Jean H. M.

    1971-01-01

    Wallace, W. F. M., and Langlands, J. H. M. (1971). Brit. J. industr. Med., 28, 211-216. Insulation workers in Belfast. 1. Comparison of a random sample with a control population. A sample of 50 men was chosen at random from the population of asbestos insulators in Belfast and matched with a control series of men of similar occupational group with respect to age, height, and smoking habit. Significantly more of the insulators complained of cough and sputum and had basal rales on examination. Clubbing was assessed by means of measurements of the hyponychial angle of both index fingers. These angles were significantly greater in the group of insulators. Twenty-one insulators had X-rays which showed pleural calcification with or without pulmonary fibrosis; one control X-ray showed pulmonary fibrosis. The insulators had no evidence of airways obstruction, but static lung volume was reduced, their arterial oxygen tension was lower than that of the controls, and their alveolar-arterial oxygen gradient was greater. PMID:5557841

  9. Random sample community-based health surveys: does the effort to reach participants matter?

    PubMed Central

    Messiah, Antoine; Castro, Grettel; Rodríguez de la Vega, Pura; Acuna, Juan M

    2014-01-01

    Objectives: Conducting health surveys with community-based random samples is essential to capture an otherwise unreachable population, but these surveys can be biased if the effort to reach participants is insufficient. This study determines the desirable amount of effort to minimise such bias. Design: A household-based health survey with random sampling and face-to-face interviews. Up to 11 visits, organised by canvassing rounds, were made to obtain an interview. Setting: Single-family homes in an underserved and understudied population in North Miami-Dade County, Florida, USA. Participants: Of a probabilistic sample of 2200 household addresses, 30 corresponded to empty lots, 74 were abandoned houses, 625 households declined to participate and 265 could not be reached and interviewed within 11 attempts. Analyses were performed on the 1206 remaining households. Primary outcome: Each household was asked if any of their members had been told by a doctor that they had high blood pressure, heart disease including heart attack, cancer, diabetes, anxiety/depression, obesity or asthma. Responses to these questions were analysed by the number of visit attempts needed to obtain the interview. Results: Return per visit fell below 10% after four attempts, below 5% after six attempts and below 2% after eight attempts. As the effort increased, household size decreased, while household income and the percentage of interviewees active and employed increased; the proportions of the seven health conditions decreased, four of them significantly: heart disease 20.4% to 9.2%, high blood pressure 63.5% to 58.1%, anxiety/depression 24.4% to 9.2% and obesity 21.8% to 12.6%. Beyond the fifth attempt, however, cumulative percentages varied by less than 1% and precision varied by less than 0.1%. Conclusions: In spite of the early and steep drop, sustaining at least five attempts to reach participants is necessary to reduce selection bias. PMID:25510887

  10. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.

  11. Validation of the 2008 Landsat Burned Area Ecv Product for North America Using Stratified Random Sampling

    NASA Astrophysics Data System (ADS)

    Brunner, N. M.; Mladinich, C. S.; Caldwell, M. K.; Beal, Y. J. G.

    2014-12-01

    The U.S. Geological Survey is generating a suite of Essential Climate Variables (ECVs) products, as defined by the Global Climate Observing System, from the Landsat data archive. Validation protocols for these products are being established, incorporating the Committee on Earth Observing Satellites Land Product Validation Subgroup's best practice guidelines and validation hierarchy stages. The sampling design and accuracy measures follow the methodology developed by the European Space Agency's Climate Change Initiative Fire Disturbance (fire_cci) project (Padilla and others, 2014). A rigorous validation was performed on the 2008 Burned Area ECV (BAECV) prototype product, using a stratified random sample of 48 Thiessen scene areas overlaying Landsat path/rows distributed across several terrestrial biomes throughout North America. The validation reference data consisted of fourteen sample sites acquired from the fire_cci project, with the remaining sample sites newly generated from a densification of the stratified sampling for North America. The reference burned area polygons were generated using the ABAMS (Automatic Burned Area Mapping) software (Bastarrika and others, 2011; Izagirre, 2014). Accuracy results will be presented indicating strengths and weaknesses of the BAECV algorithm. References: Bastarrika, A., Chuvieco, E., and Martín, M.P., 2011, Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors: Remote Sensing of Environment, v. 115, no. 4, p. 1003-1012. Izagirre, A.B., 2014, Automatic Burned Area Mapping Software (ABAMS), Preliminary Documentation, Version 10 v4: Vitoria-Gasteiz, Spain, University of Basque Country, p. 27. Padilla, M., Chuvieco, E., Hantson, S., Theis, R., and Sandow, C., 2014, D2.1 - Product Validation Plan: UAH - University of Alcalá de Henares (Spain), 37 p.

  12. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportions of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the estimated transition probabilities and steady states diverge widely from the real values if one uses the standard deterministic approach for noisy measurements. This supports our argument that, for the analysis of FACS data, the observed state should be treated as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability. PMID:25257023

  13. Object motion tracking in the NDE laboratory by random sample iterative closest point

    NASA Astrophysics Data System (ADS)

    Radkowski, Rafael; Wehr, David; Gregory, Elizabeth; Holland, Stephen D.

    2016-02-01

    We present a computationally efficient technique for real-time motion tracking in the NDE laboratory. Our goal is to track object shapes in a flash thermography test stand to determine the position and orientation of the specimen, which facilitates registering thermography data to a 3D part model. Object shapes can be different specimens and fixtures. Specimens can be manually aligned at any test stand; the position and orientation of every a priori known shape can be computed and forwarded to the data management software. Our technique relies on the random sample consensus (RANSAC) approach to the iterative closest point (ICP) problem for identifying object shapes, and is thus robust in different situations. The paper introduces the computational techniques and experiments along with the results.

  14. Random sampling of the Green’s Functions for reversible reactions with an intermediate state

    SciTech Connect

    Plante, Ianik; Devroye, Luc; Cucinotta, Francis A.

    2013-06-01

    Exact random variate generators were developed to sample the Green's functions used in Brownian dynamics (BD) algorithms for simulations of chemical systems. These algorithms, which use less than a kilobyte of memory, provide a useful alternative to the table look-up method that has been used in similar work. The cases studied with this approach are (1) diffusion-influenced reactions; (2) reversible diffusion-influenced reactions; and (3) reactions with an intermediate state, such as enzymatic catalysis. The results are validated by comparison with those obtained by the independent reaction times (IRT) method. This work is part of our effort to develop models for understanding the role of radiation chemistry in the effects of radiation on the human body, and may eventually be included in event-based models of space radiation risk.

  15. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling

    PubMed Central

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
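
    The role of multi-photon hits can be quantified with a direct simulation of the paper's first module: photons land uniformly at random on the microvilli, and every hit beyond the first on a microvillus is summed sublinearly. The microvillus and photon counts below are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(10)

def sublinear_share(n_microvilli, n_photons, trials=200):
    """Average fraction of photons landing on an already-hit microvillus."""
    lost = 0.0
    for _ in range(trials):
        hits = np.bincount(rng.integers(0, n_microvilli, n_photons),
                           minlength=n_microvilli)
        multi = hits[hits > 1]
        lost += (multi.sum() - multi.size) / n_photons
    return lost / trials

for n in (300, 30_000):        # few vs. many sampling units, same light level
    print(f"{n:>6} microvilli: {sublinear_share(n, 3_000):.3f} "
          f"of photons hit an occupied microvillus")
```

    With tens of thousands of microvilli the share is a few percent or less, while a photoreceptor with only hundreds of sampling units saturates, matching the abstract's conclusion that quantum-gain-nonlinearity matters only when sampling units are scarce.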

  16. I am the NRA: an analysis of a national random sample of gun owners.

    PubMed

    Weil, D S; Hemenway, D

    1993-01-01

    Data from a national random sample of gun owners (N = 605) were used to determine whether members of the National Rifle Association (NRA) are a representative sample of all gun owners and how well the NRA's lobbying positions on gun control reflect the views of its membership and of nonmember gun owners. No obvious demographic distinctions were identified between member and nonmember gun owners, but handgun owners (odds ratio [OR], 1.69; 95% confidence interval [CI], 1.19 to 2.39) and individuals who owned six or more guns as opposed to just one gun (OR, 1.95; 95% CI, 1.22 to 3.10) were more likely to belong to the NRA. Nonmembers were more supportive of specific proposals to regulate gun ownership (OR, 1.82; 95% CI, 1.14 to 2.91), but a majority of both member and nonmember gun owners favored a waiting period for the purchase of a handgun (77% and 89%, respectively) and mandatory registration of handguns (59% and 75%). PMID:8060908
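
    For readers tracing the statistics, the odds ratios and confidence intervals quoted above are the standard 2×2-table quantities; a minimal sketch with invented counts (not the survey's data) follows.

    ```python
    # Odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table.
    # The counts below are hypothetical, purely to show the arithmetic.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a,b = exposed with/without outcome; c,d = unexposed with/without."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

    print(odds_ratio_ci(120, 180, 100, 205))  # e.g. NRA membership vs handgun ownership
    ```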

  17. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials.

    PubMed

    Carlisle, J B; Dexter, F; Pandit, J J; Shafer, S L; Yentis, S M

    2015-07-01

    In a previous paper, one of the authors (JBC) used a chi-squared method to analyse the means (SD) of baseline variables, such as height or weight, from randomised controlled trials by Fujii et al., concluding that the probabilities that the reported distributions arose by chance were infinitesimally small. Subsequent testing of that chi-squared method, using simulation, suggested that the method was incorrect. This paper corrects the chi-squared method and tests its performance, together with that of Monte Carlo simulations and ANOVA, in analysing the probability of random sampling. The corrected chi-squared method and the ANOVA method became inaccurate when applied to means that were reported imprecisely. Monte Carlo simulations confirmed that baseline data from 158 randomised controlled trials by Fujii et al. were different to those from 329 trials published by other authors, and that the distribution of Fujii et al.'s data was different to the expected distribution, both p < 10⁻¹⁶. The number of Fujii randomised controlled trials with unlikely distributions was lower with Monte Carlo simulation than with the 2012 chi-squared method: 102 vs 117 trials with p < 0.05; 60 vs 86 for p < 0.01; 30 vs 56 for p < 0.001; and 12 vs 24 for p < 0.00001, respectively. The Monte Carlo analysis nevertheless confirmed the original conclusion that the distribution of the data presented by Fujii et al. was extremely unlikely to have arisen from observed data. The Monte Carlo analysis may be an appropriate screening tool to check for non-random (i.e. unreliable) data in randomised controlled trials submitted to journals. PMID:26032950
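
    The Monte Carlo screen can be sketched in a few lines: for a reported baseline variable, simulate many randomised trials under genuine random sampling and ask how often the group means agree as closely as reported. All inputs below are made up for illustration; they are not data from Fujii et al. or the paper.

    ```python
    # Monte Carlo screen for "too similar" baseline means (illustrative inputs).
    import numpy as np

    rng = np.random.default_rng(2)
    n1 = n2 = 40                # group sizes
    sd = 10.0                   # reported SD of the baseline variable
    reported_diff = 0.1         # reported difference between group means

    diffs = (rng.normal(0, sd, (100_000, n1)).mean(axis=1)
             - rng.normal(0, sd, (100_000, n2)).mean(axis=1))
    print(np.mean(np.abs(diffs) <= reported_diff))  # small -> suspiciously close
    ```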

  18. Reality Check for the Chinese Microblog Space: A Random Sampling Approach

    PubMed Central

    Fu, King-wa; Chau, Michael

    2013-01-01

    Chinese microblogs have drawn global attention to this online application's potential impact on the country's social and political environment. However, representative and reliable statistics on Chinese microbloggers are limited. Using a random sampling approach, this study collected Chinese microblog data from the service provider, analyzing the profile and the pattern of usage for 29,998 microblog accounts. From our analysis, 57.4% (95% CI 56.9%, 58.0%) of the accounts' timelines were empty. Among the 12,774 sampled accounts with at least one status, 86.9% (95% CI 86.2%, 87.4%) did not make an original post during a 7-day study period. By contrast, 0.51% (95% CI 0.4%, 0.65%) wrote twenty or more original posts and 0.45% (95% CI 0.35%, 0.60%) reposted more than 40 unique messages within the 7-day period. A small group of microbloggers created the majority of content and drew other users' attention. About 4.8% (95% CI 4.4%, 5.2%) of the 12,774 users contributed more than 80% (95% CI 78.6%, 80.3%) of the original posts, and about 4.8% (95% CI 4.5%, 5.2%) managed to create posts that were reposted or received comments at least once. Moreover, a regression analysis revealed that the volume of followers is a key determinant of creating original microblog posts, reposting messages, being reposted, and receiving comments. The volume of friends is found to be linked only with the number of reposts. Gender differences and regional disparities in using microblogs in China are also observed. PMID:23520502
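
    The interval estimates above are ordinary normal-approximation CIs for sampled proportions. As a check, a count of 17,219 empty timelines out of 29,998 (a count consistent with the reported 57.4%) reproduces the quoted interval:

    ```python
    # Normal-approximation 95% CI for a sampled proportion.
    import math

    def proportion_ci(k, n, z=1.96):
        p = k / n
        se = math.sqrt(p * (1 - p) / n)
        return p, p - z * se, p + z * se

    print(proportion_ci(17219, 29998))  # ~ (0.574, 0.569, 0.580), as reported
    ```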

  19. Dietary magnesium, lung function, wheezing, and airway hyperreactivity in a random adult population sample.

    PubMed

    Britton, J; Pavord, I; Richards, K; Wisniewski, A; Knox, A; Lewis, S; Tattersfield, A; Weiss, S

    1994-08-01

    Magnesium is involved in a wide range of biological activities, including some that may protect against the development of asthma and chronic airflow obstruction. We tested the hypothesis that high dietary magnesium intake is associated with better lung function, and a reduced risk of airway hyper-reactivity and wheezing in a random sample of adults. In 2633 adults aged 18-70 sampled from the electoral register of an administrative area of Nottingham, UK, we measured dietary magnesium intake by semiquantitative food-frequency questionnaire, lung function as the 1-sec forced expiratory volume (FEV1), and atopy as the mean skin-prick test response to three common environmental allergens. We measured airway reactivity to methacholine in 2415 individuals, defining hyper-reactivity as a 20% fall in FEV1 after a cumulative dose of 12.25 μmol or less. Mean (SD) daily intake of magnesium was 380 (114) mg/day. After adjusting for age, sex, and height, and for the effects of atopy and smoking, a 100 mg/day higher magnesium intake was associated with a 27.7 (95% CI, 11.9-43.5) mL higher FEV1, and a reduction in the relative odds of hyper-reactivity by a ratio of 0.82 (0.72-0.93). The same incremental difference in magnesium intake was also associated with a reduction in the odds of self-reported wheeze within the past 12 months, adjusted for age, sex, smoking, atopy, and kilojoule intake, by a ratio of 0.85 (0.76-0.95). Dietary magnesium intake is independently related to lung function and the occurrence of airway hyper-reactivity and self-reported wheezing in the general population. Low magnesium intake may therefore be involved in the aetiology of asthma and chronic obstructive airways disease. PMID:7914305

  20. Neurofeedback Against Binge Eating: A Randomized Controlled Trial in a Female Subclinical Threshold Sample.

    PubMed

    Schmidt, Jennifer; Martin, Alexandra

    2016-09-01

    Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association. PMID:27121224

  1. Misperceptions of spoken words: Data from a random sample of American English words

    PubMed Central

    Albert Felty, Robert; Buchwald, Adam; Gruenenfelder, Thomas M.; Pisoni, David B.

    2013-01-01

    This study reports a detailed analysis of incorrect responses from an open-set spoken word recognition experiment of 1428 words designed to be a random sample of the entire American English lexicon. The stimuli were presented in six-talker babble to 192 young, normal-hearing listeners at three signal-to-noise ratios (0, +5, and +10 dB). The results revealed several patterns: (1) errors tended to have a higher frequency of occurrence than did the corresponding target word, and frequency of occurrence of error responses was significantly correlated with target frequency of occurrence; (2) incorrect responses were close to the target words in terms of number of phonemes and syllables but had a mean edit distance of 3; (3) for syllables, substitutions were much more frequent than either deletions or additions; for phonemes, deletions were slightly more frequent than substitutions; both were more frequent than additions; and (4) for errors involving just a single segment, substitutions were more frequent than either deletions or additions. The raw data are being made available to other researchers as supplementary material to form the beginnings of a database of speech errors collected under controlled laboratory conditions. PMID:23862832
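
    The edit-distance measure referenced in point (2) is the standard Levenshtein distance; a minimal implementation over phoneme or letter sequences:

    ```python
    # Levenshtein distance: minimum number of substitutions, deletions and
    # additions needed to turn sequence a into sequence b.
    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # addition
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    print(levenshtein("misperceive", "perceive"))  # -> 3
    ```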

  2. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    SciTech Connect

    Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
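
    The differential-evolution proposal at the heart of DREAM-style samplers is compact enough to sketch: each chain jumps along the difference of two other randomly chosen chains. Full DREAM adds randomized subspace sampling, crossover adaptation and outlier handling, so the following is only the core DE-MC step, with the usual 2.38/sqrt(2d) scaling.

    ```python
    # Core differential-evolution Metropolis step (DE-MC), a simplified relative
    # of DREAM's proposal mechanism. log_post is any log-posterior function.
    import numpy as np

    def de_mc_step(chains, log_post, rng):
        n, d = chains.shape
        gamma = 2.38 / np.sqrt(2 * d)
        out = chains.copy()
        for i in range(n):
            r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
            prop = chains[i] + gamma * (chains[r1] - chains[r2]) + rng.normal(0, 1e-6, d)
            if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                out[i] = prop
        return out

    # usage: chains = de_mc_step(chains, lambda x: -0.5 * x @ x, np.random.default_rng(3))
    ```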

  3. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    PubMed Central

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance; this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of fault nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contains most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789

  4. Model-wise and point-wise random sample consensus for robust regression and outlier detection.

    PubMed

    El-Melegy, Moumen T

    2014-11-01

    Popular regression techniques often suffer in the presence of data outliers. Most previous efforts to solve this problem have focused on using an estimation algorithm that minimizes a robust M-estimator-based error criterion instead of the usual non-robust mean squared error. However, the robustness gained from M-estimators is still low. This paper addresses robust regression and outlier detection in a random sample consensus (RANSAC) framework. It studies the classical RANSAC framework and highlights its model-wise nature for processing the data. Furthermore, it introduces for the first time a point-wise strategy of RANSAC. New estimation algorithms are developed following both the model-wise and point-wise RANSAC concepts. The proposed algorithms' theoretical robustness and breakdown points are investigated in a novel probabilistic setting. While the proposed concepts and algorithms are generic and general enough to adopt many regression machineries, the paper focuses on multilayered feed-forward neural networks in solving regression problems. The algorithms are evaluated on synthetic and real data, contaminated with high degrees of outliers, and compared to existing neural network training algorithms. Furthermore, to improve the time performance, parallel implementations of the two algorithms are developed and assessed to utilize the multiple CPU cores available on today's computers. PMID:25047916

  5. FREQUENCY OF ATTENDANCE AT RELIGIOUS SERVICES, CARDIOVASCULAR DISEASE, METABOLIC RISK FACTORS AND DIETARY INTAKE IN AMERICANS: AN AGE-STRATIFIED EXPLORATORY ANALYSIS

    PubMed Central

    OBISESAN, THOMAS; LIVINGSTON, IVOR; TRULEAR, HAROLD DEAN; GILLUM, FRANK

    2011-01-01

    Background Few data have been published on the association of attendance at religious services with cardiovascular morbidity and dietary and metabolic risk factors in representative samples of populations despite a known inverse association with mortality and smoking. Objective To test the null hypothesis that frequency of attendance at religious services is unrelated to prevalence or levels of cardiovascular disease, dietary and metabolic risk factors. Design Cross-sectional survey of a large national sample. Participants American men and women aged 20 years and over with complete data in the Third National Health and Nutrition Examination Survey (N = 14,192). Measurements Self-reported frequency of attendance at religious services, history of doctor-diagnosed diseases, food intake frequency, 24-hour dietary intake, health status, socio-demographic variables and measured serum lipids and body mass index. Results Weekly attenders were significantly less likely to report stroke, though only in African American women, even after adjusting for multiple variables (OR = 0.35; 95% CI, 0.19–0.66; p < 0.01). No association was seen for heart attack or diabetes. Fish intake at least weekly was more common in weekly attenders, significantly so only in African American women (odds ratio 1.24, 95% CI 1.01–1.58, p < 0.05) and in older Mexican American men (odds ratio 2.57, 95% CI 1.45–2.57, p < 0.01). In linear regression analyses, no significant independent associations were seen between attendance frequency and serum lipid levels or dietary intake of energy, or fat in g and % of kcal. Conclusion Hypotheses generated by these analyses are that in African American women stroke is less prevalent and weekly fish intake more prevalent among weekly attenders than others and that there are no significant independent associations of serum lipids, dietary intake, prevalent CHD, or diabetes with frequency of attendance of religious services. Independent testing of these hypotheses in other

  6. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is

  7. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.

  8. Computerized stratified random site-selection approaches for design of a ground-water-quality sampling network

    USGS Publications Warehouse

    Scott, J.C.

    1990-01-01

    Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
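
    The essential selection step described above, grouping candidate sites by a spatial characteristic and drawing a random subset from each category, can be sketched as follows; the field names and category labels are illustrative, not the report's GIS data model.

    ```python
    # Stratified random selection of sampling sites (illustrative sketch).
    import random

    def stratified_select(sites, n_per_category, seed=42):
        """sites: list of dicts like {'id': 17, 'category': 'urban'}."""
        rng = random.Random(seed)
        strata = {}
        for s in sites:
            strata.setdefault(s['category'], []).append(s)
        return {cat: rng.sample(pool, min(n_per_category.get(cat, 0), len(pool)))
                for cat, pool in strata.items()}

    # usage: stratified_select(all_sites, {'urban': 10, 'agricultural': 15})
    ```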

  9. Wavefront watermarking of technical and biological samples using digital random phase modulation and phase retrieval

    NASA Astrophysics Data System (ADS)

    Carpio, Justine Patricia L.; Almoro, Percival F.

    2014-10-01

    A technique for digital watermarking of smooth object wavefronts using digital random phase modulation and multiple-plane iterative phase retrieval is demonstrated experimentally. A complex-valued watermark is first encrypted using two random phase masks of known distributions before being superposed onto a set of host wavefront intensity patterns. The encryption scaling factor and the depth of randomization of the masks are optimized such that the amplitude and phase watermarks are decrypted successfully and do not distort the host wavefront. Given that the watermarked intensity patterns and the numerous decryption keys are available (i.e. distances between recording planes, light source wavelength, pixel size, random phase masks and their distances to the planes are all known), increasing the number of watermarked patterns used results in enhanced quality of the decrypted watermarks. The main advantage of wavefront watermarking via the phase retrieval approach compared to the holographic approach is the avoidance of reference-wave-induced aberration. Watermarking of wavefronts from lenses and unstained human cheek cells demonstrates the effectiveness of the technique.

  10. RELATIONSHIP OF NONSPECIFIC BRONCHIAL RESPONSIVENESS TO RESPIRATORY SYMPTOMS IN A RANDOM POPULATION SAMPLE (JOURNAL VERSION)

    EPA Science Inventory

    The relationship of airways responsiveness to respiratory symptom prevalence has been studied in a cross sectional analysis of a random subpopulation from a large scale population study on chronic obstructive lung disease (COLD) being conducted in the Netherlands. In 1905 subject...

  11. Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach

    ERIC Educational Resources Information Center

    Rotondi, Michael A.; Donner, Allan

    2009-01-01

    The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
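
    To make the role of the intraclass correlation concrete, the textbook design-effect correction (not the empirical Bayes procedure the article proposes) inflates the simple-random-sampling sample size by 1 + (m - 1) * ICC for clusters of size m:

    ```python
    # Standard design-effect inflation for a cluster randomized trial.
    import math

    def clusters_needed(n_srs, m, icc):
        """n_srs: per-arm size under simple random sampling; m: cluster size."""
        n_total = n_srs * (1 + (m - 1) * icc)   # design effect
        return math.ceil(n_total / m)           # clusters required per arm

    print(clusters_needed(n_srs=128, m=25, icc=0.02))  # -> 8 clusters per arm
    ```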

  12. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as... sampling by halves. Assume that the area to sample is a 1 meter square surface area (a square that has..., i.e., regardless of which way the surface is divided, each half is 1 half meter wide by 1 meter...

  13. Polytobacco use and multiple-product smoking among a random community sample of African-American adults

    PubMed Central

    Corral, Irma; Landrine, Hope; Simms, Denise Adams; Bess, Jukelia J

    2013-01-01

    Objectives Little is known about polytobacco use among African-American adults. This study is the first to explore this among a random, statewide, community sample of African-American adults. Setting Community-based sampling obtained a random, household-probability sample of African-American adults from California, surveyed door to door in randomly selected census tracts statewide. Participants Participants were a statewide, random-household sample of N=2118 African-American adults from California who completed a survey on past 30-day smoking of cigarettes, blunts, bidis, kreteks, cigarillos, marijuana and cigars. Results Almost half (49.3%) of the African-American cigarette-smokers and 14.9% of the cigarette non-smokers had smoked at least one non-cigarette product in the past 30 days. Smokers had a substantial prevalence of smoking cigarillos (28.7%) and blunts (27.7%). Logistic regressions revealed that the odds of smoking most of the non-cigarette products were higher for cigarette smokers and men, inversely related to age, and unrelated to socioeconomic status. However, smoking of blunts, bidis and kreteks was not predicted by cigarette smoking. Conclusions Smoking of cigarillos (eg, Phillies, Black & Mild) and blunts may be prevalent among African-American cigarette-smokers and non-smokers alike, but such products are not examined in most population-level smoking research. Smoking of these products should be included in surveillance studies, in cancer prevention programmes and in healthcare provider-assessment of smoking, and addressed in smoking cessation programmes as well. PMID:24334154

  14. Code to generate random identifiers and select QA/QC samples

    USGS Publications Warehouse

    Mehnert, Edward

    1992-01-01

    SAMPLID is a PC-based FORTRAN-77 code which generates unique numbers for identification of samples, selection of QA/QC samples, and generation of labels. These procedures are tedious, but using a computer code such as SAMPLID can increase efficiency and reduce or eliminate errors and bias. The algorithm used in SAMPLID for generating pseudorandom numbers is free of the statistical flaws present in commonly available algorithms.
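
    A modern re-imagining of what such a tool does, assigning unique random identifiers and flagging a random subset for QA/QC, might look like the following Python sketch (the original is FORTRAN-77; everything here is illustrative):

    ```python
    # Generate unique random sample IDs and randomly flag QA/QC duplicates.
    import random

    def make_ids(n_samples, n_qaqc, seed=1992):
        rng = random.Random(seed)
        ids = rng.sample(range(10000, 100000), n_samples)  # unique 5-digit IDs
        qaqc = set(rng.sample(ids, n_qaqc))                # subset slated for QA/QC
        return [(i, i in qaqc) for i in ids]

    for sample_id, is_qaqc in make_ids(10, 2):
        print(sample_id, "QA/QC" if is_qaqc else "")
    ```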

  15. Occurrence of aflatoxin M1 in randomly selected North African milk and cheese samples.

    PubMed

    Elgerbi, A M; Aidoo, K E; Candlish, A A G; Tester, R F

    2004-06-01

    Forty-nine samples of raw cow's milk and 20 samples of fresh white soft cheese were collected directly from 20 local dairy factories in the north-west of Libya and analysed for the presence of aflatoxin M1 (AFM1). The samples were analysed using a high-performance liquid chromatography technique for toxin detection and quantification. Thirty-five of the 49 milk samples (71.4%) showed AFM1 levels between 0.03 and 3.13 ng ml⁻¹ milk. Multiple analyses of five AFM1-free milk samples artificially contaminated with AFM1 at concentrations of 0.01, 0.05, 0.1, 1.0 and 3.0 ng ml⁻¹ showed average recoveries of 66.85, 72.41, 83.29, 97.94 and 98.25%, with coefficients of variation of 3.77, 4.11, 1.57, 1.29 and 0.54%, respectively. Fifteen of the 20 white soft cheese samples (75.0%) showed the presence of AFM1 at concentrations between 0.11 and 0.52 ng g⁻¹ of cheese. Multiple assays of five AFM1-free cheese samples spiked with different concentrations of AFM1 (0.1, 0.5, 1.0 and 3.0 ng g⁻¹) showed average recoveries of 63.23, 78.14, 83.29 and 88.68%, with coefficients of variation of 1.53, 9.90, 4.87 and 3.79%, respectively. The concentrations of AFM1 were lower in the cheese products than in the raw milk samples. PMID:15204538

  16. Hybrid random walk-linear discriminant analysis method for unwrapping quantitative phase microscopy images of biological samples

    PubMed Central

    Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.

    2015-01-01

    Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples. PMID:26305212

  17. Hybrid random walk-linear discriminant analysis method for unwrapping quantitative phase microscopy images of biological samples.

    PubMed

    Kim, Diane N H; Teitell, Michael A; Reed, Jason; Zangle, Thomas A

    2015-01-01

    Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples. PMID:26305212

  18. Hybrid random walk-linear discriminant analysis method for unwrapping quantitative phase microscopy images of biological samples

    NASA Astrophysics Data System (ADS)

    Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.

    2015-11-01

    Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples.

  19. The revised Temperament and Character Inventory: normative data by sex and age from a Spanish normal randomized sample

    PubMed Central

    Labad, Javier; Martorell, Lourdes; Gaviria, Ana; Bayón, Carmen; Vilella, Elisabet; Cloninger, C. Robert

    2015-01-01

    Objectives. The psychometric properties regarding sex and age for the revised version of the Temperament and Character Inventory (TCI-R) and its derived short version, the Temperament and Character Inventory (TCI-140), were evaluated with a randomized sample from the community. Methods. A randomized sample of 367 normal adult subjects from a Spanish municipality, who were representative of the general population based on sex and age, participated in the current study. Descriptive statistics and internal consistency according to α coefficient were obtained for all of the dimensions and facets. T-tests and univariate analyses of variance, followed by Bonferroni tests, were conducted to compare the distributions of the TCI-R dimension scores by age and sex. Results. On both the TCI-R and TCI-140, women had higher scores for Harm Avoidance, Reward Dependence and Cooperativeness than men, whereas men had higher scores for Persistence. Age correlated negatively with Novelty Seeking, Reward Dependence and Cooperativeness and positively with Harm Avoidance and Self-transcendence. Young subjects between 18 and 35 years had higher scores than older subjects in NS and RD. Subjects between 51 and 77 years scored higher in both HA and ST. The alphas for the dimensions were between 0.74 and 0.87 for the TCI-R and between 0.63 and 0.83 for the TCI-140. Conclusion. Results, which were obtained with a randomized sample, suggest that there are specific distributions of personality traits by sex and age. Overall, both the TCI-R and the abbreviated TCI-140 were reliable in the ‘good-to-excellent’ range. A strength of the current study is the representativeness of the sample. PMID:26713237

  20. Performance of analytical methods for overdispersed counts in cluster randomized trials: sample size, degree of clustering and imbalance.

    PubMed

    Durán Pacheco, Gonzalo; Hattendorf, Jan; Colford, John M; Mäusezahl, Daniel; Smith, Thomas

    2009-10-30

    Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empirical data, i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general, with Bayes-HM producing less dispersed results for random-effects estimates, although these were upwardly biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects arising from accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. PMID:19672840
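
    The simulation set-up described, overdispersed counts with cluster-level heterogeneity, corresponds to a gamma-mixed Poisson (negative binomial marginally); a hedged sketch with illustrative parameters:

    ```python
    # Simulate overdispersed counts for a two-arm cluster trial: cluster rates
    # vary (gamma), person-level counts are Poisson given the cluster rate.
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_arm(n_clusters, m, base_rate, rate_ratio=1.0, between_cv=0.5):
        shape = 1 / between_cv ** 2                       # gamma CV sets clustering
        rates = rng.gamma(shape, base_rate * rate_ratio / shape, n_clusters)
        return [rng.poisson(r, m) for r in rates]         # counts per cluster

    control = simulate_arm(10, 50, base_rate=2.0)
    treated = simulate_arm(10, 50, base_rate=2.0, rate_ratio=0.7)
    print(np.mean([c.mean() for c in control]), np.mean([t.mean() for t in treated]))
    ```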

  1. Sociodemographic factors associated with AIDS knowledge in a random sample of university students.

    PubMed

    Robb, H; Beltran, E D; Katz, D; Foxman, B

    1991-06-01

    A telephone survey was used to assess knowledge of the transmission, prevalence, and infectivity of acquired immunodeficiency syndrome (AIDS), and the safety of casual contact among 214 randomly selected university students. Males were more knowledgeable than females overall (odds ratio [OR], men/women = 4.8). Although most students understood the dangers of unprotected sex and intravenous needle sharing, up to 30% believed some kinds of casual contact (e.g., shared eating utensils) can transmit AIDS. Older students (greater than or equal to 23 yrs) were more knowledgeable than those 17 to 19 years old about the safety of casual contact (OR = 3.8). Students are in need of education programs that stress the ways AIDS is not transmitted. Since most students identified newspapers and television as their main sources of information, these may be effective vehicles for education efforts. PMID:1924104

  2. Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples

    NASA Astrophysics Data System (ADS)

    Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.

    2016-06-01

    In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthesising the affected areas via sparse reconstruction. For this purpose, a cloud and cloud-shadow mask must be given. With respect to previous works, the process has an increased degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values beneath a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision, and yields results with low overall radiometric and spectral distortions.

  3. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    SciTech Connect

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2010-07-09

    We discuss the results of SEM and TEM measurements of BPRML test samples fabricated from a BPRML (WSi2/Si with a fundamental layer thickness of 3 nm) using a dual-beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.

  4. Sampling reactive pathways with random walks in chemical space: Applications to molecular dissociation and catalysis

    NASA Astrophysics Data System (ADS)

    Habershon, Scott

    2015-09-01

    Automatically generating chemical reaction pathways is a significant computational challenge, particularly in the case where a given chemical system can exhibit multiple reactants and products, as well as multiple pathways connecting these. Here, we outline a computational approach to allow automated sampling of chemical reaction pathways, including sampling of different chemical species at the reaction end-points. The key features of this scheme are (i) introduction of a Hamiltonian which describes a reaction "string" connecting reactant and products, (ii) definition of reactant and product species as chemical connectivity graphs, and (iii) development of a scheme for updating the chemical graphs associated with the reaction end-points. By performing molecular dynamics sampling of the Hamiltonian describing the complete reaction pathway, we are able to sample multiple different paths in configuration space between given chemical products; by periodically modifying the connectivity graphs describing the chemical identities of the end-points we are also able to sample the allowed chemical space of the system. Overall, this scheme therefore provides a route to automated generation of a "roadmap" describing chemical reactivity. This approach is first applied to model dissociation pathways in formaldehyde, H2CO, as described by a parameterised potential energy surface (PES). A second application to the HCo(CO)3 catalyzed hydroformylation of ethene (oxo process), using density functional tight-binding to model the PES, demonstrates that our graph-based approach is capable of sampling the intermediate paths in the commonly accepted catalytic mechanism, as well as several secondary reactions. Further algorithmic improvements are suggested which will pave the way for treating complex multi-step reaction processes in a more efficient manner.

  5. Random sampling of the Central European bat fauna reveals the existence of numerous hitherto unknown adenoviruses.

    PubMed

    Vidovszky, Márton; Kohl, Claudia; Boldogh, Sándor; Görföl, Tamás; Wibbelt, Gudrun; Kurth, Andreas; Harrach, Balázs

    2015-12-01

    From over 1250 extant species of the order Chiroptera, 25 and 28 are known to occur in Germany and Hungary, respectively. Close to 350 samples originating from 28 bat species (17 from Germany, 27 from Hungary) were screened for the presence of adenoviruses (AdVs) using a nested PCR that targets the DNA polymerase gene of AdVs. An additional PCR was designed and applied to amplify a fragment from the gene encoding the IVa2 protein of mastadenoviruses. All German samples originated from organs of bats found moribund or dead. The Hungarian samples were excrements collected from colonies of known bat species, throat or rectal swab samples, taken from live individuals that had been captured for faunistic surveys and migration studies, as well as internal organs of dead specimens. Overall, 51 samples (14.73%) were found positive. We detected 28 seemingly novel and six previously described bat AdVs by sequencing the PCR products. The positivity rate was the highest among the guano samples of bat colonies. In phylogeny reconstructions, the AdVs detected in bats clustered roughly, but not perfectly, according to the hosts' families (Vespertilionidae, Rhinolophidae, Hipposideridae, Phyllostomidae and Pteropodidae). In a few cases, identical sequences were derived from animals of closely related species. On the other hand, some bat species proved to harbour more than one type of AdV. The high prevalence of infection and the large number of chiropteran species worldwide make us hypothesise that hundreds of different yet unknown AdV types might circulate in bats. PMID:26599097

  6. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial

    PubMed Central

    Tavernier, Elsa; Giraudeau, Bruno

    2015-01-01

    We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power was < 60%, as compared with the 80% nominal power); 41%, 16% and 6%, respectively, were overpowered (i.e., with real power > 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined. PMID:26173007
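
    The retro-fitting idea is simple to reproduce for a continuous outcome: compute the sample size for an assumed SD, then ask what power that size really delivers when the true SD differs. A sketch under the usual two-sample normal approximation (numbers illustrative):

    ```python
    # Plan n with an assumed SD, then compute the real power under the true SD.
    from math import ceil, sqrt
    from statistics import NormalDist

    Z = NormalDist().inv_cdf

    def n_per_arm(delta, sd, alpha=0.05, power=0.80):
        return ceil(2 * ((Z(1 - alpha / 2) + Z(power)) * sd / delta) ** 2)

    def real_power(n, delta, true_sd, alpha=0.05):
        z = delta / (true_sd * sqrt(2 / n)) - Z(1 - alpha / 2)
        return NormalDist().cdf(z)

    n = n_per_arm(delta=5, sd=10)                  # planned assuming SD = 10
    print(n, real_power(n, delta=5, true_sd=13))   # true SD 30% larger -> ~58% power
    ```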

  7. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
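
    The two weighting schemes being compared are easy to contrast numerically; the effect sizes, variances and sample sizes below are invented for illustration:

    ```python
    # Inverse-variance versus sample-size weighting of a set of effect sizes.
    import numpy as np

    effects = np.array([0.30, 0.10, 0.45])
    variances = np.array([0.02, 0.05, 0.08])   # estimated, hence themselves noisy
    n_sizes = np.array([200, 80, 50])

    print("inverse-variance:", np.average(effects, weights=1 / variances))
    print("sample-size:    ", np.average(effects, weights=n_sizes))
    ```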

  8. Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.

    PubMed

    Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K

    2016-01-01

    The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions the estimates depend on the study population. It is important to be aware of this issue and define a certain population under surveillance in order to obtain consistent results in an application of serological measures for public health purposes. PMID:26119415

  9. Lévy-Ciesielski random series as a useful platform for Monte Carlo path integral sampling.

    PubMed

    Predescu, Cristian

    2005-04-01

    We demonstrate that the Lévy-Ciesielski implementation of Lie-Trotter products enjoys several properties that make it extremely suitable for path-integral Monte Carlo simulations: fast computation of paths, fast Monte Carlo sampling, and the ability to use different numbers of time slices for the different degrees of freedom, commensurate with the quantum effects. It is demonstrated that a Monte Carlo simulation for which particles or small groups of variables are updated in a sequential fashion has a statistical efficiency that is always comparable to or better than that of an all-particle or all-variable update sampler. The sequential sampler results in significant computational savings if updating a variable costs only a fraction of the cost for updating all variables simultaneously or if the variables are independent. In the Lévy-Ciesielski representation, the path variables are grouped in a small number of layers, with the variables from the same layer being statistically independent. The superior performance of the fast sampling algorithm is shown to be a consequence of these observations. Both mathematical arguments and numerical simulations are employed in order to quantify the computational advantages of the sequential sampler, the Lévy-Ciesielski implementation of path integrals, and the fast sampling algorithm. PMID:15903818
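
    The layered structure referred to above can be illustrated with the Lévy-Ciesielski-style midpoint construction of a Brownian bridge: each layer adds independent midpoint displacements, which is why layer-wise Monte Carlo updates are cheap. A hedged sketch (illustrative, not the paper's implementation):

    ```python
    # Layered midpoint construction of a Brownian bridge on [0, 1].
    import numpy as np

    def bridge_by_layers(n_layers, rng):
        t = np.array([0.0, 1.0])
        b = np.array([0.0, 0.0])                  # bridge pinned at both ends
        for _ in range(n_layers):
            mid_t = (t[:-1] + t[1:]) / 2
            dt = t[1:] - t[:-1]
            # conditional midpoint: mean of neighbours + N(0, dt/4) displacement
            mid_b = (b[:-1] + b[1:]) / 2 + rng.normal(0, np.sqrt(dt / 4))
            t = np.insert(t, np.arange(1, len(t)), mid_t)
            b = np.insert(b, np.arange(1, len(b)), mid_b)
        return t, b

    t, b = bridge_by_layers(8, np.random.default_rng(5))  # 257 path points
    ```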

  10. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States

    PubMed Central

    Steimer, Andreas; Schindler, Kaspar

    2015-01-01

    Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and its input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing

  11. Global stratigraphy of Venus: Analysis of a random sample of thirty-six test areas

    NASA Technical Reports Server (NTRS)

    Basilevsky, Alexander T.; Head, James W., III

    1995-01-01

    The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. Mapping of such units and structures in 36 randomly distributed large regions shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky, 1993) is the earliest event detected. Our stratigraphic analyses suggest that following tessera formation, extensive volcanic flooding resurfaced at least 85% of the planet in the form of the presently-ridged and fractured plains. Several lines of evidence favor a high flux in the post-tessera period but we have no independent evidence for the absolute duration of ridged plains emplacement. During this time, the net state of stress in the lithosphere apparently changed from extensional to compressional, first in the form of extensive ridge belt development, followed by the formation of extensive wrinkle ridges on the flow units. Subsequently, there occurred local emplacement of smooth and lobate plains units which are presently essentially undeformed. The major events in the latest 10% of the presently preserved history of Venus are continued rifting and some associated volcanism, and the redistribution of eolian material largely derived from impact crater deposits. Detailed geologic mapping and stratigraphic synthesis are necessary to test this sequence and to address many of

  12. The Prospective and Retrospective Memory Questionnaire: a population-based random sampling study.

    PubMed

    Piauilino, D C; Bueno, O F A; Tufik, S; Bittencourt, L R; Santos-Silva, R; Hachul, H; Gorenstein, C; Pompéia, S

    2010-05-01

    The Prospective and Retrospective Memory Questionnaire (PRMQ) has been shown to have acceptable reliability and factorial, predictive, and concurrent validity. However, the PRMQ has never been administered to a probability sample survey representative of all ages in adulthood, nor have previous studies controlled for factors that are known to influence metamemory, such as affective status. Here, the PRMQ was applied in a survey adopting a probabilistic three-stage cluster sample representative of the population of Sao Paulo, Brazil, according to gender, age (20-80 years), and economic status (n=1042). After excluding participants who had conditions that impair memory (depression, anxiety, used psychotropics, and/or had neurological/psychiatric disorders), in the remaining 664 individuals we (a) used confirmatory factor analyses to test competing models of the latent structure of the PRMQ, and (b) studied effects of gender, age, schooling, and economic status on prospective and retrospective memory complaints. The model with the best fit confirmed the same tripartite structure (general memory factor and two orthogonal prospective and retrospective memory factors) previously reported. Women complained more of general memory slips, especially those in the first 5 years after menopause, and there were more complaints of prospective than retrospective memory, except in participants with lower family income. PMID:20408038

  13. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order

    PubMed Central

    Diederich, Adele; Oswald, Peter

    2014-01-01

    A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered—the attention time—influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time with different variances are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case, the model predicts a probability p0 > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions also hold for the widely applied Wiener process. PMID:25249963

  14. Enhancement of the low resolution image quality using randomly sampled data for multi-slice MR imaging

    PubMed Central

    Pang, Yong; Yu, Baiying

    2014-01-01

Low resolution images are often acquired in in vivo MR applications involving large field-of-view (FOV) and high-speed imaging, such as whole-body MRI screening and functional MRI. In this work, we investigate a multi-slice imaging strategy for acquiring low resolution images by using compressed sensing (CS) MRI to enhance the image quality without increasing the acquisition time. In this strategy, low resolution images of all the slices are acquired using a multiple-slice imaging sequence. In addition, extra randomly sampled data in one center slice are acquired by using the CS strategy. These additional randomly sampled data are multiplied by the weighting functions generated from the low resolution full k-space images of the two slices, and then interpolated into the k-space of the other slices. In vivo MR images of the human brain were employed to investigate the feasibility and performance of the proposed method. Quantitative comparison between the conventional low resolution images and those from the proposed method was also performed to demonstrate the advantage of the method. PMID:24834426

  15. Sample size estimation for alternating logistic regressions analysis of multilevel randomized community trials of under-age drinking.

    PubMed

    Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark

    2012-07-01

Under-age drinking is an enormous public health issue in the USA. Evidence that community level structures may impact on under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level, with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than those of individuals residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure. PMID:24347839
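
    The paper's alternating-logistic-regressions formulae are not reproduced in the record above; as a rough stand-in, the sketch below applies the standard design-effect inflation for comparing two prevalences in a community-randomized trial, assuming a single common intracluster correlation. Function names and all numbers are illustrative only.

```python
import math
from scipy.stats import norm

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Communities per arm to detect prevalences p1 vs p2 with m youths
    sampled per community, inflating the usual two-proportion sample size
    by the design effect 1 + (m - 1)*icc."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    n_indiv = (z_a + z_b) ** 2 * var / (p1 - p2) ** 2   # per arm, simple random sampling
    deff = 1 + (m - 1) * icc
    return math.ceil(n_indiv * deff / m)

# e.g. 30% vs 24% past-month drinking, 100 youths per community, ICC = 0.02
print(clusters_per_arm(0.30, 0.24, m=100, icc=0.02))
```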

  16. [Estimation of quantitative proteinuria using a new dipstick in random urine samples].

    PubMed

    Morishita, Yoshiyuki; Kusano, Eiji; Umino, Tetsuo; Nemoto, Jun; Tanba, Kaichirou; Ando, Yasuhiro; Muto, Shigeaki; Asano, Yasushi

    2004-02-01

Proteinuria is quantified for diagnostic and prognostic purposes and to assess responses to therapy. Methods used to assess urinary protein include 24-hour urine collection (24-Up) and determination of the ratio of protein to creatinine concentration (Up/Ucr) in simple voided urine samples (Up/Ucr quantitative method). However, these methods are costly and time-consuming. The Multistix PRO 11 (Bayer Medical Co., Ltd., Tokyo, Japan) is a new urine dipstick that allows rapid measurement of Up/Ucr. Results obtained with the Multistix PRO 11 coincided well with those obtained with the 24-Up method (kappa = 0.68) and the Up/Ucr quantitative method (kappa = 0.75). However, the Multistix PRO 11 did not accurately measure moderate to severe proteinuria (≥500 mg/g Cr). Our findings suggest that the Multistix PRO 11 is useful for the screening, assessment, and follow-up of mild proteinuria. PMID:15058105

  17. Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas

    NASA Technical Reports Server (NTRS)

    Basilevsky, Alexander T.; Head, James W., III

    1995-01-01

The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10(exp 6) sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation

  18. Loneliness and Ethnic Composition of the School Class: A Nationally Random Sample of Adolescents.

    PubMed

    Madsen, Katrine Rich; Damsgaard, Mogens Trab; Rubin, Mark; Jervelund, Signe Smith; Lasgaard, Mathias; Walsh, Sophie; Stevens, Gonneke G W J M; Holstein, Bjørn E

    2016-07-01

    Loneliness is a public health concern that increases the risk for several health, behavioral and academic problems among adolescents. Some studies have suggested that adolescents with an ethnic minority background have a higher risk for loneliness than adolescents from the majority population. The increasing numbers of migrant youth around the world mean growing numbers of heterogeneous school environments in many countries. Even though adolescents spend a substantial amount of time at school, there is currently very little non-U.S. research that has examined the importance of the ethnic composition of school classes for loneliness in adolescence. The present research aimed to address this gap by exploring the association between loneliness and three dimensions of the ethnic composition in the school class: (1) membership of ethnic majority in the school class, (2) the size of own ethnic group in the school class, and (3) the ethnic diversity of the school class. We used data from the Danish 2014 Health Behaviour in School-aged Children survey: a nationally representative sample of 4383 (51.2 % girls) 11-15-year-olds. Multilevel logistic regression analyses revealed that adolescents who did not belong to the ethnic majority in the school class had increased odds for loneliness compared to adolescents that belonged to the ethnic majority. Furthermore, having more same-ethnic classmates lowered the odds for loneliness. We did not find any statistically significant association between the ethnic diversity of the school classes and loneliness. The study adds novel and important findings to how ethnicity in a school class context, as opposed to ethnicity per se, influences adolescents' loneliness. PMID:26861709

  19. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge

    PubMed Central

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. Results HPV prevalence for high-risk types was 62.3% (95%CI: 53.7–70.2) detected by s-DRY, 56.2% (95%CI: 47.6–64.4) by Dr-WET, and 54.6% (95%CI: 46.1–62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5–79.8) for s-FTA, 84.6% (95%CI: 66.5–93.9) for s-DRY, and 76.9% (95%CI: 58.0–89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Conclusion Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 43310942 PMID:26630353
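
    Agreement in studies like the one above is summarized with overall percent agreement and Cohen's kappa on paired test results. A minimal sketch, using hypothetical 2x2 counts rather than the study's data:

```python
import numpy as np

def agreement_and_kappa(table):
    """table: 2x2 array of paired results; rows = method A (pos/neg),
    columns = method B (pos/neg)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                          # overall agreement
    p_exp = (table.sum(1) * table.sum(0)).sum() / n**2   # chance agreement from marginals
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return p_obs, kappa

# hypothetical paired HPV results for two self-sampling methods
p_obs, kappa = agreement_and_kappa([[60, 21], [17, 32]])
print(f"agreement = {p_obs:.1%}, kappa = {kappa:.2f}")
```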

  20. Alcohol consumption and cognitive performance in a random sample of Australian soldiers who served in the Second World War.

    PubMed Central

    Dent, O. F.; Sulway, M. R.; Broe, G. A.; Creasey, H.; Kos, S. C.; Jorm, A. F.; Tennant, C.; Fairley, M. J.

    1997-01-01

OBJECTIVE: To examine the association between the average daily alcohol intake of older men in 1982 and cognitive performance and brain atrophy nine years later. SUBJECTS: Random sample of 209 Australian men living in the community who were veterans of the Second World War. Their mean age in 1982 was 64.3 years. MAIN OUTCOME MEASURES: 18 standard neuropsychological tests measuring a range of intellectual functions. Cortical, sylvian, and vermian atrophy on computed tomography. RESULTS: Compared with Australian men of the same age in previous studies, these men had sustained a high rate of alcohol consumption into old age. However, there was no significant correlation, linear or non-linear, between alcohol consumption in 1982 and results in any of the neuropsychological tests in 1991; neither was alcohol consumption associated with brain atrophy on computed tomography. CONCLUSION: No evidence was found that apparently persistent lifelong consumption of alcohol was related to the cognitive functioning of these men in old age. PMID:9180067

  1. Enhancing positive parent-child interactions and family functioning in a poverty sample: a randomized control trial.

    PubMed

    Negrão, Mariana; Pereira, Mariana; Soares, Isabel; Mesman, Judi

    2014-01-01

This study tested the attachment-based intervention program Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a randomized controlled trial with poor families of toddlers screened for professionals' concerns about the child's caregiving environment. The VIPP-SD is an evidence-based intervention, but it had not yet been tested in the context of poverty. The sample included 43 families with 1- to 4-year-old children: mean age at the pretest was 29 months, and 51% were boys. At the pretest and posttest, mother-child interactions were observed at home, and mothers reported on family functioning. The VIPP-SD proved to be effective in enhancing positive parent-child interactions and positive family relations in a severely deprived context. Results are discussed in terms of implications for support services provided to such poor families in order to reduce intergenerational risk transmission. PMID:24972101

  2. BEHAVIORAL RISK DISPARITIES IN A RANDOM SAMPLE OF SELF-IDENTIFYING GAY AND NON-GAY MALE UNIVERSITY STUDENTS

    PubMed Central

    Rhodes, Scott D.; McCoy, Thomas P.; Wilkin, Aimee M.; Wolfson, Mark

    2013-01-01

    This internet-based study was designed to compare health risk behaviors of gay and non-gay university students from stratified random cross-sectional samples of undergraduate students. Mean age of the 4,167 male participants was 20.5 (±2.7) years. Of these, 206 (4.9%) self-identified as gay and 3,961 (95.1%) self-identified as heterosexual. After adjusting for selected characteristics and clustering within university, gay men had higher odds of reporting: multiple sexual partners; cigarette smoking; methamphetamine use; gamma-hydroxybutyrate (GHB) use; other illicit drug use within the past 30 days and during lifetime; and intimate partner violence (IPV). Understanding the health risk behaviors of gay and heterosexual men is crucial to identifying associated factors and intervening upon them using appropriate and tailored strategies to reduce behavioral risk disparities and improve health outcomes. PMID:19882428

  3. Comparison of risk-based versus random sampling in the monitoring of antimicrobial residues in Danish finishing pigs.

    PubMed

    Alban, Lis; Rugbjerg, Helene; Petersen, Jesper Valentin; Nielsen, Liza Rosenbaum

    2016-06-01

more residue cases with higher cost-effectiveness than random monitoring. Sampling 7500 HR pigs and 5000 LR pigs resulted in the most cost-effective monitoring among the alternative scenarios. The associated costs would increase by 4%. A scenario involving testing of 5000 HR and 5000 LR animals would result in slightly fewer positives, but 17% savings in costs. The advantages of using HPLC LC-MS/MS compared to the bioassay are a fast response and a high sensitivity for all relevant substances used in pigs. The Danish abattoir companies have implemented risk-based monitoring similar to the above as of January 2016. PMID:27237394

  4. Random sample consensus combined with partial least squares regression (RANSAC-PLS) for microbial metabolomics data mining and phenotype improvement.

    PubMed

    Teoh, Shao Thing; Kitamura, Miki; Nakayama, Yasumune; Putri, Sastia; Mukai, Yukio; Fukusaki, Eiichiro

    2016-08-01

In recent years, the advent of high-throughput omics technology has made possible a new class of strain engineering approaches, based on identification of possible gene targets for phenotype improvement from omic-level comparison of different strains or growth conditions. Metabolomics, with its focus on the omic level closest to the phenotype, lends itself naturally to this semi-rational methodology. When a quantitative phenotype such as growth rate under stress is considered, regression modeling using multivariate techniques such as partial least squares (PLS) is often used to identify metabolites correlated with the target phenotype. However, linear modeling techniques such as PLS require a consistent metabolite-phenotype trend across the samples, which may not be the case when outliers or multiple conflicting trends are present in the data. To address this, we proposed a data-mining strategy that utilizes random sample consensus (RANSAC) to select subsets of samples with consistent trends for construction of better regression models. By applying a combination of RANSAC and PLS (RANSAC-PLS) to a dataset from a previous study (gas chromatography/mass spectrometry metabolomics data and 1-butanol tolerance of 19 yeast mutant strains), new metabolites were indicated to be correlated with tolerance within certain subsets of the samples. The relevance of these metabolites to 1-butanol tolerance was then validated from single-deletion strains of corresponding metabolic genes. The results showed that RANSAC-PLS is a promising strategy to identify unique metabolites that provide additional hints for phenotype improvement, which could not be detected by traditional PLS modeling using the entire dataset. PMID:26861498
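
    A minimal sketch of the RANSAC-plus-PLS idea: repeatedly fit a PLS regression on small random subsets of samples, score all samples by residual, and keep the largest consensus (inlier) set for a final refit. It uses scikit-learn's PLSRegression; the subset size, residual threshold, and iteration count are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ransac_pls(X, y, n_components=2, min_samples=8, resid_thresh=0.5,
               n_iter=500, seed=0):
    """X: (n_samples, n_features) metabolite matrix; y: (n_samples,) phenotype.
    Fit PLS on random subsets, keep the consensus set of the model with the
    most inliers, then refit PLS on that subset."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        subset = rng.choice(len(X), size=min_samples, replace=False)
        pls = PLSRegression(n_components=n_components).fit(X[subset], y[subset])
        resid = np.abs(pls.predict(X).ravel() - y)       # residual for every sample
        inliers = resid < resid_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    final = PLSRegression(n_components=n_components).fit(X[best_inliers], y[best_inliers])
    return final, best_inliers
```

    Metabolites correlated with the phenotype within the consensus subset can then be read off the final model's loadings, mirroring the paper's strategy of mining trends that hold only in part of the data.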

  5. A Random Sample

    ERIC Educational Resources Information Center

    Cochran, Wendell

    1976-01-01

    Presented is a review of papers presented at the 25th International Geological Congress held August 16-25, 1976, Sydney, Australia. Topics include precambrian geology, tectonics, biostratigraphy, geochemistry, quaternary geology, engineering geology, planetology, geological education, and stress environments. (SL)

6. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
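
    A compact sketch of the two-step recipe described above (locate low-misfit regions with CMA-ES, then retain an ensemble of near-equivalent models), using the third-party `cma` package's ask/tell interface. The misfit function, threshold, and model dimension are placeholders; a real application would evaluate the PDE forward solver.

```python
import numpy as np
import cma  # third-party package: pip install cma

def misfit(model):
    # placeholder objective; a real one would run the forward model
    return np.sum((model - 1.0) ** 2)

def sample_equivalence_domain(x0, sigma0=0.5, threshold=0.1, max_iter=200):
    """Run CMA-ES and keep every evaluated model whose misfit falls below
    the threshold, as a simple ensemble of near-equivalent solutions."""
    es = cma.CMAEvolutionStrategy(x0, sigma0, {'maxiter': max_iter, 'verbose': -9})
    ensemble = []
    while not es.stop():
        candidates = es.ask()                       # propose a population
        fitnesses = [misfit(np.asarray(c)) for c in candidates]
        es.tell(candidates, fitnesses)              # update the search distribution
        ensemble += [c for c, f in zip(candidates, fitnesses) if f < threshold]
    return np.asarray(ensemble)

models = sample_equivalence_domain(x0=np.zeros(10))
print(len(models), "equivalent models collected")
```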

7. Exploring equivalence domain in non-linear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-02-01

This paper presents a methodology to sample the equivalence domain (ED) in non-linear PDE-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.

  8. The Association between Childhood and Adolescent Sexual Abuse and Proxies for Sexual Risk Behavior: A Random Sample of the General Population of Sweden

    ERIC Educational Resources Information Center

    Steel, Jennifer L.; Herlitz, Claes A.

    2005-01-01

    Objective: Several studies with small and ''high risk'' samples have demonstrated that a history of childhood or adolescent sexual abuse (CASA) is associated with sexual risk behaviors (SRBs). However, few studies with large random samples from the general population have specifically examined the relationship between CASA and SRBs with a…

  9. 454 Pyrosequencing Analysis on Faecal Samples from a Randomized DBPC Trial of Colicky Infants Treated with Lactobacillus reuteri DSM 17938

    PubMed Central

    Roos, Stefan; Dicksved, Johan; Tarasco, Valentina; Locatelli, Emanuela; Ricceri, Fulvio; Grandin, Ulf; Savino, Francesco

    2013-01-01

Objective To analyze the global microbial composition, using large-scale DNA sequencing of 16 S rRNA genes, in faecal samples from colicky infants given L. reuteri DSM 17938 or placebo. Methods Twenty-nine colicky infants (age 10–60 days) were enrolled and randomly assigned to receive either Lactobacillus reuteri (10⁸ cfu) or a placebo once daily for 21 days. Responders were defined as subjects with a decrease of 50% in daily crying time at day 21 compared with the starting point. The microbiota of faecal samples from day 1 and day 21 were analyzed using 454 pyrosequencing. The primers Bakt_341F and Bakt_805R, complemented with 454 adapters and sample-specific barcodes, were used for PCR amplification of the 16 S rRNA genes. The structure of the data was explored by using permutational multivariate analysis of variance, and the effects of different variables were visualized with ordination analysis. Results The infants’ faecal microbiota were composed of Proteobacteria, Firmicutes, Actinobacteria and Bacteroidetes as the four main phyla. The composition of the microbiota in infants with colic had very high inter-individual variability, with Firmicutes/Bacteroidetes ratios varying from 4000 to 0.025. On an individual basis, the microbiota was, however, relatively stable over time. Treatment with L. reuteri DSM 17938 did not change the global composition of the microbiota, but when comparing responders with non-responders, the responders had an increased relative abundance of the phylum Bacteroidetes and the genus Bacteroides at day 21 compared with day 0. Furthermore, the phyla composition of the infants at day 21 could be divided into three enterotype groups, dominated by Firmicutes, Bacteroidetes, and Actinobacteria, respectively. Conclusion L. reuteri DSM 17938 did not affect the global composition of the microbiota. However, the increase of Bacteroidetes in the responder infants indicated that a decrease in colicky symptoms was linked to changes of the microbiota.

  10. Small sample performance of bias-corrected sandwich estimators for cluster-randomized trials with binary outcomes.

    PubMed

    Li, Peng; Redden, David T

    2015-01-30

The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analyses of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and it is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and the minimum total number of clusters one needs using the t-test and KC correction for CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates in small samples, the GEE approach is recommended in CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
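
    The paper's exact power formula is not given in the record above; the sketch below is a generic stand-in: a design-effect variance for a two-arm CRT with binary outcome, tested against a t reference distribution on 2k − 2 degrees of freedom and iterated until a target power is reached. All numbers are illustrative.

```python
import math
from scipy.stats import t as t_dist

def power_crt_binary(k_per_arm, m, p1, p2, icc, alpha=0.05):
    """Approximate power of a two-arm CRT (k clusters per arm, m subjects
    per cluster) using a shifted-t approximation with 2k - 2 df."""
    deff = 1 + (m - 1) * icc
    se = math.sqrt(deff * (p1 * (1 - p1) + p2 * (1 - p2)) / (k_per_arm * m))
    df = 2 * k_per_arm - 2
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    ncp = abs(p1 - p2) / se
    return 1 - t_dist.cdf(t_crit - ncp, df) + t_dist.cdf(-t_crit - ncp, df)

def min_clusters_per_arm(m, p1, p2, icc, target=0.80):
    k = 2
    while power_crt_binary(k, m, p1, p2, icc) < target:
        k += 1
    return k

print(min_clusters_per_arm(m=50, p1=0.20, p2=0.12, icc=0.05))
```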

  11. A Cross-Sectional, Randomized Cluster Sample Survey of Household Vulnerability to Extreme Heat among Slum Dwellers in Ahmedabad, India

    PubMed Central

    Tran, Kathy V.; Azhar, Gulrez S.; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-01-01

Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile. PMID:23778061

  12. Mental health impact of the 2010 Haiti earthquake on the Miami Haitian population: A random-sample survey

    PubMed Central

    Messiah, Antoine; Acuna, Juan M; Castro, Grettel; de la Vega, Pura Rodríguez; Vaiva, Guillaume; Shultz, James; Neria, Yuval; De La Rosa, Mario

    2015-01-01

    This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2–3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of posttraumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as “direct” if the interviewee was in Haiti during the earthquake. Exposure was classified as “indirect” if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as “lower” exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake. PMID:26753105

  13. Tobacco Smoking Surveillance: Is Quota Sampling an Efficient Tool for Monitoring National Trends? A Comparison with a Random Cross-Sectional Survey

    PubMed Central

    Guignard, Romain; Wilquin, Jean-Louis; Richard, Jean-Baptiste; Beck, François

    2013-01-01

Objectives It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. Design / Outcome Measures In 2010, current and daily tobacco smoking prevalences obtained in a quota survey on 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs “mobile-only”), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the needed number of calls to interview “hard-to-reach” people on the prevalence found. Results Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% in 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to the smoking status after adjustment for sociodemographic characteristics. Conclusion Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations. PMID:24194924
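
    A toy illustration of the pooled analysis described above: simulate individual records from the two surveys, then fit a logistic regression of smoking status on survey type and telephone equipment. It uses statsmodels; all coefficients and sample sizes are synthetic, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

# synthetic pooled individual-level data: smoker (0/1), survey type, phone equipment
rng = np.random.default_rng(3)
n = 5000
quota = rng.integers(0, 2, n)          # 1 = quota survey, 0 = random survey
mobile_only = rng.integers(0, 2, n)    # 1 = mobile-only household
logit = -0.7 - 0.17 * quota + 0.25 * mobile_only
smoker = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([quota, mobile_only]))
fit = sm.Logit(smoker, X).fit(disp=0)
print(fit.params)  # [const, survey-mode effect, phone-equipment effect]
```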

  14. Rationale, design, methodology and sample characteristics for the family partners for health study: a cluster randomized controlled study

    PubMed Central

    2012-01-01

Background Young children who are overweight are at increased risk of becoming obese and developing type 2 diabetes and cardiovascular disease later in life. Therefore, early intervention is critical. This paper describes the rationale, design, methodology, and sample characteristics of a 5-year cluster randomized controlled trial being conducted in eight elementary schools in rural North Carolina, United States. Methods/Design The first aim of the trial is to examine the effects of a two-phased intervention on weight status, adiposity, nutrition and exercise health behaviors, and self-efficacy in overweight or obese 2nd, 3rd, and 4th grade children and their overweight or obese parents. The primary outcome in children is stabilization of BMI percentile trajectory from baseline to 18 months. The primary outcome in parents is a decrease in BMI from baseline to 18 months. Secondary outcomes for both children and parents include adiposity, nutrition and exercise health behaviors, and self-efficacy from baseline to 18 months. A secondary aim of the trial is to examine, in the experimental group, the relationships between parents' and children's changes in weight status, adiposity, nutrition and exercise health behaviors, and self-efficacy. An exploratory aim is to determine whether African American, Hispanic, and non-Hispanic white children and parents in the experimental group benefit differently from the intervention in weight status, adiposity, health behaviors, and self-efficacy. A total of 358 African American, non-Hispanic white, and bilingual Hispanic children with a BMI ≥ 85th percentile and 358 parents with a BMI ≥ 25 kg/m2 have been inducted over 3 1/2 years and randomized by cohort to either an experimental or a wait-listed control group. The experimental group receives a 12-week intensive intervention of nutrition and exercise education, coping skills training and exercise (Phase I), 9 months of continued monthly contact (Phase II) and then 6 months

  15. A therapeutic application of the experience sampling method in the treatment of depression: a randomized controlled trial.

    PubMed

    Kramer, Ingrid; Simons, Claudia J P; Hartmann, Jessica A; Menne-Lothmann, Claudia; Viechtbauer, Wolfgang; Peeters, Frenk; Schruers, Koen; van Bemmel, Alex L; Myin-Germeys, Inez; Delespaul, Philippe; van Os, Jim; Wichers, Marieke

    2014-02-01

In depression, the ability to experience daily life positive affect predicts recovery and reduces relapse rates. Interventions based on the experience sampling method (ESM-I) are ideally suited to provide insight into personal, contextualized patterns of positive affect. The aim of this study was to examine whether add-on ESM-derived feedback on personalized patterns of positive affect is feasible and useful to patients, and results in a reduction of depressive symptomatology. Depressed outpatients (n=102) receiving pharmacological treatment participated in a randomized controlled trial with three arms: an experimental group receiving add-on ESM-derived feedback, a pseudo-experimental group participating in ESM but receiving no feedback, and a control group. The experimental group participated in an ESM procedure (three days per week over a 6-week period) using a palmtop. This group received weekly standardized feedback on personalized patterns of positive affect. Hamilton Depression Rating Scale - 17 (HDRS) and Inventory of Depressive Symptoms (IDS) scores were obtained before and after the intervention. During a 6-month follow-up period, five HDRS and IDS assessments were completed. Add-on ESM-derived feedback resulted in a significant and clinically relevant stronger decrease in HDRS score relative to the control group (p<0.01; -5.5 point reduction in HDRS at 6 months). Compared to the pseudo-experimental group, a clinically relevant decrease in HDRS score was apparent at 6 months (B=-3.6, p=0.053). Self-reported depressive complaints (IDS) yielded the same pattern over time. The use of ESM-I was deemed acceptable and the provided feedback easy to understand. Patients attempted to apply suggestions from ESM-derived feedback to daily life. These data suggest that the efficacy of the traditional, passive pharmacological approach to the treatment of major depression can be enhanced by using person-tailored daily life information regarding positive affect. PMID:24497255

  16. Differentiating intraprofessional attitudes toward paradigms in health care delivery among chiropractic factions: results from a randomly sampled survey

    PubMed Central

    2014-01-01

Background As health care has increased in complexity and health care teams have been offered as a solution, so too is there an increased need for stronger interprofessional collaboration. However the intraprofessional factions that exist within every profession challenge interprofessional communication through contrary paradigms. As a contender in the conservative spinal health care market, factions within chiropractic that result in unorthodox practice behaviours may compromise interprofessional relations and that profession’s progress toward institutionalization. The purpose of this investigation was to quantify the professional stratification among Canadian chiropractic practitioners and evaluate the practice perceptions of those factions. Methods A stratified random sample of 740 Canadian chiropractors was surveyed to determine faction membership and how professional stratification could be related to views that could be considered unorthodox to current evidence-based care and guidelines. Stratification in practice behaviours is a stated concern of mainstream medicine when considering interprofessional referrals. Results Of 740 deliverable questionnaires, 503 were returned for a response rate of 68%. Less than 20% of chiropractors (18.8%) were aligned with a predefined unorthodox perspective of the conditions they treat. Prediction models suggest that unorthodox perceptions of health practice related to treatment choices, x-ray use and vaccinations were strongly associated with unorthodox group membership (χ² = 13.4, p = 0.0002). Conclusion Chiropractors holding unorthodox views may be identified based on response to specific beliefs that appear to align with unorthodox health practices. Despite continued concerns by mainstream medicine, only a minority of the profession has retained a perspective in contrast to current scientific paradigms. Understanding the profession’s factions is important to the anticipation of care delivery when considering

  17. Testing how voluntary participation requirements in an environmental study affect the planned random sample design outcomes: implications for the predictions of values and their uncertainty.

    NASA Astrophysics Data System (ADS)

    Ander, Louise; Lark, Murray; Smedley, Pauline; Watts, Michael; Hamilton, Elliott; Fletcher, Tony; Crabbe, Helen; Close, Rebecca; Studden, Mike; Leonardi, Giovanni

    2015-04-01

    Random sampling design is optimal in order to be able to assess outcomes, such as the mean of a given variable across an area. However, this optimal sampling design may be compromised to an unknown extent by unavoidable real-world factors: the extent to which the study design can still be considered random, and the influence this may have on the choice of appropriate statistical data analysis is examined in this work. We take a study which relied on voluntary participation for the sampling of private water tap chemical composition in England, UK. This study was designed and implemented as a categorical, randomised study. The local geological classes were grouped into 10 types, which were considered to be most important in likely effects on groundwater chemistry (the source of all the tap waters sampled). Locations of the users of private water supplies were made available to the study group from the Local Authority in the area. These were then assigned, based on location, to geological groups 1 to 10 and randomised within each group. However, the permission to collect samples then required active, voluntary participation by householders and thus, unlike many environmental studies, could not always follow the initial sample design. Impediments to participation ranged from 'willing but not available' during the designated sampling period, to a lack of response to requests to sample (assumed to be wholly unwilling or unable to participate). Additionally, a small number of unplanned samples were collected via new participants making themselves known to the sampling teams, during the sampling period. Here we examine the impact this has on the 'random' nature of the resulting data distribution, by comparison with the non-participating known supplies. We consider the implications this has on choice of statistical analysis methods to predict values and uncertainty at un-sampled locations.

  18. Inadequate reporting of research ethics review and informed consent in cluster randomised trials: review of random sample of published trials

    PubMed Central

    McRae, Andrew D; Weijer, Charles; Bennett, Carol; Dixon, Stephanie; Taleban, Julia; Skea, Zoe; Eccles, Martin P; Brehaut, Jamie C; Donner, Allan; Saginur, Raphael; Boruch, Robert F; Grimshaw, Jeremy M

    2011-01-01

    Objectives To investigate the extent to which authors of cluster randomised trials adhered to two basic requirements of the World Medical Association’s Declaration of Helsinki and the International Committee of Medical Journal Editors’ uniform requirements for manuscripts (namely, reporting of research ethics review and informed consent), to determine whether the adequacy of reporting has improved over time, and to identify characteristics of cluster randomised trials associated with reporting of ethics practices. Design Review of a random sample of published cluster randomised trials from an electronic search in Medline. Setting Cluster randomised trials in health research published in English language journals from 2000 to 2008. Study sample 300 cluster randomised trials published in 150 journals. Results 77 (26%, 95% confidence interval 21% to 31%) trials failed to report ethics review. The proportion reporting ethics review increased significantly over time (P<0.001). Trials with data collection interventions at the individual level were more likely to report ethics review than were trials that used routine data sources only (79% (n=151) v 55% (23); P=0.008). Trials that accounted for clustering in the design and analysis were more likely to report ethics review. The median impact factor of the journal of publication was higher for trials that reported ethics review (3.4 v 2.3; P<0.001). 93 (31%, 26% to 36%) trials failed to report consent. Reporting of consent increased significantly over time (P<0.001). Trials with interventions targeting participants at the individual level were more likely to report consent than were trials with interventions targeting the cluster level (87% (90) v 48% (41); P<0.001). Trials with data collection interventions at the individual level were more likely to report consent than were those that used routine data sources only (78% (146) v 29% (11); P<0.001). Conclusions Reporting of research ethics protections in cluster

  19. Cranial Capacity Related to Sex, Rank, and Race in a Stratified Random Sample of 6,325 U.S. Military Personnel.

    ERIC Educational Resources Information Center

    Rushton, J. Philippe

    1992-01-01

    Cranial capacities were calculated from external head measurements reported for a stratified random sample of 6,325 Army personnel measured in 1988. Data suggest that human populations differ in brain size by race and sex. The major source of variation in data was sex; race was second and rank last. (Author/SLD)

  20. Who Really Uses Condoms?: Findings from a Large Internet-Recruited Random Sample of Unmarried Heterosexual College Students in the Southeastern United States

    ERIC Educational Resources Information Center

    Rhodes, Scott D.; McCoy, Thomas; Omli, Morrow R.; Cohen, Gail; Champion, Heather; Durant, Robert H.

    2006-01-01

    Using data collected from an online internet-based assessment, we explored condom use rates and the characteristics of condom users among sexually active, unmarried heterosexual college students within a stratified random sample of 2,645 students from 10 universities in North Carolina. Of 1,417 students who fit the inclusion criteria, 39% were…

  1. From Planning to Implementation: An Examination of Changes in the Research Design, Sample Size, and Precision of Group Randomized Trials Launched by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica

    2013-01-01

    This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…

  2. The Accuracy of Pass/Fail Decisions in Random and Difficulty-Balanced Domain-Sampling Tests.

    ERIC Educational Resources Information Center

    Schnipke, Deborah L.

    A common practice in some certification fields (e.g., information technology) is to draw items from an item pool randomly and apply a common passing score, regardless of the items administered. Because these tests are commonly used, it is important to determine how accurate the pass/fail decisions are for such tests and whether fairly small,…

  3. Random versus fixed-site sampling when monitoring relative abundance of fishes in headwater streams of the upper Colorado River basin

    USGS Publications Warehouse

    Quist, M.C.; Gerow, K.G.; Bower, M.R.; Hubert, W.A.

    2006-01-01

Native fishes of the upper Colorado River basin (UCRB) have declined in distribution and abundance due to habitat degradation and interactions with nonnative fishes. Consequently, monitoring populations of both native and nonnative fishes is important for conservation of native species. We used data collected from Muddy Creek, Wyoming (2003-2004), to compare sample size estimates using a random and a fixed-site sampling design to monitor changes in catch per unit effort (CPUE) of native bluehead suckers Catostomus discobolus, flannelmouth suckers C. latipinnis, roundtail chub Gila robusta, and speckled dace Rhinichthys osculus, as well as nonnative creek chub Semotilus atromaculatus and white suckers C. commersonii. When one-pass backpack electrofishing was used, detection of 10% or 25% changes in CPUE (fish/100 m) at 60% statistical power required 50-1,000 randomly sampled reaches among species regardless of sampling design. However, use of a fixed-site sampling design with 25-50 reaches greatly enhanced the ability to detect changes in CPUE. The addition of seining did not appreciably reduce required effort. When detection of 25-50% changes in CPUE of native and nonnative fishes is acceptable, we recommend establishment of 25-50 fixed reaches sampled by one-pass electrofishing in Muddy Creek. Because Muddy Creek has habitat and fish assemblages characteristic of other headwater streams in the UCRB, our results are likely to apply to many other streams in the basin. © Copyright by the American Fisheries Society 2006.
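
    A Monte-Carlo sketch of why a fixed-site design such as the one above needs fewer reaches: revisiting the same reaches turns the year-to-year comparison into a paired test that cancels the between-reach variance component. The variance components, effect size, and alpha level here are hypothetical, not estimates from Muddy Creek.

```python
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

def detection_power(n_reaches, pct_change=0.25, mu=20.0, sd_between=12.0,
                    sd_within=6.0, alpha=0.10, n_sim=2000, seed=1):
    """Power to detect a pct_change drop in mean CPUE.
    Random design: new reaches each year (independent t-test).
    Fixed design: same reaches revisited (paired t-test)."""
    rng = np.random.default_rng(seed)
    hits_rand = hits_fixed = 0
    for _ in range(n_sim):
        # random design: two independent sets of reaches
        y1 = mu + rng.normal(0, sd_between, n_reaches) + rng.normal(0, sd_within, n_reaches)
        y2 = mu * (1 - pct_change) + rng.normal(0, sd_between, n_reaches) \
             + rng.normal(0, sd_within, n_reaches)
        hits_rand += ttest_ind(y1, y2).pvalue < alpha
        # fixed design: the reach effect is shared across years
        reach = rng.normal(0, sd_between, n_reaches)
        f1 = mu + reach + rng.normal(0, sd_within, n_reaches)
        f2 = mu * (1 - pct_change) + reach + rng.normal(0, sd_within, n_reaches)
        hits_fixed += ttest_rel(f1, f2).pvalue < alpha
    return hits_rand / n_sim, hits_fixed / n_sim

print(detection_power(n_reaches=30))  # (random-design power, fixed-design power)
```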

  4. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    PubMed

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly reduce sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the move of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments. PMID:19548709

  5. Differentiating emotions across contexts: comparing adults with and without social anxiety disorder using random, social interaction, and daily experience sampling.

    PubMed

    Kashdan, Todd B; Farmer, Antonina S

    2014-06-01

The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or by comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning. PMID:24512246
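
    Emotion differentiation in studies like this one is commonly indexed by an intraclass correlation across emotion items over repeated assessments (a higher ICC means emotions are reported more uniformly, i.e., less differentiation). Below is a hand-rolled ICC(3,1) on a synthetic moments-by-emotions matrix; this is a generic sketch, not the authors' exact computation.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1) for an (n moments x k emotion items) matrix of intensity
    ratings, from two-way ANOVA mean squares (Shrout & Fleiss)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    sse = ((ratings - ratings.mean(1, keepdims=True)
                    - ratings.mean(0, keepdims=True) + grand) ** 2).sum()
    ms_err = sse / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# hypothetical: 10 random prompts x 4 negative emotions (sad, anxious, angry, ashamed)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(10, 4)).astype(float)
print(round(icc_3_1(ratings), 2))
```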

  6. Policies pertaining to complementary and alternative medical therapies in a random sample of 39 academic health centers.

    PubMed

    Cohen, Michael H; Sandler, Lynne; Hrbek, Andrea; Davis, Roger B; Eisenberg, David M

    2005-01-01

This research documents policies in 39 randomly selected academic medical centers integrating complementary and alternative medical (CAM) services into conventional care. Twenty-three offered CAM services, most commonly acupuncture, massage, dietary supplements, mind-body therapies, and music therapy. None had written policies concerning credentialing practices or malpractice liability. Only 10 reported a written policy governing the use of dietary supplements, although three sold supplements in inpatient formularies, one in the psychiatry department, and five in outpatient pharmacies. Thus, few academic medical centers have sufficiently integrated CAM services into conventional care by developing consensus written policies governing credentialing, malpractice liability, and dietary supplement use. PMID:15712764

  7. Effects of music therapy on pain responses induced by blood sampling in premature infants: A randomized cross-over trial

    PubMed Central

    Shabani, Fidan; Nayeri, Nahid Dehghan; Karimi, Roghiyeh; Zarei, Khadijeh; Chehrazi, Mohammad

    2016-01-01

Background: Premature infants are subjected to many painful procedures during care and treatment. The aim of this study was to assess the effect of music therapy on physiological and behavioral pain responses of premature infants during and after blood sampling. Materials and Methods: This study was a cross-over clinical trial conducted on 20 infants in a hospital affiliated to Tehran University of Medical Sciences for a 5-month period in 2011. In the experimental group, Transitions music was played from 5 min before until 10 min after blood sampling. The infants’ facial expressions and physiological measures were recorded from 10 min before until 10 min after sampling. All steps and measurements, except music therapy, were the same for the control group. Data were analyzed using SAS and SPSS software through analysis of variance (ANOVA) and Chi-square tests. Results: There were significant differences between the experimental and control groups in heart rate during needle extraction (P = 0.022) and in the first 5 min after sampling (P = 0.005). Considering the infant's sleep–wake state, the difference was significant in the second 5 min before sampling (P = 0.044), during injection of the needle (P = 0.045), in the first 5 min after sampling (P = 0.002), and in the second 5 min after sampling (P = 0.005). There were significant differences in infants’ facial expressions of pain in the first 5 min after sampling (P = 0.001). Conclusions: Music therapy reduces the physiological and behavioral responses of pain during and after blood sampling. PMID:27563323

  8. Genetically predicted body mass index and Alzheimer's disease-related phenotypes in three large samples: Mendelian randomization analyses.

    PubMed

    Mukherjee, Shubhabrata; Walter, Stefan; Kauwe, John S K; Saykin, Andrew J; Bennett, David A; Larson, Eric B; Crane, Paul K; Glymour, M Maria

    2015-12-01

    Observational research shows that higher body mass index (BMI) increases Alzheimer's disease (AD) risk, but it is unclear whether this association is causal. We applied genetic variants that predict BMI in Mendelian randomization analyses, an approach that is not biased by reverse causation or confounding, to evaluate whether higher BMI increases AD risk. We evaluated individual-level data from the AD Genetics Consortium (ADGC: 10,079 AD cases and 9613 controls), the Health and Retirement Study (HRS: 8403 participants with algorithm-predicted dementia status), and published associations from the Genetic and Environmental Risk for AD consortium (GERAD1: 3177 AD cases and 7277 controls). No evidence from individual single-nucleotide polymorphisms or polygenic scores indicated BMI increased AD risk. Mendelian randomization effect estimates per BMI point (95% confidence intervals) were as follows: ADGC, odds ratio (OR) = 0.95 (0.90-1.01); HRS, OR = 1.00 (0.75-1.32); GERAD1, OR = 0.96 (0.87-1.07). One subscore (cellular processes not otherwise specified) unexpectedly predicted lower AD risk. PMID:26079416
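
    A minimal sketch of the core Mendelian-randomization estimator implied above: combine per-SNP Wald ratios (SNP-outcome effect divided by SNP-exposure effect) with inverse-variance weights. The summary statistics in the example are invented for illustration, not values from ADGC, HRS, or GERAD1.

```python
import numpy as np

def mr_ivw(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted MR estimate: per-SNP Wald ratios
    beta_outcome/beta_exposure combined with weights 1/var(ratio),
    using the common first-order approximation."""
    bx, by, se = map(np.asarray, (beta_exposure, beta_outcome, se_outcome))
    ratio = by / bx
    w = (bx / se) ** 2                 # first-order inverse variance of each ratio
    est = np.sum(w * ratio) / np.sum(w)
    est_se = 1.0 / np.sqrt(np.sum(w))
    return est, est_se

# hypothetical summary stats for a handful of BMI-associated SNPs
bx = [0.08, 0.05, 0.11, 0.06]        # SNP -> BMI (per allele)
by = [-0.004, 0.002, -0.006, 0.001]  # SNP -> AD log-odds
se = [0.010, 0.012, 0.009, 0.011]
est, est_se = mr_ivw(bx, by, se)
print(f"log-OR per BMI unit: {est:.3f} (SE {est_se:.3f})")
```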

  9. Stochastic sampled data robust stabilisation of T-S fuzzy neutral systems with randomly occurring uncertainties and time-varying delays

    NASA Astrophysics Data System (ADS)

    Rakkiyappan, R.; Chandrasekar, A.; Lakshmanan, S.

    2016-07-01

This paper is concerned with the stochastic sampled-data robust stabilisation of T-S fuzzy neutral systems with randomly occurring uncertainties and time-varying delays. The sampling periods are assumed to be m in number, with occurrence probabilities that are given constants satisfying a Bernoulli distribution. By introducing an improved Lyapunov-Krasovskii functional with new triple integral terms and by combining both the convex combination technique and the reciprocal convex technique, delay-dependent robust stability criteria are obtained in terms of linear matrix inequalities. These linear matrix inequalities can be easily solved by using standard convex optimisation algorithms, and the designed stochastic sampled-data fuzzy controller gain can then be obtained. Finally, three numerical examples are given to illustrate the effectiveness of the proposed methods.
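
    The paper's delay-dependent LMIs are not reproduced in the record above; the sketch below shows only the basic pattern of solving such criteria numerically: a Lyapunov-inequality feasibility problem posed as an LMI in cvxpy (which ships with an SDP-capable solver). The system matrix is a toy example, not one of the paper's numerical cases.

```python
import numpy as np
import cvxpy as cp

def lyapunov_lmi_feasible(A, eps=1e-6):
    """Feasibility of P > 0, A^T P + P A < 0 (continuous-time stability),
    the basic building block of LMI-based stability criteria."""
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status == cp.OPTIMAL, P.value

A = np.array([[-2.0, 1.0],
              [0.0, -1.5]])
feasible, P = lyapunov_lmi_feasible(A)
print("stable:", feasible)
```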

  10. Aerobic physical activity and resistance training: an application of the theory of planned behavior among adults with type 2 diabetes in a random, national sample of Canadians

    PubMed Central

    Plotnikoff, Ronald C; Courneya, Kerry S; Trinh, Linda; Karunamuni, Nandini; Sigal, Ronald J

    2008-01-01

    Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random national sample which was created by generating a random list of household phone numbers. The list was proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results TPB explained 10% and 8% of the variance respectively for aerobic PA and resistance training; and accounted for 39% and 45% of the variance respectively for aerobic PA and resistance training intentions. Conclusion These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB. PMID:19055725

  11. Screening for proteinuria in a rheumatology clinic: comparison of dipstick testing, 24 hour urine quantitative protein, and protein/creatinine ratio in random urine samples.

    PubMed

    Ralston, S H; Caine, N; Richards, I; O'Reilly, D; Sturrock, R D; Capell, H A

    1988-09-01

    Measurements of protein/creatinine ratio in 'spot' urine samples were compared with measurements of 24 hour quantitative proteinuria and side room 'dipstick' testing in 104 samples from 90 patients presenting consecutively to a rheumatology unit. Linear regression analysis showed a highly significant correlation between the random urinary protein/creatinine ratio and total protein excretion in 24 hour urine samples (r = 0.92, p < 0.001, y = 6.55x + 0.04). Although an approximation of 24 hour urinary protein excretion could have been made from the regression line: 24 hour urine protein = 6.55 × protein/creatinine ratio + 0.04 (g/l), there was a wide scatter of values, particularly in patients with >1 g/24 h urinary protein excretion. Nevertheless, significant proteinuria (>300 mg/24 h) could have been confirmed or excluded with a sensitivity and specificity of 97% by adopting random protein/creatinine values of <0.04 as 'normal'. Specificity and sensitivity could have been increased to 100%, however, by excluding patients with values lying between 0.01 and 0.10, as all the false negatives (n = 3) and false positives (n = 3) lay within this range. In comparison, dipstick testing, although 100% sensitive, had a poor specificity due to the high false positive rate (40/83; 48%) in patients with 1+ to 3+ readings. Assessment of random urinary protein/creatinine ratio may obviate the need for 24 hour urine collections in the initial assessment of suspected proteinuria. A wider application of this technique seems indicated in view of the obvious advantages in terms of cost, time, and patient convenience. PMID:3263087
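
    As a worked illustration of the screening rule above, the sketch below encodes the reported regression line and cutoffs; the constants are taken from the abstract, while the function and the sample ratios are hypothetical.

```python
# Sketch of the abstract's screening rule (illustrative values only).
SLOPE, INTERCEPT = 6.55, 0.04   # regression line reported in the abstract
NORMAL_CUTOFF = 0.04            # protein/creatinine ratio treated as 'normal'
GREY_ZONE = (0.01, 0.10)        # range containing all false positives/negatives

def screen(pc_ratio: float) -> str:
    """Classify a random-urine protein/creatinine ratio."""
    predicted_24h = SLOPE * pc_ratio + INTERCEPT  # approximate 24 h protein (g/l)
    if GREY_ZONE[0] <= pc_ratio <= GREY_ZONE[1]:
        verdict = "indeterminate: confirm with 24 h collection"
    elif pc_ratio < NORMAL_CUTOFF:
        verdict = "normal"
    else:
        verdict = "significant proteinuria likely"
    return f"ratio={pc_ratio:.3f} -> est. {predicted_24h:.2f} g/24 h ({verdict})"

for r in (0.005, 0.05, 0.30):
    print(screen(r))
```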

  12. Diagnostic and treatment methods used by chiropractors: A random sample survey of Canada’s English-speaking provinces

    PubMed Central

    Puhl, Aaron A.; Reinhart, Christine J; Injeyan, H. Stephen

    2015-01-01

    Objective: It is important to understand how chiropractors practice beyond their formal education. The objective of this analysis was to assess the diagnostic and treatment methods used by chiropractors in English-speaking Canadian provinces. Methods: A questionnaire was created that examined practice patterns amongst chiropractors. This was sent by mail to 749 chiropractors, randomly selected and stratified proportionally across the nine English-speaking Canadian provinces. Participation was voluntary and anonymous. Data were entered into an Excel spreadsheet, and descriptive statistics were calculated. Results: The response rate was 68.0%. Almost all (95.1%) of respondents reported performing differential diagnosis procedures with their new patients; most commonly orthopaedic testing, palpation, history taking, range of motion testing and neurological examination. Palpation and painful joint findings were the most commonly used methods to determine the appropriate joint to apply manipulation. The most common treatment methods were manual joint manipulation/mobilization, stretching and exercise, posture/ergonomic advice and soft-tissue therapies. Conclusions: Differential diagnosis is a standard part of the assessment of new chiropractic patients in English-speaking Canadian provinces and the most common methods used to determine the site to apply manipulation are consistent with current scientific literature. Patients are treated with a combination of manual and/or manipulative interventions directed towards the joints and/or soft-tissues, as well as exercise instruction and postural/ergonomic advice. PMID:26500362

  13. [Position statement. Protein/creatinine in a randomly obtained urine sample in the diagnosis of proteinuria in pregnant patients with arterial hypertension].

    PubMed

    2012-01-01

    Leaños Miranda and collaborators reported that the protein/creatinine ratio measured in a single random urine sample is a reliable indicator of significant proteinuria and may reasonably be used as an alternative to the 24-hour urine collection method as a diagnostic criterion for urinary protein, as well as a criterion for identifying disease severity. This leads us to present this result of the investigation as a position statement on the care of pregnant women with hypertension. PMID:23282273

  14. A novel approach to non-biased systematic random sampling: A stereologic estimate of Purkinje cells in the human cerebellum

    PubMed Central

    Agashiwala, Rajiv M.; Louis, Elan D.; Hof, Patrick R.; Perl, Daniel P.

    2010-01-01

    Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well. PMID:18725208
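
    The extrapolation step described above reduces to a density-times-volume estimate. A minimal sketch with invented counts and volumes, assuming the sampled probe volume and the reference tissue volume are already measured:

```python
# Stereologic extrapolation: total ≈ (cells counted / volume sampled) × reference volume.
# All numbers below are invented for illustration.
cells_counted = 412             # Purkinje cells counted in the systematic samples
volume_sampled_cm3 = 0.0021     # total volume of the sampled probes
cerebellar_volume_cm3 = 110.0   # measured reference volume of the cerebellar tissue

density = cells_counted / volume_sampled_cm3        # cells per cm^3
total_estimate = density * cerebellar_volume_cm3    # extrapolated total count
print(f"density ≈ {density:,.0f} cells/cm^3, total ≈ {total_estimate / 1e6:.2f} million")
```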

  15. Association of adiponectin with hepatic steatosis: a study of 1,349 subjects in a random population sample

    PubMed Central

    2014-01-01

    Background Objective of the present study was to examine the association between adiponectin and hepatic steatosis, and other biochemical and anthropometric parameters in healthy subjects. Results A total of 1349 subjects (age 18–65 years) underwent ultrasound examination of the liver. Mean adiponectin concentration for the study collective was 11.35 ± 6.28 μg/mL. The following parameters were assessed for their association with adiponectin: body-mass index (BMI); age; sex; arterial blood pressure; nicotine use; alcohol consumption; physical activity; metabolic syndrome; total, low-density lipoprotein (LDL) and high-density lipoprotein (HDL) cholesterol; triglycerides; aspartate aminotransferase (AST); alanine aminotransferase (ALT); γ-glutamyltransferase (GGT); alkaline phosphatase (AP); C-reactive protein (CRP); insulin sensitivity according to the Homeostasis Model Assessment (HOMA); random blood glucose; and the degree of steatosis of the liver. The numerical differences in the variables influencing adiponectin returned in the descriptive analysis were confirmed at bivariate analysis for BMI, ALT, AST, GGT, AP, total and HDL cholesterol, triglycerides, CRP, arterial blood pressure, metabolic syndrome, nicotine use and alcohol consumption. The logistic regression of the multivariate analysis showed that male sex, hepatic steatosis, BMI, metabolic syndrome, tobacco smoking and CRP correlate negatively with adiponectin, while age, moderate alcohol consumption and HDL cholesterol exhibit a positive association. Conclusions The results of the present study confirm the findings of previous research. Adiponectin correlates negatively with cardiometabolic risk factors and is an independent indicator for non-alcoholic fatty liver disease (NAFLD). PMID:24693952

  16. Undergraduate student drinking and related harms at an Australian university: web-based survey of a large random sample

    PubMed Central

    2012-01-01

    Background There is considerable interest in university student hazardous drinking among the media and policy makers. However, there have been no population-based studies in Australia to date. We sought to estimate the prevalence and correlates of hazardous drinking and secondhand effects among undergraduates at a Western Australian university. Method We invited 13,000 randomly selected undergraduate students from a commuter university in Australia to participate in an online survey of university drinking. Responses were received from 7,237 students (56%), who served as participants in this study. Results Ninety percent had consumed alcohol in the last 12 months and 34% met criteria for hazardous drinking (AUDIT score ≥ 8 and more than 6 standard drinks in one sitting in the previous month). Men and Australian/New Zealand residents had significantly increased odds (OR: 2.1, 95% CI: 1.9-2.3 and OR: 5.2, 95% CI: 4.4-6.2, respectively) of being categorised as dependent (AUDIT score 20 or over) compared with women and non-residents. In the previous 4 weeks, 13% of students had been insulted or humiliated and 6% had been pushed, hit or otherwise assaulted by others who were drinking. One percent of respondents had experienced sexual assault in this time period. Conclusions Half of men and over a third of women were drinking at hazardous levels, and a relatively large proportion of students were negatively affected by their own and other students' drinking. There is a need for intervention to reduce hazardous drinking early in university participation. Trial registration ACTRN12608000104358 PMID:22248011

  17. Intra individual variability in markers of proteinuria for normal subjects and those with cadmium induced renal dysfunction: interpretation of results from untimed, random urine samples.

    PubMed

    Mason, Howard J; Stevenson, Alison J; Williams, Nerys; Morgan, Michael

    1999-01-01

    The project aimed to help interpretation of urinary protein measurements, namely β2-microglobulin, retinol-binding protein, albumin and total protein, in untimed, random urine samples as indicators of significant changes in renal tubular reabsorption and glomerular permeability in an individual. A standard methodology used in clinical laboratory medicine was applied to calculate the intra-individual biological variation for these analytes. This parameter, in conjunction with a laboratory's analytical variation, allows definition of the uncertainty about a single urine protein measurement, of significant changes above normal variation in serial measurements within an individual, and of a defined level of maximum acceptable analytical imprecision. Repeat urine samples were obtained over a period of one week from a group of cadmium-exposed workers, 90% of whom had long-term tubular proteinuria, and a group of five unexposed volunteers with normal renal function. Dilute samples, defined as having creatinine concentrations below 3 mmol/L, were excluded, as were urines with pH below 5.5 for β2-microglobulin. Samples were analysed twice after randomisation in large batches. There was no evidence of any diurnal variation in the four protein measurements from samples collected between early morning and 16:00 hours. Creatinine or specific gravity correction of urine results for all four proteins only marginally reduced the uncertainty associated with an individual measurement as reflecting the true excretion value. For those subjects with defined tubular proteinuria, variability in retinol-binding protein excretion was less than that for β2-microglobulin. About 30% of the samples had urine pHs of 5.5 or less, where β2-microglobulin degradation occurs. Using our laboratory analytical precision, the minimum changes between serial creatinine-corrected measurements needed to be considered statistically significant (p < 0.05) are 110% for retinol-binding protein, 177% for β2-microglobulin, 70

  18. Random Sampling of Squamate Reptiles in Spanish Natural Reserves Reveals the Presence of Novel Adenoviruses in Lacertids (Family Lacertidae) and Worm Lizards (Amphisbaenia)

    PubMed Central

    Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J.

    2016-01-01

    Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses. PMID:27399970

  19. Random Sampling of Squamate Reptiles in Spanish Natural Reserves Reveals the Presence of Novel Adenoviruses in Lacertids (Family Lacertidae) and Worm Lizards (Amphisbaenia).

    PubMed

    Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J

    2016-01-01

    Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses. PMID:27399970

  20. Characterizing stand-level forest canopy cover and height using Landsat time series, samples of airborne LiDAR, and the Random Forest algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.

    2015-03-01

    Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, it often does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel, and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (mature, young, and both combined) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest stratum in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
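
    As a rough illustration of the modeling step only (not the paper's pipeline or data), the sketch below trains a Random Forest regressor on synthetic pixel features standing in for Landsat-derived predictors, with LiDAR canopy height as the target; all values and hyperparameters are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: per-pixel spectral/time-series features and a
# LiDAR-measured canopy height target (the paper's setup, not its data).
X = rng.normal(size=(5000, 8))   # e.g. Landsat bands plus TSD metrics
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2.0, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f} m")
```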

  1. Association between Spouse/Child Separation and Migration-Related Stress among a Random Sample of Rural-to-Urban Migrants in Wuhan, China

    PubMed Central

    Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang

    2016-01-01

    Background Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at their rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Methods Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18–45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. Results 16.46% of couples were separated from their spouses (spouse separation only), and 25.81% of parents were separated from their children (child separation only). Among participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Conclusion Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors and encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress. PMID:27124768

  2. Application of bimodal distribution to the detection of changes in uranium concentration in drinking water collected by random daytime sampling method from a large water supply zone.

    PubMed

    Garboś, Sławomir; Święcicka, Dorota

    2015-11-01

    The random daytime (RDT) sampling method was used for the first time in the assessment of average weekly exposure to uranium through drinking water in a large water supply zone. The data set of uranium concentrations determined in 106 RDT samples, collected in three runs from the water supply zone in Wroclaw (Poland), cannot be adequately described by a normal or log-normal distribution. Therefore, a numerical method designed for the detection and calculation of bimodal distributions was applied. The two extracted distributions, containing data from the summer season of 2011 and the winter season of 2012 (nI = 72) and from the summer season of 2013 (nII = 34), yielded estimated mean U concentrations in drinking water of 0.947 μg/L and 1.23 μg/L, respectively. As the removal efficiency for uranium in the applied treatment process is negligible, the increase in uranium concentration can be explained by a higher U concentration in the surface-infiltration water used for the production of drinking water. During the summer season of 2013, heavy rains were observed in the Lower Silesia region, causing floods over the territory of the entire region. Fluctuations in uranium concentrations in surface-infiltration water can be attributed to releases of uranium from specific sources: migration from phosphate fertilizers and leaching from mineral deposits. Thus, exposure to uranium through drinking water may increase during extreme rainfall events. The average chronic weekly intakes of uranium through drinking water, estimated on the basis of the central values of the extracted normal distributions, accounted for 3.2% and 4.1% of the tolerable weekly intake. PMID:26143355
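
    The abstract does not specify the numerical bimodality method used; a common stand-in is to compare one- and two-component Gaussian mixture fits by BIC, sketched below on synthetic concentrations centred on the two reported seasonal means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic uranium concentrations (µg/L) mimicking two seasons (invented spreads).
conc = np.concatenate([rng.normal(0.95, 0.10, 72),
                       rng.normal(1.23, 0.12, 34)]).reshape(-1, 1)

gm1 = GaussianMixture(n_components=1, random_state=0).fit(conc)
gm2 = GaussianMixture(n_components=2, random_state=0).fit(conc)
print("BIC, 1 vs 2 components:", gm1.bic(conc), gm2.bic(conc))  # lower is better
print("component means:", gm2.means_.ravel())  # should sit near the two seasonal levels
```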

  3. Impact of an Educational Intervention on Women's Knowledge and Acceptability of Human Papillomavirus Self-Sampling: A Randomized Controlled Trial in Cameroon

    PubMed Central

    Sossauer, Gaëtan; Zbinden, Michel; Tebeu, Pierre-Marie; Fosso, Gisèle K.; Untiet, Sarah; Vassilakos, Pierre; Petignat, Patrick

    2014-01-01

    Objective Human papillomavirus (HPV) self-sampling (Self-HPV) may be used as a primary cervical cancer screening method in a low-resource setting. Our aim was to evaluate whether an educational intervention would improve women's knowledge of and confidence in the Self-HPV method. Method Women aged 25 to 65 years, eligible for cervical cancer screening, were randomly chosen to receive standard information (control group) or standard information followed by an educational intervention (intervention group). Standard information included explanations about what the test detects (HPV), the link between HPV and cervical cancer, and how to perform HPV self-sampling. The educational intervention consisted of a culturally tailored video about HPV, cervical cancer, Self-HPV and its relevance as a screening test. All participants completed a questionnaire that assessed sociodemographic data, women's knowledge about cervical cancer and acceptability of Self-HPV. Results A total of 302 women were enrolled in 4 health care centers in Yaoundé and the surrounding countryside. 301 women (149 in the control group and 152 in the intervention group) completed the full process and were included in the analysis. Participants who received the educational intervention had significantly higher knowledge about HPV and cervical cancer than the control group (p < 0.05), but no significant difference in Self-HPV acceptability or confidence in the method was noticed between the two groups. Conclusion Educational intervention promotes an increase in knowledge about HPV and cervical cancer. Further investigation should be conducted to determine whether this intervention can be sustained beyond the short term and influences screening behavior. Trial Registration International Standard Randomised Controlled Trial Number (ISRCTN) Register ISRCTN78123709 PMID:25333793

  4. Random Selection for Drug Screening

    SciTech Connect

    Center for Human Reliability Studies

    2007-05-01

    Simple random sampling is generally the starting point for a random sampling process. This sampling technique ensures that each individual within a group (population) has an equal chance of being selected. There are a variety of ways to implement random sampling in a practical situation.
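
    A minimal sketch of simple random sampling using Python's standard library; the roster below is hypothetical.

```python
import random

# Every member of the population has the same probability of selection.
population = [f"employee_{i:03d}" for i in range(250)]   # hypothetical roster
selected = random.sample(population, k=10)               # sampling without replacement
print(selected)
```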

  5. A Profile of US-Mexico Border Mobility Among a Stratified Random Sample of Hispanics Living in the El Paso-Juarez Area

    PubMed Central

    Lapeyrouse, L. M.; Morera, O.; Heyman, J. M. C.; Amaya, M. A.; Pingitore, N. E.; Balcazar, H.

    2016-01-01

    Examination of border-specific characteristics such as trans-border mobility and trans-border health service use illuminates the heterogeneity of border Hispanics and may provide greater insight into differential health behaviors and status among these populations. In this study, we create a descriptive profile of the concept of trans-border mobility by exploring the relationship between mobility status and a series of demographic, economic and socio-cultural characteristics among mobile and non-mobile Hispanics living in the El Paso-Juarez border region. Using a two-stage stratified random sampling design, bilingual interviewers collected survey data from border residents (n = 1,002). Findings show that significant economic, cultural, and behavioral differences exist between mobile and non-mobile respondents. While non-mobile respondents were found to have higher socioeconomic status than their mobile counterparts, mobility across the border was found to offer less acculturated and poorer Hispanics access to alternative sources of health care and other services. PMID:21336846

  6. Job strain, work place social support, and cardiovascular disease: a cross-sectional study of a random sample of the Swedish working population.

    PubMed Central

    Johnson, J V; Hall, E M

    1988-01-01

    This cross-sectional study investigates the relationship between the psychosocial work environment and cardiovascular disease (CVD) prevalence in a randomly selected, representative sample of 13,779 Swedish male and female workers. It was found that self-reported psychological job demands, work control, and co-worker social support combined more than multiplicatively in relation to CVD prevalence. An age-adjusted prevalence ratio (PR) of 2.17 (95% CI: 1.32, 3.56) was observed among workers with high demands, low control, and low social support compared to a low-demand, high-control, and high-social-support reference group. PRs of approximately 2.00 were observed in this group after consecutively controlling for the effects of age together with 11 other potential confounding factors. The magnitude of the age-adjusted PRs was greatest for blue collar males. Due to the cross-sectional nature of the study design, causal inferences cannot be made. The limitations of design and measurement are discussed in the context of the methodological weaknesses of the work stress field. PMID:3421392

  7. RANDOM LASSO.

    PubMed

    Wang, Sijian; Nan, Bin; Rosset, Saharon; Zhu, Ji

    2011-03-01

    We propose a computationally intensive method, the random lasso method, for variable selection in linear models. The method consists of two major steps. In step 1, the lasso method is applied to many bootstrap samples, each using a set of randomly selected covariates. A measure of importance is yielded from this step for each covariate. In step 2, a similar procedure to the first step is implemented with the exception that for each bootstrap sample, a subset of covariates is randomly selected with unequal selection probabilities determined by the covariates' importance. Adaptive lasso may be used in the second step with weights determined by the importance measures. The final set of covariates and their coefficients are determined by averaging bootstrap results obtained from step 2. The proposed method alleviates some of the limitations of lasso, elastic-net and related methods noted especially in the context of microarray data analysis: it tends to remove highly correlated variables altogether or select them all, and maintains maximal flexibility in estimating their coefficients, particularly with different signs; the number of selected variables is no longer limited by the sample size; and the resulting prediction accuracy is competitive or superior compared to the alternatives. We illustrate the proposed method by extensive simulation studies. The proposed method is also applied to a Glioblastoma microarray data analysis. PMID:22997542
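
    A compressed sketch of the two-step idea on synthetic data: bootstrap lassos on random covariate subsets, then importance-weighted reselection. The subset size q, bootstrap count B, and penalty alpha are arbitrary choices, and the published method treats the averaging and sign details more carefully.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, q, B, alpha = 100, 20, 8, 200, 0.05
X = rng.normal(size=(n, d))
y = X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n)  # sparse synthetic signal

def bootstrap_coefs(select_probs):
    """B bootstrap lassos, each fit on a random subset of q covariates."""
    coefs = np.zeros((B, d))
    for b in range(B):
        rows = rng.integers(0, n, size=n)                       # bootstrap resample
        cols = rng.choice(d, size=q, replace=False, p=select_probs)
        fit = Lasso(alpha=alpha).fit(X[np.ix_(rows, cols)], y[rows])
        coefs[b, cols] = fit.coef_
    return coefs.mean(axis=0)

# Step 1: uniform covariate selection yields importance measures.
importance = np.abs(bootstrap_coefs(np.full(d, 1.0 / d))) + 1e-12
# Step 2: reselect covariates with probability proportional to importance.
final_coefs = bootstrap_coefs(importance / importance.sum())
print(np.round(final_coefs, 2))  # large entries should flag covariates 0 and 1
```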

  8. Methicillin-Sensitive and Methicillin-Resistant Staphylococcus aureus Nasal Carriage in a Random Sample of Non-Hospitalized Adult Population in Northern Germany

    PubMed Central

    Mehraj, Jaishri; Akmatov, Manas K.; Strömpl, Julia; Gatzemeier, Anja; Layer, Franziska; Werner, Guido; Pieper, Dietmar H.; Medina, Eva; Witte, Wolfgang; Pessler, Frank; Krause, Gérard

    2014-01-01

    Objective The findings from truly randomized community-based studies on Staphylococcus aureus nasal colonization are scarce. Therefore, we examined the point prevalence and risk factors of S. aureus nasal carriage in a non-hospitalized population of Braunschweig, northern Germany. Methods A total of 2026 potential participants were randomly selected through the residents' registration office and invited by mail. They were requested to collect a nasal swab at home and return it by mail. S. aureus was identified by culture and PCR. Logistic regression was used to determine risk factors of S. aureus carriage. Results Among the invitees, 405 individuals agreed to participate and 389 provided complete data that were included in the analysis. The median age of the participants was 49 years (IQR: 39–61) and 61% were female. S. aureus was isolated in 85 (21.9%; 95% CI: 18.0–26.2%) of the samples, five of which were MRSA (1.29%; 95% CI: 0.55–2.98%). In multiple logistic regression, male sex (OR = 3.50; 95% CI: 2.01–6.11) and presence of allergies (OR = 2.43; 95% CI: 1.39–4.24) were found to be associated with S. aureus nasal carriage. Fifty-five different spa types were found, which clustered into nine distinct groups. MRSA belonged to the hospital-associated spa types t032 and t025 (corresponding to MLST CC 22), whereas MSSA spa types varied and mostly belonged to spa-CC 012 (corresponding to MLST CC 30) and spa-CC 084 (corresponding to MLST CC 15). Conclusion This first point-prevalence study of S. aureus in a non-hospitalized population of Germany revealed a prevalence consistent with other European countries and supports previous findings on male sex and allergies as risk factors for S. aureus carriage. The detection of hospital-associated MRSA spa types in the community indicates possible spread of these strains from hospitals into the community. PMID:25251407

  9. Self-help interventions for adjustment disorder problems: a randomized waiting-list controlled study in a sample of burglary victims.

    PubMed

    Bachem, Rahel; Maercker, Andreas

    2016-09-01

    Adjustment disorders (AjD) are among the most frequent mental disorders yet often remain untreated. Their high prevalence, comparatively mild symptom impairment, and transient nature make AjD a promising target for low-threshold self-help interventions. Bibliotherapy represents a potential treatment for AjD problems. This study investigates the effectiveness of a cognitive-behavioral self-help manual specifically directed at alleviating AjD symptoms in a homogeneous sample of burglary victims. Participants with clinical or subclinical AjD symptoms following experience of burglary were randomized to an intervention group (n = 30) or waiting-list control group (n = 24). The new explicit stress response syndrome model for diagnosing AjD was applied. Participants received no therapist support, and assessments took place at baseline, after the one-month intervention, and at three-month follow-up. Based on completer analyses, group-by-time interactions indicated that the intervention group showed more improvement in AjD symptoms of preoccupation and in post-traumatic stress symptoms. Post-intervention between-group effect sizes ranged from Cohen's d = .17 to .67, and the proportion of participants showing reliable change was consistently higher in the intervention group than in the control group. Engagement with the self-help manual was high: 87% of participants had worked through at least half the manual. This is the first published RCT of a bibliotherapeutic self-help intervention for AjD problems. The findings provide evidence that a low-threshold self-help intervention without therapist contact is a feasible and effective treatment for symptoms of AjD. PMID:27299909

  10. Validity of Footprint Analysis to Determine Flatfoot Using Clinical Diagnosis as the Gold Standard in a Random Sample Aged 40 Years and Older

    PubMed Central

    Pita-Fernández, Salvador; González-Martín, Cristina; Seoane-Pillado, Teresa; López-Calviño, Beatriz; Pértega-Díaz, Sonia; Gil-Guillén, Vicente

    2015-01-01

    Background Research is needed to determine the prevalence and variables associated with the diagnosis of flatfoot, and to evaluate the validity of three footprint analysis methods for diagnosing flatfoot, using clinical diagnosis as a benchmark. Methods We conducted a cross-sectional study of a population-based random sample ≥40 years old (n = 1002) in A Coruña, Spain. Anthropometric variables, Charlson’s comorbidity score, and podiatric examination (including measurement of Clarke’s angle, the Chippaux-Smirak index, and the Staheli index) were used for comparison with a clinical diagnosis method using a podoscope. Multivariate regression was performed. Informed patient consent and ethical review approval were obtained. Results Prevalence of flatfoot in the left and right footprint, measured using the podoscope, was 19.0% and 18.9%, respectively. Variables independently associated with flatfoot diagnosis were age (OR 1.07), female gender (OR 3.55) and BMI (OR 1.39). The area under the receiver operating characteristic curve (AUC) showed that Clarke’s angle is highly accurate in predicting flatfoot (AUC 0.94), followed by the Chippaux-Smirak (AUC 0.83) and Staheli (AUC 0.80) indices. Sensitivity values were 89.8% for Clarke’s angle, 94.2% for the Chippaux-Smirak index, and 81.8% for the Staheli index, with respective positive likelihood ratios of 9.7, 2.1, and 2.0. Conclusions Age, gender, and BMI were associated with a flatfoot diagnosis. The indices studied are suitable for diagnosing flatfoot in adults, especially Clarke’s angle, which is highly accurate for flatfoot diagnosis in this population. PMID:25382154

  11. Descriptive analysis of the prevalence of anemia in a randomly selected sample of elderly people living at home: some results of an Italian multicentric study.

    PubMed

    Inelmen, E M; D'Alessio, M; Gatto, M R; Baggio, M B; Jimenez, G; Bizzotto, M G; Enzi, G

    1994-04-01

    We studied hematological indexes (RBC, HB, HT, MCV), serum iron and serum ferritin values in 1784 randomly selected subjects aged 65 and over (725 males and 1059 females) divided into five age groups (65-69, 70-74, 75-79, 80-84, ≥85 years). The subjects were classified as anemic and normochromic according to the criteria for a "geriatric" level of anemia (HB ≤ 12 g/dL in both sexes) as well as "W.H.O." levels for anemia (HB < 13 g/dL in males and < 12 g/dL in females). Macrocytosis (MCV > 100 fl) and low serum ferritin level (≤ 12 ng/dL) were classified according to MCV and serum ferritin values. Mean HB values in males were 14.85 ± 1.33; 14.82 ± 1.40; 14.77 ± 1.43; 14.59 ± 1.47 and 13.83 ± 1.13 in the five age groups (65-69, 70-74, 75-79, 80-84 and ≥85 years) respectively; in females, they were 13.77 ± 1.15; 13.75 ± 1.27; 13.44 ± 1.39; 13.44 ± 1.52 and 13.34 ± 1.61, respectively. There was a low frequency of anemia in the entire sample: 2.9% in males and 9.9% in females according to the "geriatric" level, and 9.4% in males and 8.8% in females according to the "W.H.O." level. There was a higher prevalence of macrocytosis in males (6.3%) than in females (3.3%). We conclude that red cell parameters tend to decrease in aging, and further investigations are needed that exclude persons with existing chronic conditions, and incorporate data on nutritional status. PMID:7918735

  12. Effectiveness of Housing First with Intensive Case Management in an Ethnically Diverse Sample of Homeless Adults with Mental Illness: A Randomized Controlled Trial

    PubMed Central

    Stergiopoulos, Vicky; Gozdzik, Agnes; Misir, Vachan; Skosireva, Anna; Connelly, Jo; Sarang, Aseefa; Whisler, Adam; Hwang, Stephen W.; O’Campo, Patricia; McKenzie, Kwame

    2015-01-01

    Housing First (HF) is being widely disseminated in efforts to end homelessness among homeless adults with psychiatric disabilities. This study evaluates the effectiveness of HF with Intensive Case Management (ICM) among ethnically diverse homeless adults in an urban setting. 378 participants were randomized to HF with ICM or treatment-as-usual (TAU) in Toronto (Canada), and followed for 24 months. Measures of effectiveness included housing stability, physical (EQ5D-VAS) and mental (CSI, GAIN-SS) health, social functioning (MCAS), quality of life (QoLI20), and health service use. Two-thirds of the sample (63%) was from racialized groups and half (50%) were born outside Canada. Over the 24 months of follow-up, HF participants spent a significantly greater percentage of time in stable residences compared to TAU participants (75.1% 95% CI 70.5 to 79.7 vs. 39.3% 95% CI 34.3 to 44.2, respectively). Similarly, community functioning (MCAS) improved significantly from baseline in HF compared to TAU participants (change in mean difference = +1.67 95% CI 0.04 to 3.30). There was a significant reduction in the number of days spent experiencing alcohol problems among the HF compared to TAU participants at 24 months (ratio of rate ratios = 0.47 95% CI 0.22 to 0.99) relative to baseline, a reduction of 53%. Although the number of emergency department visits and days in hospital over 24 months did not differ significantly between HF and TAU participants, fewer HF participants compared to TAU participants had 1 or more hospitalizations during this period (70.4% vs. 81.1%, respectively; P=0.044). Compared to non-racialized HF participants, racialized HF participants saw an increase in the amount of money spent on alcohol (change in mean difference = $112.90 95% CI 5.84 to 219.96) and a reduction in physical community integration (ratio of rate ratios = 0.67 95% CI 0.47 to 0.96) from baseline to 24 months. Secondary analyses found a significant reduction in the number of days

  13. Children’s Quality of Life Based on the KIDSCREEN-27: Child Self-Report, Parent Ratings and Child-Parent Agreement in a Swedish Random Population Sample

    PubMed Central

    Berman, Anne H.; Liu, Bojing; Ullman, Sara; Jadbäck, Isabel; Engström, Karin

    2016-01-01

    Background The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11–16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. Methods A random population sample consisting of 600 children aged 11–16, 100 per age group and one of their parents (N = 1200), were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK and total, continuous scores were evaluated using Bland-Altman plots. Results Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences
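
    For the dichotomous agreement analysis described above, PABAK reduces to 2 × observed agreement - 1. A minimal sketch with invented child-parent classifications (the ordinal PABAK-OS variant used in the paper is more involved):

```python
import numpy as np

# PABAK for dichotomous ratings: 2 * observed agreement - 1.
# Hypothetical child/parent classifications (1 = lower wellbeing) for 12 pairs.
child = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0])
parent = np.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0])

po = np.mean(child == parent)   # observed proportion of agreement
pabak = 2 * po - 1              # prevalence- and bias-adjusted kappa
print(f"Po = {po:.2f}, PABAK = {pabak:.2f}")
```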

  14. Confidence Intervals, Power Calculation, and Sample Size Estimation for the Squared Multiple Correlation Coefficient under the Fixed and Random Regression Models: A Computer Program and Useful Standard Tables.

    ERIC Educational Resources Information Center

    Mendoza, Jorge L.; Stafford, Karen L.

    2001-01-01

    Introduces a computer package written for Mathematica, the purpose of which is to perform a number of difficult iterative functions with respect to the squared multiple correlation coefficient under the fixed and random models. These functions include computation of the confidence interval upper and lower bounds, power calculation, calculation of…

  15. Investigating the Randomness of Numbers

    ERIC Educational Resources Information Center

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  16. Random Selection for Drug Screening

    SciTech Connect

    Center for Human Reliability Studies

    2007-05-01

    Sampling is the process of choosing some members out of a group or population. Probability sampling, or random sampling, is the process of selecting members by chance with a known probability of each individual being chosen.

  17. Effect of sample size on multi-parametric prediction of tissue outcome in acute ischemic stroke using a random forest classifier

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Fiehler, Jens

    2015-03-01

    The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require previous training and, until now, it has been unclear how many subjects are required for this; that question is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from all 22 other patients down to a single other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and the effort required for data acquisition and preparation.
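
    The evaluation metric here is the Dice coefficient between predicted and known infarct masks; a minimal sketch with hypothetical binary volumes:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = infarcted voxel)."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Hypothetical 3D binary lesion masks (the study compares predicted vs. known outcome).
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 16)) < 0.1
pred = truth ^ (rng.random(truth.shape) < 0.02)  # truth with some voxels flipped
print(f"Dice = {dice(pred, truth):.2f}")
```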

  18. Random Walks on Random Graphs

    NASA Astrophysics Data System (ADS)

    Cooper, Colin; Frieze, Alan

    The aim of this article is to discuss some of the notions and applications of random walks on finite graphs, especially as they apply to random graphs. In this section we give some basic definitions, in Section 2 we review applications of random walks in computer science, and in Section 3 we focus on walks in random graphs.
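
    A minimal sketch of the basic object discussed: a simple random walk on an Erdős-Rényi-style random graph G(n, p), where, on its connected component, the walk visits vertices roughly in proportion to their degree. All parameters are arbitrary.

```python
import random

def random_walk(adj, start, steps, rng):
    """Simple random walk: at each step move to a uniformly chosen neighbour."""
    visits = {v: 0 for v in adj}
    v = start
    for _ in range(steps):
        visits[v] += 1
        v = rng.choice(adj[v])
    return visits

rng = random.Random(1)
n, p = 50, 0.15
adj = {v: [] for v in range(n)}
for u in range(n):                      # G(n, p): include each edge independently
    for w in range(u + 1, n):
        if rng.random() < p:
            adj[u].append(w)
            adj[w].append(u)
adj = {v: nb for v, nb in adj.items() if nb}   # drop isolated vertices

visits = random_walk(adj, start=next(iter(adj)), steps=100_000, rng=rng)
busiest = max(visits, key=visits.get)
print(busiest, len(adj[busiest]))  # the most-visited vertex tends to have high degree
```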

  19. The association of health-care use and hepatitis C virus infection in a random sample of urban slum community residents in southern India.

    PubMed

    Marx, Melissa A; Murugavel, K G; Sivaram, Sudha; Balakrishnan, P; Steinhoff, Mark; Anand, S; Thomas, David L; Solomon, Suniti; Celentano, David D

    2003-02-01

    To determine whether health-care use was associated with prevalent hepatitis C virus (HCV) infection in Chennai, India, 1,947 adults from 30 slum communities were randomly selected to be interviewed about parenteral and sexual risks for HCV infection and to provide biological specimens for HCV and sexually transmitted infection (STI) testing. Prevalent HCV infection was detected in 2.4% of non-injection drug using (IDU) participants. Controlling for other associated factors, and excluding IDU, men who used informal health-care providers were five times as likely to be HCV infected as those who did not use informal providers (Adjusted Odds Ratio, AOR = 5.83; 95% confidence interval [CI]: 1.57, 21.6), a finding not detected in women. More research is needed to determine the extent to which HCV infection is associated with reuse of contaminated injection equipment in health-care settings in developing countries. PMID:12641422

  20. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…

  1. Survey: Attitudes Toward Women as School District Administrators. Summary of Responses to a Survey of a Random Sample of Superintendents and School Board Presidents.

    ERIC Educational Resources Information Center

    American Association of School Administrators, Arlington, VA.

    A survey sample of superintendents and school board presidents in districts across the United States, stratified into four groups according to district size, provides data concerning the effects of the attitudes of those responsible for hiring school district administrators on the professional opportunities for women in the field. The two…

  2. Does the "Marriage Benefit" Extend to Partners in Gay and Lesbian Relationships?: Evidence from a Random Sample of Sexually Active Adults

    ERIC Educational Resources Information Center

    Wienke, Chris; Hill, Gretchen J.

    2009-01-01

    Prior research indicates that the married enjoy higher levels of well-being than the unmarried, including unmarried cohabiters. Yet, comparisons of married and unmarried persons routinely exclude partnered gays and lesbians. Using a large probability sample, this study assessed how the well-being of partnered gays and lesbians (282) compares with…

  3. Parent-Child Associations in Pedometer-Determined Physical Activity and Sedentary Behaviour on Weekdays and Weekends in Random Samples of Families in the Czech Republic

    PubMed Central

    Sigmundová, Dagmar; Sigmund, Erik; Vokáčová, Jana; Kopčáková, Jaroslava

    2014-01-01

    This study investigates whether more physically active parents bring up more physically active children and whether parents’ level of physical activity helps children achieve step count recommendations on weekdays and weekends. The participants (388 parents aged 35–45 and their 485 children aged 9–12) were randomly recruited from 21 Czech government-funded primary schools. The participants recorded pedometer step counts for seven days (≥10 h a day) during April–May and September–October of 2013. Logistic regression (Enter method) was used to examine the achievement of the international recommendations of 11,000 steps/day for girls and 13,000 steps/day for boys. The children of fathers and mothers who met the weekend recommendation of 10,000 steps were 5.48 times (95% confidence interval: 1.65, 18.19; p < 0.01) and 3.60 times (95% confidence interval: 1.21, 10.74; p < 0.05), respectively, more likely to achieve the international weekend recommendation than the children of less active parents. The children of mothers who reached the weekday pedometer-based step count recommendation were 4.94 times (95% confidence interval: 1.45, 16.82; p < 0.05) more likely to fulfil the step count recommendation on weekdays than the children of less active mothers. PMID:25026084

  4. Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

    SciTech Connect

    ROMERO,VICENTE J.

    2000-05-04

    In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required when comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine whether they behave as true 95% CI over spectrums of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples, where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
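
    The kind of empirical check described can be sketched as follows: simulate many sampling runs at a known probability and measure how often the classical (Wald) 95% interval actually covers it. The sample sizes, probabilities, and trial counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_coverage(p_true, n_samples, trials=10_000, z=1.96):
    """Fraction of classical (Wald) 95% CIs that actually cover p_true."""
    hits = 0
    for _ in range(trials):
        p_hat = rng.binomial(n_samples, p_true) / n_samples
        half = z * np.sqrt(p_hat * (1.0 - p_hat) / n_samples)
        hits += (p_hat - half) <= p_true <= (p_hat + half)
    return hits / trials

for p_true in (0.001, 0.01, 0.1, 0.5, 0.9, 0.99):
    print(f"p = {p_true:<5} empirical coverage = {wald_coverage(p_true, 500):.3f}")
```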

  5. Gene expression changes caused by the p38 MAPK inhibitor dilmapimod in COPD patients: analysis of blood and sputum samples from a randomized, placebo-controlled clinical trial

    PubMed Central

    Betts, Joanna C; Mayer, Ruth J; Tal-Singer, Ruth; Warnock, Linda; Clayton, Chris; Bates, Stewart; Hoffman, Bryan E; Larminie, Christopher; Singh, Dave

    2015-01-01

    The p38 mitogen-activated protein kinase (MAPK) intracellular signaling pathway responds to a variety of extracellular stimuli, including cytokines, Toll-like receptor agonists, and components of cigarette smoke, to influence the expression of proinflammatory mediators. Activation of p38 MAPK is increased within the lungs of chronic obstructive pulmonary disease (COPD) patients. In clinical trials, treatment of COPD patients with p38 MAPK inhibitors has been shown to reduce the systemic inflammation plasma biomarkers C-reactive protein (CRP) and fibrinogen. As CRP and fibrinogen have been associated with poor clinical outcomes in COPD patients, such as mortality, exacerbation, and hospitalization, we analyzed gene expression data from COPD subjects treated with dilmapimod with the aim of understanding the effects of p38 MAPK inhibition on the inflammatory genome of immune cells within the systemic circulation. Whole blood and induced sputum samples were used to measure mRNA levels by gene array and PCR. Pathway and network analysis identified STAT1, MMP-9, CAV1, and IL-1β as genes regulated by dilmapimod that could also influence fibrinogen levels, while only IL-1β was identified as a gene regulated by dilmapimod that could influence CRP levels. This suggests that p38 MAPK inhibition modulates specific inflammatory pathways, leading to differential effects on CRP and fibrinogen levels in COPD patients. PMID:25692013

  6. Correction of deposit ages for inherited ages of charcoal: implications for sediment dynamics inferred from random sampling of deposits on headwater valley floors

    NASA Astrophysics Data System (ADS)

    Frueh, W. Terry; Lancaster, Stephen T.

    2014-03-01

    Inherited age is defined herein as the difference between times of carbon fixation in a material and deposition of that material within sediments from which it is eventually sampled in order to estimate deposit age via radiocarbon dating. Inheritance generally leads to over-estimation of the age by an unknown amount and therefore represents unquantified bias and uncertainty that could potentially lead to erroneous inferences. Inherited ages in charcoal are likely to be larger, and therefore detectable relative to analytic error, where forests are dominated by longer-lived trees, material is stored for longer periods upslope, and downstream post-fire delivery of that material is dominated by mass movements, such as in the near-coastal mountains of northwestern North America. Inherited age distribution functions were estimated from radiocarbon dating of 126 charcoal pieces from 14 stream-bank exposures of debris-flow deposits, fluvial fines, and fluvial gravels along a headwater stream in the southern Oregon Coast Range, USA. In the region, these 3 facies are representative of the nearly continuous coalescing fan-fill complexes blanketing valley floors of headwater streams where the dominant transport mechanism shifts from debris-flow to fluvial. Within each depositional unit, and for each charcoal piece within that unit, convolution of the calibrated age distribution with that of the youngest piece yielded an inherited age distribution for the unit. Fits to the normalized sums of inherited age distributions for units of like facies provided estimates of facies-specific inherited age distribution functions. Finally, convolution of these distribution functions with calibrated deposit age distributions yielded corrections to published valley-floor deposit ages and residence time distributions from nearby similar sites. Residence time distributions were inferred from the normalized sums of distributions of ~30 deposit ages at each of 4 sites: 2 adjacent valley reaches

  7. Rationale and design of the iPap trial: a randomized controlled trial of home-based HPV self-sampling for improving participation in cervical screening by never- and under-screened women in Australia

    PubMed Central

    2014-01-01

    Background Organized screening based on Pap tests has substantially reduced deaths from cervical cancer in many countries, including Australia. However, the impact of the program depends upon the degree to which women participate. A new method of screening, testing for human papillomavirus (HPV) DNA to detect the virus that causes cervical cancer, has recently become available. Because women can collect their own samples for this test at home, it has the potential to overcome some of the barriers to Pap tests. The iPap trial will evaluate whether mailing an HPV self-sampling kit increases participation by never- and under-screened women within a cervical screening program. Methods/Design The iPap trial is a parallel randomized controlled, open label, trial. Participants will be Victorian women age 30–69 years, for whom there is either no record on the Victorian Cervical Cytology Registry (VCCR) of a Pap test (never-screened) or the last recorded Pap test was between five to fifteen years ago (under-screened). Enrolment information from the Victorian Electoral Commission will be linked to the VCCR to determine the never-screened women. Variables that will be used for record linkage include full name, address and date of birth. Never- and under-screened women will be randomly allocated to either receive an invitation letter with an HPV self-sampling kit or a reminder letter to attend for a Pap test, which is standard practice for women overdue for a test in Victoria. All resources have been focus group tested. The primary outcome will be the proportion of women who participate, by returning an HPV self-sampling kit for women in the self-sampling arm, and notification of a Pap test result to the Registry for women in the Pap test arm at 3 and 6 months after mailout. The most important secondary outcome is the proportion of test-positive women who undergo further investigations at 6 and 12 months after mailout of results. Discussion The iPap trial will provide

  8. Home-based HPV self-sampling improves participation by never-screened and under-screened women: Results from a large randomized trial (iPap) in Australia.

    PubMed

    Sultana, Farhana; English, Dallas R; Simpson, Julie A; Drennan, Kelly T; Mullins, Robyn; Brotherton, Julia M L; Wrede, C David; Heley, Stella; Saville, Marion; Gertig, Dorota M

    2016-07-15

    We conducted a randomized controlled trial to determine whether HPV self-sampling increases participation in cervical screening by never- and under-screened (not screened in past 5 years) women when compared with a reminder letter for a Pap test. Never- or under-screened Victorian women aged 30-69 years, not pregnant and with no prior hysterectomy were eligible. Within each stratum (never-screened and under-screened), we randomly allocated 7,140 women to self-sampling and 1,020 to Pap test reminders. The self-sampling kit comprised a nylon tipped flocked swab enclosed in a dry plastic tube. The primary outcome was participation, as indicated by returning a swab or undergoing a Pap test; the secondary outcome, for women in the self-sampling arm with a positive HPV test, was undergoing appropriate clinical investigation. The Roche Cobas® 4800 test was used to measure presence of HPV DNA. Participation was higher for the self-sampling arm: 20.3 versus 6.0% for never-screened women (absolute difference 14.4%, 95% CI: 12.6-16.1%, p < 0.001) and 11.5 versus 6.4% for under-screened women (difference 5.1%, 95% CI: 3.4-6.8%, p < 0.001). Of the 1,649 women who returned a swab, 45 (2.7%) were positive for HPV16/18 and 95 (5.8%) were positive for other high-risk HPV types. Within 6 months, 28 (62.2%) women positive for HPV16/18 had colposcopy as recommended and nine (20%) had cytology only. Of women positive for other high-risk HPV types, 78 (82.1%) had a Pap test as recommended. HPV self-sampling improves participation in cervical screening for never- and under-screened women and most women with HPV detected have appropriate clinical investigation. PMID:26850941

  9. Blocked randomization with randomly selected block sizes.

    PubMed

    Efird, Jimmy

    2011-01-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes. PMID:21318011
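
    A minimal sketch of the technique the paper describes, assuming a two-arm trial: permuted blocks are generated one at a time, with each block's size drawn at random from a set of multiples of the number of arms. Function and parameter names are illustrative, not taken from the paper.

    import random

    def blocked_randomization(n_participants, arms=("A", "B"),
                              block_sizes=(2, 4, 6), seed=None):
        """Allocation sequence from permuted blocks of randomly chosen sizes.

        Each block size must be a multiple of the number of arms so that
        every completed block is exactly balanced."""
        rng = random.Random(seed)
        allocation = []
        while len(allocation) < n_participants:
            size = rng.choice(block_sizes)
            block = list(arms) * (size // len(arms))  # balanced block
            rng.shuffle(block)                        # permute within the block
            allocation.extend(block)
        return allocation[:n_participants]

    print(blocked_randomization(12, seed=42))

    Because the block size changes unpredictably, an unblinded investigator can no longer infer the tail of a block, which is the selection-bias leak the paper warns about for fixed block sizes.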

  10. A Comparison of the Number of Men Who Have Sex with Men among Rural-To-Urban Migrants with Non-Migrant Rural and Urban Residents in Wuhan, China: A GIS/GPS-Assisted Random Sample Survey Study

    PubMed Central

    Chen, Xinguang; Yu, Bin; Zhou, Dunjin; Zhou, Wang; Gong, Jie; Li, Shiyue; Stanton, Bonita

    2015-01-01

    Background Mobile populations and men who have sex with men (MSM) play an increasing role in the current HIV epidemic in China and across the globe. While considerable research has addressed both of these at-risk populations, more effective HIV control requires accurate data on the number of MSM at the population level, particularly MSM among migrant populations. Methods Survey data from a random sample of male rural-to-urban migrants (aged 18-45, n=572) in Wuhan, China were analyzed and compared with those of randomly selected non-migrant urban (n=566) and rural (n=580) counterparts. GIS/GPS technologies were used for sampling, and the survey estimation method was used for data analysis. Results HIV-related risk behaviors among rural-to-urban migrants were similar to those among the two comparison groups. The estimated proportion of MSM among migrants [95% CI] was 5.8% [4.7, 6.8], higher than the 2.8% [1.2, 4.5] for rural residents and 1.0% [0.0, 2.4] for urban residents. Among the migrants, MSM were more likely than non-MSM to be older, to be married, and to have migrated to more cities. They were also more likely to co-habit with others in rental properties located in new towns and in neighborhoods with fewer old acquaintances and more entertainment establishments. In addition, they were more likely to engage in commercial sex and less likely to consistently use condoms. Conclusion Findings of this study indicate that, compared to rural and urban populations, the migrant population in Wuhan consists of a higher proportion of MSM, who also exhibit higher levels of HIV-related risk behaviors. More effective interventions should target this population with a focus on neighborhood factors, social capital and collective efficacy for risk reduction. PMID:26241900

  11. Random thoughts

    NASA Astrophysics Data System (ADS)

    ajansen; kwhitefoot; panteltje1; edprochak; sudhakar, the

    2014-07-01

    In reply to the physicsworld.com news story “How to make a quantum random-number generator from a mobile phone” (16 May, http://ow.ly/xFiYc, see also p5), which describes a way of delivering random numbers by counting the number of photons that impinge on each of the individual pixels in the camera of a Nokia N9 smartphone.

  12. Population Processes Sampled at Random Times

    NASA Astrophysics Data System (ADS)

    Beghin, Luisa; Orsingher, Enzo

    2016-04-01

    In this paper we study the iterated birth process, examining its first-passage time distributions and hitting probabilities. Furthermore, linear birth processes and linear and sublinear death processes at Poisson times are investigated. In particular, we study the hitting times in all cases and examine their long-range behavior. The time-changed population models considered here display upward (birth processes) and downward (death processes) jumps of arbitrary size and, for this reason, can be adopted as adequate models in ecology, epidemiology and finance under stress conditions.

  13. SAMPLING OSCILLOSCOPE

    DOEpatents

    Sugarman, R.M.

    1960-08-30

    An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sample waveform to pass its peak amplitude before the circuit selects it for display.

  14. Random Vibrations

    NASA Technical Reports Server (NTRS)

    Messaro, Semma; Harrison, Phillip

    2010-01-01

    Ares I zonal random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent and reentry. Random vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelope all the applicable environments where each component was located. Applicable Ares I vehicle drawings and design information needed to be assessed to determine the location(s) for each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel and documenting them in a report using Microsoft Word. Conclusion: Random vibration liftoff, ascent, and green run design and test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelope zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor, it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.

  15. Individual and contextual determinants of resident-on-resident abuse in nursing homes: a random sample telephone survey of adults with an older family member in a nursing home.

    PubMed

    Schiamberg, Lawrence B; von Heydrich, Levente; Chee, Grace; Post, Lori A

    2015-01-01

    Few empirical investigations of elder abuse in nursing homes address the frequency and determinants of resident-on-resident abuse (RRA). A random sample of 452 adults with an older adult relative, ≥65 years of age, in a nursing home completed a telephone survey regarding elder abuse experienced by that elder family member. Using a Linear Structural Relations (LISREL) modeling design, the study examined the association with RRA of nursing home resident demographic characteristics (e.g., age, gender), health and behavioral characteristics (e.g., diagnosis of Alzheimer's disease, Activities of Daily Living (ADLs), Instrumental Activities of Daily Living (IADLs)), types of staff abuse (e.g., physical, emotional), and factors beyond the immediate nursing home setting (e.g., emotional closeness of resident with family members). Mplus statistical software was used for structural equation modeling. Main findings indicated that resident-on-resident mistreatment of elderly nursing home residents is associated with the age of the nursing home resident, all forms of staff abuse, all ADLs and IADLs, and emotional closeness of the older adult to the family. PMID:26026215

  16. GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES

    EPA Science Inventory

    This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...

  17. Is random access memory random?

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.

  18. Response of the Elderly to Disaster: An Age-Stratified Analysis.

    ERIC Educational Resources Information Center

    Bolin, Robert; Klenow, Daniel J.

    1982-01-01

    Analyzed the effect of age on elderly tornado victims' (N=62) responses to stress effects. Compared to younger victims (N=240), the elderly did not suffer disproportionate material losses, but were more likely to be injured and to have a death in the household. Elderly victims had a lower incidence of emotional and family problems. (Author/JAC)

  19. Cluster randomization: a trap for the unwary.

    PubMed Central

    Underwood, M; Barnett, A; Hajioff, S

    1998-01-01

    Controlled trials that randomize by practice can provide robust evidence to inform patient care. However, compared with randomizing by each individual patient, this approach may have substantial implications for sample size calculations and the interpretation of results. An increased awareness of these effects will improve the quality of research based on randomization by practice. PMID:9624757

  20. Deterministic multidimensional nonuniform gap sampling

    NASA Astrophysics Data System (ADS)

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
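
    The constrained random deviates that the deterministic method averages over can be sketched as follows. This is a minimal one-dimensional Poisson-gap scheduler in the spirit of the sampling summarized above; the sine envelope and the rate-adjustment loop are common choices assumed here, not the authors' exact algorithm.

    import math
    import random

    def poisson_gap_schedule(grid_size, n_samples, seed=0):
        """Pick n_samples points on a Nyquist grid with Poisson-distributed
        gaps whose mean follows a sine envelope (small gaps near the edges).
        The rate scale is adjusted until a draw hits the target count, or
        the closest draw is returned after a fixed number of tries."""
        rng = random.Random(seed)

        def poisson(lam):
            # Knuth's method for a Poisson deviate with mean lam
            k, thresh, p = 0, math.exp(-lam), 1.0
            while True:
                p *= rng.random()
                if p <= thresh:
                    return k
                k += 1

        def draw(scale):
            points, k = [], 0
            while k < grid_size:
                points.append(k)
                lam = scale * math.sin((k + 0.5) / grid_size * math.pi / 2)
                k += 1 + poisson(lam)
            return points

        best, scale = None, grid_size / n_samples
        for _ in range(200):
            points = draw(scale)
            if best is None or abs(len(points) - n_samples) < abs(len(best) - n_samples):
                best = points
            if len(points) == n_samples:
                break
            # too many points -> enlarge the gaps, and vice versa
            scale *= 1.05 if len(points) > n_samples else 0.95
        return best

    print(poisson_gap_schedule(128, 32))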

  1. Sampling and Sample Preparation

    NASA Astrophysics Data System (ADS)

    Morawicki, Rubén O.

    Quality attributes in food products, raw materials, or ingredients are measurable characteristics that need monitoring to ensure that specifications are met. Some quality attributes can be measured online by using specially designed sensors and results obtained in real time (e.g., color of vegetable oil in an oil extraction plant). However, in most cases quality attributes are measured on small portions of material that are taken periodically from continuous processes or on a certain number of small portions taken from a lot. The small portions taken for analysis are referred to as samples, and the entire lot or the entire production for a certain period of time, in the case of continuous processes, is called a population. The process of taking samples from a population is called sampling. If the procedure is done correctly, the measurable characteristics obtained for the samples become a very accurate estimation of the population.

  2. Electron microscopic stereological study of collagen fibrils in bovine articular cartilage: volume and surface densities are best obtained indirectly (from length densities and diameters) using isotropic uniform random sampling

    PubMed Central

    LÅNGSJÖ, TEEMU K.; HYTTINEN, MIKA; PELTTARI, ALPO; KIRALY, KARI; AROKOSKI, JARI; HELMINEN, HEIKKI J.

    1999-01-01

    Results obtained by the indirect zonal isotropic uniform random (IUR) estimation were compared with those obtained by the direct point and interception counting methods on vertical (VS) or IUR sections in a stereological study of bovine articular cartilage collagen fibrils at the ultrastructural level. Besides comparisons between the direct and indirect estimations (direct IUR vs indirect IUR estimations) and between different sampling methods (VS vs IUR sampling), simultaneous comparison of the 2 issues took place (direct VS vs indirect IUR estimation). Using the direct VS method, articular cartilage superficial zone collagen volume fraction (Vv 41%) was 67% and fibril surface density (Sv 0.030 nm2/nm3) 15% higher (P<0.05) than values obtained by the indirect IUR method (Vv 25% and Sv 0.026 nm2/nm3). The same was observed when the direct IUR method was used: collagen volume fraction (Vv 40%) was 63% and fibril surface density (Sv 0.032 nm2/nm3) 21% higher (P<0.05) than those obtained by the indirect IUR technique. Similarly, in the deep zone of articular cartilage direct VS and direct IUR methods gave 50 and 55% higher (P<0.05) collagen fibril volume fractions (Vv 43 and 44% vs 29%) and the direct IUR method 25% higher (P<0.05) fibril surface density values (Sv 0.025 vs 0.020 nm2/nm3) than the indirect IUR estimation. On theoretical grounds, scrutiny calculations, as well as earlier reports, it is concluded that the direct VS and direct IUR methods systematically overestimated the Vv and Sv of collagen fibrils. This bias was due to the overprojection which derives from the high section thickness in relation to collagen fibril diameter. On the other hand, factors that during estimation tend to underestimate Vv and Sv, such as profile overlapping and truncation (‘fuzzy’ profiles), seemed to cause less bias. As length density (Lv) and collagen fibril diameter are minimally biased by the high relative section thickness, the indirect IUR method, based on

  3. Randomly Hyperbranched Polymers

    NASA Astrophysics Data System (ADS)

    Konkolewicz, Dominik; Gilbert, Robert G.; Gray-Weale, Angus

    2007-06-01

    We describe a model for the structures of randomly hyperbranched polymers in solution, and find a logarithmic growth of radius with polymer mass. We include segmental overcrowding, which puts an upper limit on the density. The model is tested against simulations, against data on amylopectin, a major component of starch, on glycogen, and on polyglycerols. For samples of synthetic polyglycerol and glycogen, our model holds well for all the available data. The model reveals higher-level scaling structure in glycogen, related to the β particles seen in electron microscopy.

  4. 40 CFR 761.348 - Contemporaneous sampling.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... minutes after start up of the waste output, or if the waste is currently being generated, after the random... the start up of waste generation. Similarly, if waste output is ongoing and the random start.... (b) Determine a sample collection start time using a random number generator or a random number...

  5. Adaptive Sampling Designs.

    ERIC Educational Resources Information Center

    Flournoy, Nancy

    Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…

  6. Random broadcast on random geometric graphs

    SciTech Connect

    Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs, when with high probability: (i) the RGG is connected, (ii) there exists a giant component in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph, or of the giant component, for regimes (i) or (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
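
    The push model defined in the abstract is straightforward to simulate; the sketch below counts the number of rounds until every node is informed, assuming a connected graph. The adjacency structure and the example cycle are illustrative.

    import random

    def push_broadcast_rounds(adjacency, start, rng=None):
        """Each round, every informed node picks one neighbor uniformly at
        random and informs it; returns rounds until all nodes are informed.
        Assumes the graph is connected, otherwise the loop never ends."""
        rng = rng or random.Random(1)
        informed, everyone = {start}, set(adjacency)
        rounds = 0
        while informed != everyone:
            newly = {rng.choice(adjacency[node]) for node in informed}
            informed |= newly
            rounds += 1
        return rounds

    # Example: a cycle of 64 nodes, where broadcast needs Theta(n) rounds
    n = 64
    cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    print(push_broadcast_rounds(cycle, start=0))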

  7. Quantumness, Randomness and Computability

    NASA Astrophysics Data System (ADS)

    Solis, Aldo; Hirsch, Jorge G.

    2015-06-01

    Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling and randomness. We discuss the challenge in defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.

  8. How random is a random vector?

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams", tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
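
    Since the Wilks standard deviation is the square root of the covariance determinant, it takes only a few lines to compute from a sample; the synthetic data below (a hypothetical example, not from the paper) shows that correlation between components shrinks it.

    import numpy as np

    def wilks_std(x):
        """Square root of the generalized variance (the determinant of the
        sample covariance matrix); x has shape (n_observations, n_dims)."""
        return np.sqrt(np.linalg.det(np.cov(x, rowvar=False)))

    rng = np.random.default_rng(0)
    independent = rng.normal(size=(1000, 3))
    mixing = np.array([[1.0, 0.9, 0.0],
                       [0.0, 0.1, 0.0],
                       [0.0, 0.0, 1.0]])
    correlated = independent @ mixing  # second column nearly copies the first
    print(wilks_std(independent))  # close to 1
    print(wilks_std(correlated))   # much smaller: components are correlated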

  9. Rationale, Study Design and Sample Characteristics of a Randomized Controlled Trial of Directly Administered Antiretroviral Therapy for HIV-infected Prisoners Transitioning to the Community - A Potential Conduit to Improved HIV Treatment Outcomes

    PubMed Central

    Tehrani, Ali S. Saber; Springer, Sandra A.; Qiu, Jingjun; Herme, Maua; Wickersham, Jeffrey; Altice, Frederick L.

    2012-01-01

    Background HIV-infected prisoners experience poor HIV treatment outcomes post-release. Directly administered antiretroviral therapy (DAART) is a CDC-designated, evidence-based adherence intervention for drug users, yet untested among released prisoners. Methods Sentenced HIV-infected prisoners on antiretroviral therapy (ART) and returning to New Haven or Hartford, Connecticut were recruited into a randomized controlled trial (RCT) and allocated 2:1 to 6 months of DAART versus self-administered therapy (SAT); all subjects received case management services. Subjects meeting DSM-IV criteria for opioid dependence were offered immediate medication-assisted treatment. Trained outreach workers provided DAART once daily, seven days per week, including behavioral skills training during the last intervention month. Both study groups were assessed for 6 months after the intervention period. Assessments occurred within 90 days pre-release (baseline), on the day of release, and then monthly for 12 months. Viral load (VL) and CD4 testing was conducted at baseline and quarterly; genotypic resistance testing was conducted at baseline, 6 and 12 months. The primary outcome was pre-defined as viral suppression (VL<400 copies/mL) at 6 months. Results Between 2004 and 2009, 279 participants were screened, of whom 202 met eligibility criteria and 154 were ultimately enrolled in the study; 103 subjects were randomized to DAART and 51 to SAT. Subjects were mostly male (81.2%), people of color (87.0%), had an alcohol use disorder (39.7%), had underlying depression (54.2%), were virally suppressed (78.8%) and had a mean CD4 of 390.7 cells/mL. Conclusions Outcomes from this RCT will contribute greatly to understanding HIV treatment outcomes after release from prison, a period associated with adverse HIV and other medical consequences. PMID:22101218

  10. Comparing MTI randomization procedures to blocked randomization.

    PubMed

    Berger, Vance W; Bejleri, Klejda; Agnor, Rebecca

    2016-02-28

    Randomization is one of the cornerstones of the randomized clinical trial, and there is no shortage of methods one can use to randomize patients to treatment groups. When deciding which one to use, researchers must bear in mind that not all randomization procedures are equally adept at achieving the objective of randomization, namely, balanced treatment groups. One threat is chronological bias, and permuted blocks randomization does such a good job at controlling chronological bias that it has become the standard randomization procedure in clinical trials. But permuted blocks randomization is especially vulnerable to selection bias, so as a result, the maximum tolerated imbalance (MTI) procedures were proposed as better alternatives. In comparing the procedures, we have somewhat of a false controversy, in that actual practice goes uniformly one way (permuted blocks), whereas scientific arguments go uniformly the other way (MTI procedures). There is no argument in the literature to suggest that the permuted block design is better than or even as good as the MTI procedures, but this dearth is matched by an equivalent one regarding actual trials using the MTI procedures. So the 'controversy', if we are to call it that, pits misguided precedent against sound advice that tends to be ignored in practice. We shall review the issues to determine scientifically which of the procedures is better and, therefore, should be used. PMID:26337607
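
    One of the simplest MTI procedures is the big-stick design; the sketch below conveys the general idea (tolerate imbalance up to a cap, then force the lagging arm) and is an illustration, not code from the paper.

    import random

    def big_stick_sequence(n, mti=3, seed=7):
        """Allocate by fair coin unless |#A - #B| has reached the maximum
        tolerated imbalance, in which case the lagging arm is forced."""
        rng = random.Random(seed)
        a = b = 0
        sequence = []
        for _ in range(n):
            if a - b >= mti:
                arm = "B"           # A is ahead by the full MTI: force B
            elif b - a >= mti:
                arm = "A"
            else:
                arm = rng.choice("AB")
            sequence.append(arm)
            a += arm == "A"
            b += arm == "B"
        return "".join(sequence)

    print(big_stick_sequence(24))

    Unlike a permuted block, no allocation here is fully predictable until the imbalance cap is hit, which is the source of the selection-bias advantage the authors describe.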

  11. Binding of non-target microorganisms from food washes to anti-Salmonella and anti-E. coli O157 immuno-magnetic beads: minimizing the errors of random sampling in extreme dilute systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    For most applications, 3-5 observations (n) are utilized for estimating total aerobic plate count in an average population greater than about 50 (mu) cells or colony forming units per sampled volume. We have chosen to utilize the 6x6 drop plate method because it offers the means to rapidly perform a...

  12. Security of practical private randomness generation

    NASA Astrophysics Data System (ADS)

    Pironio, Stefano; Massar, Serge

    2013-01-01

    Measurements on entangled quantum systems necessarily yield outcomes that are intrinsically unpredictable if they violate a Bell inequality. This property can be used to generate certified randomness in a device-independent way, i.e., without making detailed assumptions about the internal working of the quantum devices used to generate the random numbers. Furthermore these numbers are also private; i.e., they appear random not only to the user but also to any adversary that might possess a perfect description of the devices. Since this process requires a small initial random seed to sample the behavior of the quantum devices and to extract uniform randomness from the raw outputs of the devices, one usually speaks of device-independent randomness expansion. The purpose of this paper is twofold. First, we point out that in most real, practical situations, where the concept of device independence is used as a protection against unintentional flaws or failures of the quantum apparatuses, it is sufficient to show that the generated string is random with respect to an adversary that holds only classical side information; i.e., proving randomness against quantum side information is not necessary. Furthermore, the initial random seed does not need to be private with respect to the adversary, provided that it is generated in a way that is independent from the measured systems. The devices, however, will generate cryptographically secure randomness that cannot be predicted by the adversary, and thus one can, given access to free public randomness, talk about private randomness generation. The theoretical tools to quantify the generated randomness according to these criteria were already introduced in S. Pironio et al. [Nature 464, 1021 (2010), doi:10.1038/nature09008], but the final results were improperly formulated. The second aim of this paper is to correct this inaccurate formulation and therefore lay out a precise theoretical framework for practical device

  13. The MCNP5 Random number generator

    SciTech Connect

    Brown, F. B.; Nagaya, Y.

    2002-01-01

    MCNP and other Monte Carlo particle transport codes use random number generators to produce random variates from a uniform distribution on the unit interval. These random variates are then used in subsequent sampling from probability distributions to simulate the physical behavior of particles during the transport process. This paper describes the new random number generator developed for MCNP Version 5. The new generator will optionally preserve the exact random sequence of previous versions and is entirely conformant to the Fortran-90 standard, hence completely portable. In addition, skip-ahead algorithms have been implemented to efficiently initialize the generator for new histories, a capability that greatly simplifies parallel algorithms. Further, the precision of the generator has been increased, extending the period by a factor of 10^5. Finally, the new generator has been subjected to 3 different sets of rigorous and extensive statistical tests to verify that it produces a sufficiently random sequence.
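
    The skip-ahead idea can be sketched generically for any linear congruential generator: k steps compose to a single affine map that binary decomposition builds in O(log k) time. The constants below are Knuth's MMIX LCG constants, used only as stand-ins; MCNP5's own 63-bit constants are not reproduced here.

    def lcg_skip_ahead(seed, k, a, c, m):
        """State k steps ahead of `seed` for x <- (a*x + c) mod m.
        Builds the net map x -> A*x + C by doubling: one step is
        x -> h*x + f, and two copies compose to h' = h*h, f' = f*h + f."""
        A, C = 1, 0   # accumulated map for the bits of k processed so far
        h, f = a, c   # map for the current power-of-two step count
        while k > 0:
            if k & 1:
                A, C = (A * h) % m, (C * h + f) % m
            h, f = (h * h) % m, (f * h + f) % m
            k >>= 1
        return (A * seed + C) % m

    # Check against sequential stepping with Knuth's MMIX constants
    a, c, m = 6364136223846793005, 1442695040888963407, 2**64
    s = 42
    for _ in range(5):
        s = (a * s + c) % m
    assert lcg_skip_ahead(42, 5, a, c, m) == s
    print("skip-ahead matches 5 sequential steps")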

  14. 9 CFR 590.350 - Procedures for selecting appeal samples.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... containers plus an equal number of containers selected at random. When the original sample containers cannot be located, the appeal sample shall consist of product taken at random from double the number of... the original sample containers plus an equal number of containers selected at random. A...

  15. Random Item IRT Models

    ERIC Educational Resources Information Center

    De Boeck, Paul

    2008-01-01

    It is common practice in IRT to consider items as fixed and persons as random. Both continuous and categorical person parameters are most often random variables, whereas for items only continuous parameters are used, and they are commonly of the fixed type, although exceptions occur. It is shown in the present article that random item parameters…

  16. Randomization in robot tasks

    NASA Technical Reports Server (NTRS)

    Erdmann, Michael

    1992-01-01

    This paper investigates the role of randomization in the solution of robot manipulation tasks. One example of randomization is shown by the strategy of shaking a bin holding a part in order to orient the part in a desired stable state with some high probability. Randomization can be useful for mobile robot navigation and as a means of guiding the design process.

  17. Importance sampling: promises and limitations.

    SciTech Connect

    West, Nicholas J.; Swiler, Laura Painton

    2010-04-01

    Importance sampling is an unbiased sampling method used to sample random variables from different densities than originally defined. These importance sampling densities are constructed to pick 'important' values of input random variables to improve the estimation of a statistical response of interest, such as a mean or probability of failure. Conceptually, importance sampling is very attractive: for example one wants to generate more samples in a failure region when estimating failure probabilities. In practice, however, importance sampling can be challenging to implement efficiently, especially in a general framework that will allow solutions for many classes of problems. We are interested in the promises and limitations of importance sampling as applied to computationally expensive finite element simulations which are treated as 'black-box' codes. In this paper, we present a customized importance sampler that is meant to be used after an initial set of Latin Hypercube samples has been taken, to help refine a failure probability estimate. The importance sampling densities are constructed based on kernel density estimators. We examine importance sampling with respect to two main questions: is importance sampling efficient and accurate for situations where we can only afford small numbers of samples? And does importance sampling require the use of surrogate methods to generate a sufficient number of samples so that the importance sampling process does increase the accuracy of the failure probability estimate? We present various case studies to address these questions.
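
    A minimal sketch of the idea on a toy problem, with a one-dimensional Gaussian standing in for the expensive black-box simulation: the proposal density is centered on the failure region and each sample is reweighted by the likelihood ratio. Names and numbers are illustrative.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    n = 10_000
    threshold = 3.0            # "failure" when the response exceeds this

    # Crude Monte Carlo estimate of P[X > 3] for X ~ N(0, 1)
    x_mc = rng.normal(size=n)
    p_mc = (x_mc > threshold).mean()

    # Importance sampling: draw from N(3, 1) and reweight by f(x)/q(x)
    x_is = rng.normal(loc=threshold, size=n)
    weights = norm.pdf(x_is) / norm.pdf(x_is, loc=threshold)
    p_is = ((x_is > threshold) * weights).mean()

    exact = 1 - norm.cdf(threshold)
    print(f"crude MC {p_mc:.5f}, importance sampling {p_is:.5f}, exact {exact:.5f}")

    With the same budget, nearly half of the proposal draws land in the failure region, so the reweighted estimate has far lower variance than the crude one, which sees only a handful of failures.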

  18. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual, appendix 2

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    The FORTRAN programs RANDOM3 and RANDOM4 are documented. They are based on fatigue strength reduction, using a probabilistic constitutive model. They predict the random lifetime of an engine component to reach a given fatigue strength. Included in this user manual are details regarding the theoretical backgrounds of RANDOM3 and RANDOM4. Appendix A gives information on the physical quantities, their symbols, FORTRAN names, and both SI and U.S. Customary units. Appendices B and C include photocopies of the actual computer printout corresponding to the sample problems. Appendices D and E detail the IMSL, Version 10(1), subroutines and functions called by RANDOM3 and RANDOM4 and the SAS/GRAPH(2) programs that can be used to plot both the probability density functions (p.d.f.) and the cumulative distribution functions (c.d.f.).

  19. Quantum random number generation

    SciTech Connect

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing

    2016-01-01

    Here, quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness — coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  20. Quantum random number generation

    DOE PAGESBeta

    Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing

    2016-06-28

    Here, quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.

  1. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good-quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback in a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.

  2. Capillary sample

    MedlinePlus

    ... using capillary blood sampling. Disadvantages to capillary blood sampling include: Only a limited amount of blood can be drawn using this method. The procedure has some risks (see below). Capillary ...

  3. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path length. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.

  4. Fast generation of sparse random kernel graphs

    DOE PAGESBeta

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  5. Fast generation of sparse random kernel graphs

    SciTech Connect

    Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo

    2015-09-10

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most ο(n(logn)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.

  6. Fast Generation of Sparse Random Kernel Graphs

    PubMed Central

    2015-01-01

    The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most 𝒪(n(log n)²). As a practical example we show how to generate samples of power-law degree distribution graphs with tunable assortativity. PMID:26356296

  7. Random errors in egocentric networks.

    PubMed

    Almquist, Zack W

    2012-10-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412

  8. Random errors in egocentric networks

    PubMed Central

    Almquist, Zack W.

    2013-01-01

    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground truth egocentric network sample based on facebook-friendships. Results show that 5–20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
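
    The error process studied in the paper can be reproduced in a few lines: each absent tie appears with a false-positive rate, each present tie is dropped with a false-negative rate, and the resulting bias in a network statistic is read off directly. The sketch below is illustrative, not the authors' simulation code.

    import itertools
    import random

    def perturb_edges(nodes, edges, fp_rate, fn_rate, seed=0):
        """Observed edge set after false-positive/false-negative noise."""
        rng = random.Random(seed)
        true_edges = {frozenset(e) for e in edges}
        observed = set()
        for pair in itertools.combinations(nodes, 2):
            pair = frozenset(pair)
            present = pair in true_edges
            if (present and rng.random() > fn_rate) or \
               (not present and rng.random() < fp_rate):
                observed.add(pair)
        return observed

    nodes = range(20)
    ring = {frozenset((i, (i + 1) % 20)) for i in range(20)}  # true network
    noisy = perturb_edges(nodes, ring, fp_rate=0.05, fn_rate=0.10)
    pairs = 20 * 19 / 2
    print(f"true density {len(ring) / pairs:.3f}, "
          f"observed density {len(noisy) / pairs:.3f}")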

  9. Sampling Development

    PubMed Central

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of the enterprise. This article discusses how to sample development in order to accurately discern the shape of developmental change. The ideal solution is daunting: to summarize behavior over 24-hour intervals and collect daily samples over the critical periods of change. We discuss the magnitude of errors due to undersampling, and the risks associated with oversampling. When daily sampling is not feasible, we offer suggestions for sampling methods that can provide preliminary reference points and provisional sketches of the general shape of a developmental trajectory. Denser sampling then can be applied strategically during periods of enhanced variability, inflections in the rate of developmental change, or in relation to key events or processes that may affect the course of change. Despite the challenges of dense repeated sampling, researchers must take seriously the problem of sampling on a developmental time scale if we are to know the true shape of developmental change. PMID:22140355

  10. Sampling Development

    ERIC Educational Resources Information Center

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  11. Random pulse generator

    NASA Technical Reports Server (NTRS)

    Lindsey, R. S., Jr. (Inventor)

    1975-01-01

    An exemplary embodiment of the present invention provides a source of random-width and randomly spaced rectangular voltage pulses whose mean or average frequency of operation is controllable within prescribed limits of about 10 hertz to 1 megahertz. A pair of thin-film metal resistors is used to provide a differential white-noise voltage pulse source. Pulse shaping and amplification circuitry provide relatively short-duration pulses of constant amplitude which are applied to anti-bounce logic circuitry to prevent ringing effects. The pulse outputs from the anti-bounce circuits are then used to control two one-shot multivibrators whose output comprises the random-width and randomly spaced rectangular pulses. Means are provided for monitoring, calibrating and evaluating the relative randomness of the generator.

  12. Reflecting Random Flights

    NASA Astrophysics Data System (ADS)

    De Gregorio, Alessandro; Orsingher, Enzo

    2015-09-01

    We consider random flights reflecting on the surface of a sphere with center at the origin and with radius R, where reflection is performed by means of circular inversion. The random flights studied in this paper are motions where the orientations of the deviations are uniformly distributed on the unit-radius sphere. We obtain the explicit probability distributions of the position of the moving particle when the number of changes of direction is fixed and equal to n. We show that these distributions involve functions which are solutions of the Euler-Poisson-Darboux equation. The unconditional probability distributions of the reflecting random flights are obtained by suitably randomizing n by means of a fractional-type Poisson process. Random flights reflecting on hyperplanes according to the optical reflection form are considered and the related distributional properties derived.
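
    For reference, the circular inversion named above as the reflection mechanism is the standard map sending a point to the same ray at reciprocal scaled distance, so that the two distances from the origin multiply to R²:

    \[
      x \longmapsto \frac{R^{2}}{\lVert x \rVert^{2}}\, x ,
      \qquad
      \lVert x \rVert \cdot \Bigl\lVert \frac{R^{2}}{\lVert x \rVert^{2}}\, x \Bigr\rVert = R^{2} .
    \]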

  13. Quantum random number generator

    DOEpatents

    Pooser, Raphael C.

    2016-05-10

    A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.

  14. Autonomous Byte Stream Randomizer

    NASA Technical Reports Server (NTRS)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability, possessing the ability to generate the same cryptographically secure sequence on different machines and at different time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
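
    A minimal sketch of the core mechanism, a seeded and therefore invertible Fisher-Yates shuffle of a byte stream. Python's random.Random stands in for the generator; as the abstract specifies, a real implementation would seed a cryptographically secure generator instead.

    import random

    def seeded_shuffle(data: bytes, seed: int) -> bytes:
        """In-place Fisher-Yates shuffle driven by a deterministic seed."""
        buf = bytearray(data)
        rng = random.Random(seed)  # stand-in; use a CSPRNG in practice
        for i in range(len(buf) - 1, 0, -1):
            j = rng.randrange(i + 1)
            buf[i], buf[j] = buf[j], buf[i]
        return bytes(buf)

    def seeded_unshuffle(data: bytes, seed: int) -> bytes:
        """Invert the shuffle by replaying the same swaps in reverse order."""
        buf = bytearray(data)
        rng = random.Random(seed)
        swaps = [(i, rng.randrange(i + 1)) for i in range(len(buf) - 1, 0, -1)]
        for i, j in reversed(swaps):  # each swap is its own inverse
            buf[i], buf[j] = buf[j], buf[i]
        return bytes(buf)

    message = b"attack at dawn"
    scrambled = seeded_shuffle(message, seed=2024)
    assert seeded_unshuffle(scrambled, seed=2024) == message
    print(scrambled)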

  15. Experimental scattershot boson sampling

    PubMed Central

    Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J.; Galvão, Ernesto F.; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio

    2015-01-01

    Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of quantum computational supremacy. PMID:26601164

  16. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling". Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis. PMID:24899564

  17. RANDOM FORESTS FOR PHOTOMETRIC REDSHIFTS

    SciTech Connect

    Carliles, Samuel; Szalay, Alexander S.; Budavari, Tamas; Heinis, Sebastien; Priebe, Carey

    2010-03-20

    The main challenge today in photometric redshift estimation is not in the accuracy but in understanding the uncertainties. We introduce an empirical method based on Random Forests to address these issues. The training algorithm builds a set of optimal decision trees on subsets of the available spectroscopic sample, which provide independent constraints on the redshift of each galaxy. The combined forest estimates have intriguing statistical properties, notable among which are Gaussian errors. We demonstrate the power of our approach on multi-color measurements of the Sloan Digital Sky Survey.
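
    A minimal sketch of the approach on synthetic data: every tree in the forest gives an independent redshift estimate, so the spread across trees yields a per-object error estimate. The feature construction and numbers below are illustrative stand-ins, not the SDSS pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 5000
    colors = rng.normal(size=(n, 5))  # stand-in for broadband colors
    z = 0.5 + 0.1 * colors[:, 0] - 0.05 * colors[:, 2] \
        + rng.normal(scale=0.02, size=n)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(colors[:4000], z[:4000])

    # Per-tree predictions: mean = estimate, spread = empirical error
    per_tree = np.stack([t.predict(colors[4000:]) for t in model.estimators_])
    z_hat, z_err = per_tree.mean(axis=0), per_tree.std(axis=0)
    resid = z_hat - z[4000:]
    print(f"mean residual {resid.mean():+.4f}, "
          f"typical per-object error estimate {z_err.mean():.4f}")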

  18. Elevating sampling

    PubMed Central

    Labuz, Joseph M.; Takayama, Shuichi

    2014-01-01

    Sampling – the process of collecting, preparing, and introducing an appropriate volume element (voxel) into a system – is often underappreciated and pushed behind the scenes in lab-on-a-chip research. What often stands in the way between proof-of-principle demonstrations of potentially exciting technology and its broader dissemination and actual use, however, is the effectiveness of sample collection and preparation. The power of micro- and nanofluidics to improve reactions, sensing, separation, and cell culture cannot be accessed if sampling is not equally efficient and reliable. This perspective will highlight recent successes as well as assess current challenges and opportunities in this area. PMID:24781100

  19. Randomness for Free

    NASA Astrophysics Data System (ADS)

    Chatterjee, Krishnendu; Doyen, Laurent; Gimbert, Hugo; Henzinger, Thomas A.

    We consider two-player zero-sum games on graphs. These games can be classified on the basis of the information of the players and on the mode of interaction between them. On the basis of information the classification is as follows: (a) partial-observation (both players have partial view of the game); (b) one-sided complete-observation (one player has complete observation); and (c) complete-observation (both players have complete view of the game). On the basis of mode of interaction we have the following classification: (a) concurrent (players interact simultaneously); and (b) turn-based (players interact in turn). The two sources of randomness in these games are randomness in transition function and randomness in strategies. In general, randomized strategies are more powerful than deterministic strategies, and randomness in transitions gives more general classes of games. We present a complete characterization for the classes of games where randomness is not helpful in: (a) the transition function (probabilistic transition can be simulated by deterministic transition); and (b) strategies (pure strategies are as powerful as randomized strategies). As consequence of our characterization we obtain new undecidability results for these games.

  20. SAMPLING SYSTEM

    DOEpatents

    Hannaford, B.A.; Rosenberg, R.; Segaser, C.L.; Terry, C.L.

    1961-01-17

    An apparatus is given for the batch sampling of radioactive liquids such as slurries from a system by remote control, while providing shielding for protection of operating personnel from the harmful effects of radiation.

  1. Fluidic sampling

    SciTech Connect

    Houck, E.D.

    1992-04-20

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system that is normally used at ICPP. The volume of the sample taken with a fluidic sampler is dependent on the motive pressure to the fluidic sampler, the sample bottle size, and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid having a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate.

  2. Fluidic sampling

    NASA Astrophysics Data System (ADS)

    Houck, E. D.

    1992-04-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2-39.9 feet at an average rate of 0.02-0.05 gpm (77-192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016-0.026 gpm (60-100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system that is normally used at ICPP. The volume of the sample taken with a fluidic sampler is dependent on the motive pressure to the fluidic sampler, the sample bottle size, and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid having a motive pressure of 140-150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate.

  3. Understanding infidelity: correlates in a national random sample.

    PubMed

    Atkins, D C; Baucom, D H; Jacobson, N S

    2001-12-01

    Infidelity is a common phenomenon in marriages but is poorly understood. The current study examined variables related to extramarital sex using data from the 1991-1996 General Social Surveys. Predictor variables were entered into a logistic regression with presence of extramarital sex as the dependent variable. Results demonstrated that divorce, education, age when first married, and 2 "opportunity" variables--respondent's income and work status--significantly affected the likelihood of having engaged in infidelity. Also, there were 3 significant interactions related to infidelity: (a) between age and gender, (b) between marital satisfaction and religious behavior, and (c) between past divorce and educational level. Implications of these findings and directions for future research are discussed. PMID:11770478
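
    The analysis described is a standard logistic regression with interaction terms; the hedged sketch below reproduces its shape on synthetic data (all variable names and values are invented for illustration, and statsmodels is assumed to be available).

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 500
      df = pd.DataFrame({
          "ems": rng.integers(0, 2, n),          # extramarital sex (0/1), synthetic
          "age": rng.uniform(20, 70, n),
          "gender": rng.integers(0, 2, n),
          "satisfaction": rng.uniform(1, 5, n),
          "religiosity": rng.uniform(1, 5, n),
      })

      # Logistic regression with interactions of the kind the study reports,
      # e.g. age x gender and marital satisfaction x religious behavior.
      model = smf.logit("ems ~ age * gender + satisfaction * religiosity",
                        data=df).fit()
      print(model.summary())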

  4. Space resection model calculation based on Random Sample Consensus algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Xinzhu; Kang, Zhizhong

    2016-03-01

    Resection has been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model to avoid the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way of obtaining the elements of exterior orientation.
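
    The core of the method is the generic RANSAC loop: repeatedly fit a minimal model to a random subset, count inliers, and keep the best consensus set. The sketch below shows that loop for a simple 2-D line fit rather than the paper's DLT resection model; it illustrates the pattern, not the authors' implementation.

      import numpy as np

      def ransac_line(pts, n_iter=200, thresh=0.05, seed=1):
          """Fit y = a*x + b despite gross outliers via RANSAC."""
          rng = np.random.default_rng(seed)
          best = np.array([], dtype=int)
          for _ in range(n_iter):
              i, j = rng.choice(len(pts), size=2, replace=False)
              (x1, y1), (x2, y2) = pts[i], pts[j]
              if x1 == x2:
                  continue
              a = (y2 - y1) / (x2 - x1)
              b = y1 - a * x1
              inliers = np.flatnonzero(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < thresh)
              if inliers.size > best.size:
                  best = inliers
          # Refit on the consensus set, as one would refit the resection model.
          x, y = pts[best].T
          return np.polyfit(x, y, 1), best

      x = np.linspace(0, 1, 100)
      noise = np.random.default_rng(0).normal(0, 0.01, 100)
      pts = np.column_stack([x, 2.0 * x + 0.5 + noise])
      pts[::10, 1] += 5.0                        # inject gross errors
      model, inliers = ransac_line(pts)
      print(model, inliers.size)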

  5. Optofluidic random laser

    NASA Astrophysics Data System (ADS)

    Shivakiran Bhaktha, B. N.; Bachelard, Nicolas; Noblin, Xavier; Sebbah, Patrick

    2012-10-01

    Random lasing is reported in a dye-circulated structured polymeric microfluidic channel. The role of disorder, which results from the limited accuracy of the photolithographic process, is demonstrated by the variation of the emission spectrum with local pump position and by the extreme sensitivity to a local perturbation of the structure. Thresholds comparable to those of conventional microfluidic lasers are achieved, without the hurdle of state-of-the-art cavity fabrication. Potential applications of optofluidic random lasers for on-chip sensors are discussed. The introduction of random lasers to the field of optofluidics is a promising alternative to on-chip laser integration with light and fluidic functionalities.

  6. Sampling Can Produce Solid Findings: Increase Your Effectiveness and Manage Volumes of Data.

    ERIC Educational Resources Information Center

    Champion, Robby

    2002-01-01

    Limiting data collection to a sample group is one way to increase effectiveness in dealing with data. The paper describes how to draw a sample group (random sampling, stratified random sampling, purposeful sampling, and convenient or opportunity sampling) and discusses how to determine the size of the sample group. (SM)
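
    For concreteness, a small sketch of two of the schemes named above (simple random and stratified random sampling) applied to an invented population; the field names are hypothetical.

      import random

      population = [{"id": i, "school": "A" if i < 600 else "B"} for i in range(1000)]

      # Simple random sample: every unit has the same chance of selection.
      simple = random.sample(population, 50)

      # Stratified random sample: a proportional draw from each stratum.
      def stratified(pop, key, n):
          strata = {}
          for unit in pop:
              strata.setdefault(key(unit), []).append(unit)
          sample = []
          for members in strata.values():
              sample.extend(random.sample(members, round(n * len(members) / len(pop))))
          return sample

      sample = stratified(population, key=lambda u: u["school"], n=50)
      print(len(simple), len(sample))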

  7. Random array grid collimator

    DOEpatents

    Fenimore, E.E.

    1980-08-22

    A hexagonally shaped quasi-random no-two-holes-touching grid collimator. The quasi-random array grid collimator eliminates contamination from small-angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array, increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction-limited applications. Mosaicking is also disclosed for reducing fabrication effort.

  8. Sampling apparatus

    DOEpatents

    Gordon, Norman R.; King, Lloyd L.; Jackson, Peter O.; Zulich, Alan W.

    1989-01-01

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface.

  9. Sampling apparatus

    DOEpatents

    Gordon, N.R.; King, L.L.; Jackson, P.O.; Zulich, A.W.

    1989-07-18

    A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface. 15 figs.

  10. Randomized Algorithms for Matrices and Data

    NASA Astrophysics Data System (ADS)

    Mahoney, Michael W.

    2012-03-01

    This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
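
    Statistical leverage scores, as defined in the abstract, are the squared row norms of an orthonormal basis for the dominant subspace; the sketch below computes them via an SVD and uses them as a nonuniform row-sampling distribution (a minimal illustration, not a production randomized solver).

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(1000, 20))            # tall data matrix, n >> d

      # Leverage scores = diagonal of the projection onto A's column space,
      # computed here as row norms of the left singular vectors.
      U, _, _ = np.linalg.svd(A, full_matrices=False)
      leverage = (U ** 2).sum(axis=1)            # sums to rank(A) = 20

      # Sample rows proportionally to leverage for a sketched least-squares solve.
      probs = leverage / leverage.sum()
      rows = rng.choice(A.shape[0], size=100, replace=True, p=probs)
      print(rows[:10])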

  11. Quantum Random Number Generation Using a Quanta Image Sensor.

    PubMed

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698

  12. Quantum Random Number Generation Using a Quanta Image Sensor

    PubMed Central

    Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R.

    2016-01-01

    A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single-photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers at a remarkable data output rate. In this paper, the principle of photon statistics and the theory of entropy are discussed. Sample data were collected with a QIS jot device, and their randomness quality was analyzed. The randomness assessment method and results are discussed. PMID:27367698

  13. Creating Ensembles of Decision Trees Through Sampling

    SciTech Connect

    Kamath,C; Cantu-Paz, E

    2001-07-26

    Recent work in classification indicates that significant improvements in accuracy can be obtained by growing an ensemble of classifiers and having them vote for the most popular class. This paper focuses on ensembles of decision trees that are created with a randomized procedure based on sampling. Randomization can be introduced by using random samples of the training data (as in bagging or boosting) and running a conventional tree-building algorithm, or by randomizing the induction algorithm itself. The objective of this paper is to describe the first experiences with a novel randomized tree induction method that uses a sub-sample of instances at a node to determine the split. The empirical results show that ensembles generated using this approach yield results that are competitive in accuracy and superior in computational cost to boosting and bagging.
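
    The baseline these papers build on is bagging: grow each tree on a bootstrap sample and let the trees vote. The sketch below shows that baseline with scikit-learn (the papers' own novelty, subsampling instances at each node, lives inside the tree-induction code and is not reproduced here).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=1000, random_state=0)
      rng = np.random.default_rng(0)

      trees = []
      for _ in range(25):
          idx = rng.integers(0, len(X), len(X))   # bootstrap sample of the training data
          trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

      # Each tree votes; the ensemble predicts the most popular class.
      votes = np.stack([tree.predict(X) for tree in trees])
      majority = (votes.mean(axis=0) > 0.5).astype(int)
      print("training accuracy of the ensemble:", (majority == y).mean())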

  14. Creating ensembles of decision trees through sampling

    SciTech Connect

    Kamath, C; Cantu-Paz, E

    2001-02-02

    Recent work in classification indicates that significant improvements in accuracy can be obtained by growing an ensemble of classifiers and having them vote for the most popular class. This paper focuses on ensembles of decision trees that are created with a randomized procedure based on sampling. Randomization can be introduced by using random samples of the training data (as in bagging or arcing) and running a conventional tree-building algorithm, or by randomizing the induction algorithm itself. The objective of this paper is to describe our first experiences with a novel randomized tree induction method that uses a subset of samples at a node to determine the split. Our empirical results show that ensembles generated using this approach yield results that are competitive in accuracy and superior in computational cost.

  15. Random phase textures: theory and synthesis.

    PubMed

    Galerne, Bruno; Gousseau, Yann; Morel, Jean-Michel

    2011-01-01

    This paper explores the mathematical and algorithmic properties of two sample-based texture models: random phase noise (RPN) and asymptotic discrete spot noise (ADSN). These models make it possible to synthesize random phase textures. They arguably derive from linearized versions of two early Julesz texture discrimination theories. The ensuing mathematical analysis shows that, contrary to some statements in the literature, RPN and ADSN are different stochastic processes. Nevertheless, numerous experiments also suggest that the textures obtained by these algorithms from identical samples are perceptually similar. The relevance of this study is enhanced by three technical contributions providing solutions to obstacles that prevented the use of RPN or ADSN to emulate textures. First, the RPN and ADSN algorithms are extended to color images. Second, a preprocessing step is proposed to avoid artifacts due to the nonperiodicity of real-world texture samples. Finally, the method is extended to synthesize textures of arbitrary size from a given sample. PMID:20550995
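
    In its simplest grayscale form, RPN keeps the Fourier modulus of the sample and draws a fresh random phase; the sketch below is a minimal version of that idea (the paper's color extension and border preprocessing are omitted).

      import numpy as np

      def random_phase_noise(texture, seed=0):
          """Keep |FFT| of the sample, replace the phase with random values
          made antisymmetric so the synthesized image is real."""
          rng = np.random.default_rng(seed)
          modulus = np.abs(np.fft.fft2(texture))
          phase = rng.uniform(-np.pi, np.pi, texture.shape)
          flipped = np.roll(phase[::-1, ::-1], shift=(1, 1), axis=(0, 1))
          phase = 0.5 * (phase - flipped)        # enforce phase(-k) = -phase(k)
          return np.real(np.fft.ifft2(modulus * np.exp(1j * phase)))

      sample = np.random.default_rng(1).random((64, 64))
      synthesis = random_phase_noise(sample)
      print(synthesis.shape, synthesis.dtype)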

  16. Random laser action in bovine semen

    NASA Astrophysics Data System (ADS)

    Smuk, Andrei; Lazaro, Edgar; Olson, Leif P.; Lawandy, N. M.

    2011-03-01

    Experiments using bovine semen reveal that the addition of a high-gain water-soluble dye results in random laser action when excited by a Q-switched, frequency-doubled Nd:YAG laser. The data show that the linewidth collapse of the emission is correlated with the sperm count of the individual samples, potentially making this a rapid, low-sample-volume approach to count determination.

  17. Statistical properties of randomization in clinical trials.

    PubMed

    Lachin, J M

    1988-12-01

    This is the first of five articles on the properties of different randomization procedures used in clinical trials. This paper presents definitions and discussions of the statistical properties of randomization procedures as they relate to both the design of a clinical trial and the statistical analysis of trial results. The subsequent papers consider, respectively, the properties of simple (complete), permuted-block (i.e., blocked), and urn (adaptive biased-coin) randomization. The properties described herein are the probabilities of treatment imbalances and the potential effects on the power of statistical tests; the permutational basis for statistical tests; and the potential for experimental biases in the assessment of treatment effects due either to the predictability of the random allocations (selection bias) or the susceptibility of the randomization procedure to covariate imbalances (accidental bias). For most randomization procedures, the probabilities of overall treatment imbalances are readily computed, even when a stratified randomization is used. This is important because treatment imbalance may affect statistical power. It is shown, however, that treatment imbalance must be substantial before power is more than trivially affected. The differences between a population versus a permutation model as a basis for a statistical test are reviewed. It is argued that a population model can only be invoked in clinical trials as an untestable assumption, rather than being formally based on sampling at random from a population. On the other hand, a permutational analysis based on the randomization actually employed requires no assumptions regarding the origin of the samples of patients studied. The large sample permutational distribution of the family of linear rank tests is described as a basis for easily conducting a variety of permutation tests. Subgroup (stratified) analyses, analyses when some data are missing, and regression model analyses are also
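
    The permutational basis for a test, mentioned above, is easiest to see in code: the reference distribution comes from re-randomizing the observed allocations, with no assumption of sampling from a population. A minimal two-sample sketch on synthetic data:

      import numpy as np

      def permutation_test(a, b, n_perm=10000, seed=0):
          """Two-sided permutation test on the difference in group means."""
          rng = np.random.default_rng(seed)
          observed = a.mean() - b.mean()
          pooled = np.concatenate([a, b])
          hits = 0
          for _ in range(n_perm):
              perm = rng.permutation(pooled)
              diff = perm[:a.size].mean() - perm[a.size:].mean()
              hits += abs(diff) >= abs(observed)
          return hits / n_perm

      a = np.random.default_rng(1).normal(0.5, 1, 30)   # treatment, synthetic
      b = np.random.default_rng(2).normal(0.0, 1, 30)   # control, synthetic
      print("p =", permutation_test(a, b))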

  18. Sampling Strategy

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Three locations to the right of the test dig area are identified for the first samples to be delivered to the Thermal and Evolved Gas Analyzer (TEGA), the Wet Chemistry Lab (WCL), and the Optical Microscope (OM) on NASA's Phoenix Mars Lander. These sampling areas are informally labeled 'Baby Bear', 'Mama Bear', and 'Papa Bear' respectively. This image was taken on the seventh day of the Mars mission, or Sol 7 (June 1, 2008) by the Surface Stereo Imager aboard NASA's Phoenix Mars Lander.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  19. Random walks on networks

    NASA Astrophysics Data System (ADS)

    Donnelly, Isaac

    Random walks on lattices are a well-used model for diffusion in continuous media. They have been used to model subdiffusive systems, systems with forcing, and reactions, as well as combinations of the three. We extend the traditional random walk framework to networks to obtain novel results. As an example, due to the small graph diameter, the early-time behaviour of subdiffusive dynamics dominates the observed system, which has implications for models of the brain or airline networks. I would like to thank the Australian American Fulbright Association.

  20. Intermittency and random matrices

    NASA Astrophysics Data System (ADS)

    Sokoloff, Dmitry; Illarionov, E. A.

    2015-08-01

    A spectacular phenomenon of intermittency, i.e. a progressive growth of higher statistical moments of a physical field excited by an instability in a random medium, attracted the attention of Zeldovich in the last years of his life. At that time, the mathematical aspects underlying the physical description of this phenomenon were still under development and relations between various findings in the field remained obscure. Contemporary results from the theory of the product of independent random matrices (the Furstenberg theory) allowed the elaboration of the phenomenon of intermittency in a systematic way. We consider applications of the Furstenberg theory to some problems in cosmology and dynamo theory.

  1. Optimal hemicube sampling

    SciTech Connect

    Max, N.

    1992-12-17

    Radiosity algorithms for global illumination, either "gathering" or "shooting" versions, depend on the calculation of form factors. It is possible to calculate the form factors analytically, but this is difficult when occlusion is involved, so sampling methods are usually preferred. The necessary visibility information can be obtained by ray tracing in the sampled directions. However, area coherence makes it more efficient to project and scan-convert the scene onto a number of planes, for example, the faces of a hemicube. The hemicube faces have traditionally been divided into equal square pixels, but more general subdivisions are practical, and can reduce the variance of the form factor estimates. The hemicube estimates of form factors are based on a finite set of sample directions. We obtain several optimal arrangements of sample directions, which minimize the variance of this estimate. Four approaches are considered: changing the size of the pixels, the shape of the pixels, or the shape of the hemicube, or using non-uniform pixel grids. The best approach reduces the variance by 43%. The variance calculation is based on the assumption that the errors in the estimate are caused by the projections of single edges of polygonal patches, and that the positions and orientations of these edges are random.

  2. Optimal hemicube sampling

    SciTech Connect

    Max, N. (California Univ., Davis, CA)

    1992-12-17

    Radiosity algorithms for global illumination, either "gathering" or "shooting" versions, depend on the calculation of form factors. It is possible to calculate the form factors analytically, but this is difficult when occlusion is involved, so sampling methods are usually preferred. The necessary visibility information can be obtained by ray tracing in the sampled directions. However, area coherence makes it more efficient to project and scan-convert the scene onto a number of planes, for example, the faces of a hemicube. The hemicube faces have traditionally been divided into equal square pixels, but more general subdivisions are practical, and can reduce the variance of the form factor estimates. The hemicube estimates of form factors are based on a finite set of sample directions. We obtain several optimal arrangements of sample directions, which minimize the variance of this estimate. Four approaches are considered: changing the size of the pixels, the shape of the pixels, or the shape of the hemicube, or using non-uniform pixel grids. The best approach reduces the variance by 43%. The variance calculation is based on the assumption that the errors in the estimate are caused by the projections of single edges of polygonal patches, and that the positions and orientations of these edges are random.

  3. Randomization Does Not Help Much, Comparability Does

    PubMed Central

    Saint-Mont, Uwe

    2015-01-01

    According to R.A. Fisher, randomization “relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed.” Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable. PMID:26193621
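
    The paper's central quantitative claim, that random allocation in small samples routinely leaves the groups unbalanced on a covariate, is easy to reproduce by simulation; the sketch below measures the typical imbalance of a standard-normal nuisance factor in a hypothetical 20-patient trial.

      import numpy as np

      rng = np.random.default_rng(0)
      n, reps = 20, 10000
      imbalance = []
      for _ in range(reps):
          covariate = rng.normal(size=n)          # an unobserved nuisance factor
          arm = rng.permutation(n) < n // 2       # 10 vs 10 random allocation
          imbalance.append(covariate[arm].mean() - covariate[~arm].mean())

      # Mean absolute difference in group means; roughly 0.36 standard
      # deviations for n = 20, i.e. imbalance is the rule, not the exception.
      print(np.mean(np.abs(imbalance)))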

  4. On Random Numbers and Design

    ERIC Educational Resources Information Center

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  5. A discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Zhao, Haifa; Liu, Shutian

    2005-11-01

    We propose a discrete fractional random transform based on a generalization of the discrete fractional Fourier transform with an intrinsic randomness. Such a discrete fractional random transform inherits the excellent mathematical properties of the fractional Fourier transform along with some fantastic features of its own. As a primary application, the discrete fractional random transform has been used for image encryption and decryption.

  6. Uniform random number generators

    NASA Technical Reports Server (NTRS)

    Farr, W. R.

    1971-01-01

    Methods are presented for the generation of random numbers with uniform and normal distributions. Subprogram listings of Fortran generators for the Univac 1108, SDS 930, and CDC 3200 digital computers are also included. The generators are of the mixed multiplicative type, and the mathematical method employed is that of Marsaglia and Bray.

  7. Generating "Random" Integers

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2011-01-01

    One of the author's undergraduate students recently asked him whether it was possible to generate a random positive integer. After some thought, the author realised that there were plenty of interesting mathematical ideas inherent in her question. So much so in fact, that the author decided to organise a workshop, open both to undergraduates and…

  8. Random lattice superstrings

    SciTech Connect

    Feng Haidong; Siegel, Warren

    2006-08-15

    We propose some new simplifying ingredients for Feynman diagrams that seem necessary for random lattice formulations of superstrings. In particular, half the fermionic variables appear only in particle loops (similarly to loop momenta), reducing the supersymmetry of the constituents of the type IIB superstring to N=1, as expected from their interpretation in the 1/N expansion as super Yang-Mills.

  9. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1987-01-01

    The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the "bootstrap" in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their "kriging variance," provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.
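
    The "sample reuse" idea is generic and worth seeing in miniature: resample the data with replacement and recompute the statistic to estimate its variability. The sketch below is the plain bootstrap on synthetic data, not the paper's generalized-covariance application.

      import numpy as np

      def bootstrap(data, stat, n_boot=2000, seed=0):
          """Bootstrap estimate of the value and standard error of `stat`."""
          rng = np.random.default_rng(seed)
          n = data.size
          reps = [stat(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
          return np.mean(reps), np.std(reps)

      data = np.random.default_rng(1).lognormal(size=50)   # synthetic observations
      est, se = bootstrap(data, np.median)
      print("median ~", est, "+/-", se)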

  10. Bootstrapped models for intrinsic random functions

    SciTech Connect

    Campbell, K.

    1988-08-01

    Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.

  11. Summer School Effects in a Randomized Field Trial

    ERIC Educational Resources Information Center

    Zvoch, Keith; Stevens, Joseph J.

    2013-01-01

    This field-based randomized trial examined the effect of assignment to and participation in summer school for two moderately at-risk samples of struggling readers. Application of multiple regression models to difference scores capturing the change in summer reading fluency revealed that kindergarten students randomly assigned to summer school…

  12. Relativistic Weierstrass random walks.

    PubMed

    Saa, Alberto; Venegeroles, Roberto

    2010-08-01

    The Weierstrass random walk is a paradigmatic Markov chain giving rise to a Lévy-type superdiffusive behavior. It is well known that special relativity prevents the arbitrarily high velocities necessary to establish a superdiffusive behavior in any process occurring in Minkowski spacetime, implying, in particular, that any relativistic Markov chain describing spacetime phenomena must be essentially Gaussian. Here, we introduce a simple relativistic extension of the Weierstrass random walk and show that there must exist a transition time t_c delimiting two qualitatively distinct dynamical regimes: the (nonrelativistic) superdiffusive Lévy flights for t < t_c, and ordinary (Gaussian) diffusion for t > t_c. Implications of this crossover between different diffusion regimes are discussed for some explicit examples. The study of such an explicit and simple Markov chain can shed some light on several results obtained in much more involved contexts. PMID:20866862

  13. Random very loose packings.

    PubMed

    Ciamarra, Massimo Pica; Coniglio, Antonio

    2008-09-19

    We measure the number Omega(phi) of mechanically stable states of volume fraction phi of a granular assembly under gravity. The granular entropy S(phi) = log Omega(phi) vanishes both at high density, at phi approximately equal to phi_rcp, and at low density, at phi approximately equal to phi_rvlp, where phi_rvlp is a new lower bound we call the random very loose packing. phi_rlp is the volume fraction where the entropy is maximal. These findings allow for a clear explanation of compaction experiments and provide a first-principles definition of the random loose packing volume fraction. In the context of the statistical mechanics approach to static granular materials, states with phi

  14. Sampling properties of directed networks.

    PubMed

    Son, S-W; Christensen, C; Bizhani, G; Foster, D V; Grassberger, P; Paczuski, M

    2012-10-01

    For many real-world networks only a small "sampled" version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks. PMID:23214649
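
    BFS ("snowball") sampling of a directed graph is simple to state in code; the hedged sketch below, assuming networkx, collects a fixed number of nodes breadth-first from a seed and compares one property (reciprocity) between the sampled and complete graphs.

      import networkx as nx

      def bfs_sample(G, source, n_nodes):
          """Collect nodes breadth-first along out-edges until n_nodes are found."""
          visited = [source]
          for _, v in nx.bfs_edges(G, source):
              if len(visited) >= n_nodes:
                  break
              visited.append(v)
          return G.subgraph(visited).copy()

      G = nx.gnp_random_graph(2000, 0.005, directed=True, seed=0)
      sampled = bfs_sample(G, source=0, n_nodes=400)
      print("reciprocity, full vs BFS sample:",
            nx.reciprocity(G), nx.reciprocity(sampled))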

  15. Diffusion at the Random Matrix Hard Edge

    NASA Astrophysics Data System (ADS)

    Ramírez, José A.; Rider, Brian

    2009-06-01

    We show that the limiting minimal eigenvalue distributions for a natural generalization of Gaussian sample-covariance structures (beta ensembles) are described by the spectrum of a random diffusion generator. This generator may be mapped onto the “Stochastic Bessel Operator,” introduced and studied by A. Edelman and B. Sutton in [6] where the corresponding convergence was first conjectured. Here, by a Riccati transformation, we also obtain a second diffusion description of the limiting eigenvalues in terms of hitting laws. All this pertains to the so-called hard edge of random matrix theory and sits in complement to the recent work [15] of the authors and B. Virág on the general beta random matrix soft edge. In fact, the diffusion descriptions found on both sides are used below to prove there exists a transition between the soft and hard edge laws at all values of beta.

  16. Random number generation from spontaneous Raman scattering

    NASA Astrophysics Data System (ADS)

    Collins, M. J.; Clark, A. S.; Xiong, C.; Mägi, E.; Steel, M. J.; Eggleton, B. J.

    2015-10-01

    We investigate the generation of random numbers via the quantum process of spontaneous Raman scattering. Spontaneous Raman photons are produced by illuminating a highly nonlinear chalcogenide glass (As2S3) fiber with a CW laser at a power well below the stimulated Raman threshold. Single Raman photons are collected and separated into two discrete wavelength detuning bins of equal scattering probability. The sequence of photon detection clicks is converted into a random bit stream. Postprocessing is applied to remove detector bias, resulting in a final bit rate of ~650 kb/s. The collected random bit sequences pass the NIST statistical test suite for one hundred 1 Mb samples, with the significance level set to α = 0.01. The fiber is stable and robust, and the high nonlinearity (compared to silica) allows for a short fiber length and low pump power, favourable for real-world applications.
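
    The abstract does not specify the debiasing scheme, so as a stand-in the sketch below applies the classic von Neumann extractor, which removes bias from independent bits at the cost of output rate.

      import random

      def von_neumann_extract(bits):
          """Map bit pairs 01 -> 0 and 10 -> 1; discard 00 and 11."""
          return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]

      raw = [1 if random.random() < 0.6 else 0 for _ in range(100000)]  # biased source
      clean = von_neumann_extract(raw)
      print("bias before:", sum(raw) / len(raw), "after:", sum(clean) / len(clean))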

  17. Random lasing with spatially nonuniform gain

    NASA Astrophysics Data System (ADS)

    Fan, Ting; Lü, Jiantao

    2016-07-01

    The spatial and spectral properties of random lasing with spatially nonuniform gain were investigated in a two-dimensional (2D) disordered medium. The pumping light was described by an individual electric field and coupled into the rate equations through the polarization equation. The spatially nonuniform gain comes from the multiple scattering of this pumping light. Numerical simulations of the random system with uniform and nonuniform gain were performed in both the weak and strong scattering regimes. In the weak scattering sample, all the lasing modes correspond to those of the passive system whether or not the nonuniform gain is considered. In the strong scattering regime, however, new lasing modes appear with nonuniform gain as the localization area changes. Our results show that random lasing behavior is described more accurately when the nonuniform gain originating from multiple light scattering is introduced.

  18. Fatigue strength reduction model: RANDOM3 and RANDOM4 user manual. Appendix 2: Development of advanced methodologies for probabilistic constitutive relationships of material strength models

    NASA Technical Reports Server (NTRS)

    Boyce, Lola; Lovelace, Thomas B.

    1989-01-01

    FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.

  19. A random number generator for continuous random variables

    NASA Technical Reports Server (NTRS)

    Guerra, V. M.; Tapia, R. A.; Thompson, J. R.

    1972-01-01

    A FORTRAN 4 routine is given which may be used to generate random observations of a continuous real valued random variable. Normal distribution of F(x), X, E(akimas), and E(linear) is presented in tabular form.

  20. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
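
    For reference, the linear congruential recurrence x_{n+1} = (a*x_n + c) mod m is tiny to implement; the parameters below are the well-known Numerical Recipes constants, not necessarily those selected in the report.

      class LCG:
          """Minimal linear congruential generator."""
          def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
              self.x, self.a, self.c, self.m = seed, a, c, m

          def next_float(self):
              """Advance the recurrence and scale into [0, 1)."""
              self.x = (self.a * self.x + self.c) % self.m
              return self.x / self.m

      g = LCG(seed=12345)
      print([round(g.next_float(), 4) for _ in range(5)])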

  1. Sampling properties of directed networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Christensen, C.; Bizhani, G.; Foster, D. V.; Grassberger, P.; Paczuski, M.

    2012-10-01

    For many real-world networks only a small “sampled” version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks.

  2. Generation of kth-order random toposequences

    NASA Astrophysics Data System (ADS)

    Odgers, Nathan P.; McBratney, Alex. B.; Minasny, Budiman

    2008-05-01

    The model presented in this paper derives toposequences from a digital elevation model (DEM). It is written in ArcInfo Macro Language (AML). The toposequences are called kth-order random toposequences, because they take a random path uphill to the top of a hill and downhill to a stream or valley bottom from a randomly selected seed point, and they are located in a streamshed of order k according to a particular stream-ordering system. We define a kth-order streamshed as the area of land that drains directly to a stream segment of stream order k. The model attempts to optimise the spatial configuration of a set of derived toposequences iteratively by using simulated annealing to maximise the total sum of distances between each toposequence hilltop in the set. The user is able to select the order, k, of the derived toposequences. Toposequences are useful for determining soil sampling locations for use in collecting soil data for digital soil mapping applications. Sampling locations can be allocated according to equal elevation or equal-distance intervals along the length of the toposequence, for example. We demonstrate the use of this model for a study area in the Hunter Valley of New South Wales, Australia. Of the 64 toposequences derived, 32 were first-order random toposequences according to Strahler's stream-ordering system, and 32 were second-order random toposequences. The model that we present in this paper is an efficient method for sampling soil along soil toposequences. The soils along a toposequence are related to each other by the topography they are found in, so soil data collected by this method is useful for establishing soil-landscape rules for the preparation of digital soil maps.

  3. Attenuation of species abundance distributions by sampling.

    PubMed

    Shimadzu, Hideyasu; Darnell, Ross

    2015-04-01

    Quantifying biodiversity aspects such as species presence/absence, richness, and abundance is an important challenge in answering scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort needed to investigate large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
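
    The attenuation mechanism can be seen directly: under random sub-sampling each individual is retained independently, so observed counts are binomially thinned copies of the true abundances and rare species drop out. A minimal sketch with an invented community:

      import numpy as np

      rng = np.random.default_rng(0)
      abundance = rng.geometric(p=0.02, size=200)     # synthetic 200-species community

      def subsample(abundance, fraction, rng):
          """Binomial thinning: each individual is kept with prob. `fraction`."""
          return rng.binomial(abundance, fraction)

      observed = subsample(abundance, fraction=0.1, rng=rng)
      print("true richness:    ", int((abundance > 0).sum()))
      print("observed richness:", int((observed > 0).sum()))   # rare species vanish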

  4. Attenuation of species abundance distributions by sampling

    PubMed Central

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

    Quantifying biodiversity aspects such as species presence/absence, richness, and abundance is an important challenge in answering scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort needed to investigate large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626

  5. On grey levels in random CAPTCHA generation

    NASA Astrophysics Data System (ADS)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.

  6. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  7. Beta-binomial ANOVA for multivariate randomized response data.

    PubMed

    Fox, Jean-Paul

    2008-11-01

    There is much empirical evidence that randomized response methods improve the cooperation of the respondents when asking sensitive questions. The traditional methods for analysing randomized response data are restricted to univariate data and only allow inferences at the group level due to the randomized response sampling design. Here, a novel beta-binomial model is proposed for analysing multivariate individual count data observed via a randomized response sampling design. This new model allows for the estimation of individual response probabilities (response rates) for multivariate randomized response data utilizing an empirical Bayes approach. A common beta prior specifies that individuals in a group are tied together and the beta prior parameters are allowed to be cluster-dependent. A Bayes factor is proposed to test for group differences in response rates. An analysis of a cheating study, where 10 items measure cheating or academic dishonesty, is used to illustrate application of the proposed model. PMID:17612461
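
    The paper's beta-binomial model builds on the classic randomized response design; the sketch below simulates the simpler Warner design and its moment estimator, as background rather than as the model proposed here.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p, pi_true = 2000, 0.7, 0.15   # design prob. p and true sensitive rate

      # Warner's design: with prob. p the respondent answers the sensitive
      # question, otherwise its complement; the interviewer never knows which.
      sensitive = rng.random(n) < pi_true
      direct = rng.random(n) < p
      answer = np.where(direct, sensitive, ~sensitive)

      # E[answer] = p*pi + (1-p)*(1-pi), so invert for an unbiased estimate:
      pi_hat = (answer.mean() - (1 - p)) / (2 * p - 1)
      print(round(pi_hat, 3))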

  8. Index statistical properties of sparse random graphs

    NASA Astrophysics Data System (ADS)

    Metz, F. L.; Stariolo, Daniel A.

    2015-10-01

    Using the replica method, we develop an analytical approach to compute the characteristic function for the probability P_N(K, λ) that a large N × N adjacency matrix of sparse random graphs has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of P_N(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdős-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.

  9. Creating ensembles of decision trees through sampling

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data; splitting the data; and combining multiple decision trees in ensembles.

  10. Random numbers from vacuum fluctuations

    NASA Astrophysics Data System (ADS)

    Shi, Yicheng; Chng, Brenda; Kurtsiefer, Christian

    2016-07-01

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.

  11. Cluster Randomized Controlled Trial

    PubMed Central

    Young, John; Chapman, Katie; Nixon, Jane; Patel, Anita; Holloway, Ivana; Mellish, Kirste; Anwar, Shamaila; Breen, Rachel; Knapp, Martin; Murray, Jenni; Farrin, Amanda

    2015-01-01

    Background and Purpose— We developed a new postdischarge system of care comprising a structured assessment covering longer-term problems experienced by patients with stroke and their carers, linked to evidence-based treatment algorithms and reference guides (the longer-term stroke care system of care) to address the poor longer-term recovery experienced by many patients with stroke. Methods— A pragmatic, multicentre, cluster randomized controlled trial of this system of care. Eligible patients referred to community-based Stroke Care Coordinators were randomized to receive the new system of care or usual practice. The primary outcome was improved patient psychological well-being (General Health Questionnaire-12) at 6 months; secondary outcomes included functional outcomes for patients, carer outcomes, and cost-effectiveness. Follow-up was through self-completed postal questionnaires at 6 and 12 months. Results— Thirty-two stroke services were randomized (29 participated); 800 patients (399 control; 401 intervention) and 208 carers (100 control; 108 intervention) were recruited. In intention to treat analysis, the adjusted difference in patient General Health Questionnaire-12 mean scores at 6 months was −0.6 points (95% confidence interval, −1.8 to 0.7; P=0.394) indicating no evidence of statistically significant difference between the groups. Costs of Stroke Care Coordinator inputs, total health and social care costs, and quality-adjusted life year gains at 6 months, 12 months, and over the year were similar between the groups. Conclusions— This robust trial demonstrated no benefit in clinical or cost-effectiveness outcomes associated with the new system of care compared with usual Stroke Care Coordinator practice. Clinical Trial Registration— URL: http://www.controlled-trials.com. Unique identifier: ISRCTN 67932305. PMID:26152298

  12. Random recursive trees and the elephant random walk

    NASA Astrophysics Data System (ADS)

    Kürsten, Rüdiger

    2016-03-01

    One class of random walks with infinite memory, so-called elephant random walks, are simple models describing anomalous diffusion. We present a surprising connection between these models and bond percolation on random recursive trees. We use a coupling between the two models to translate results from elephant random walks to the percolation process. We calculate, besides other quantities, exact expressions for the first and the second moment of the root cluster size and of the number of nodes in child clusters of the first generation. We further introduce another model, the skew elephant random walk, and calculate the first and second moment of this process.
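
    The walk itself is a two-line recursion; the sketch below simulates it with memory parameter p (p > 3/4 is the superdiffusive regime discussed in the literature).

      import numpy as np

      def elephant_walk(n_steps, p, seed=0):
          """Elephant random walk: repeat a uniformly chosen past step with
          probability p, otherwise reverse it."""
          rng = np.random.default_rng(seed)
          steps = [rng.choice([-1, 1])]
          for _ in range(n_steps - 1):
              past = steps[rng.integers(0, len(steps))]
              steps.append(past if rng.random() < p else -past)
          return np.cumsum(steps)

      walk = elephant_walk(10000, p=0.85)
      print(walk[-1])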

  13. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate...

  14. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate...

  15. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate...

  16. 40 CFR 761.355 - Third level of sample selection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate...
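
    The selection step in the excerpts above amounts to one uniform draw; a trivial sketch (the portion labels are hypothetical):

      import random

      portions = [f"portion_{i:02d}" for i in range(1, 11)]   # 100 gram portions
      # "Use a random number generator or random number table to select one..."
      print(random.choice(portions))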

  17. Coloring random graphs.

    PubMed

    Mulet, R; Pagnani, A; Weigt, M; Zecchina, R

    2002-12-23

    We study the graph coloring problem over random graphs of finite average connectivity c. Given a number q of available colors, we find that graphs with low connectivity admit almost always a proper coloring, whereas graphs with high connectivity are uncolorable. Depending on q, we find the precise value of the critical average connectivity c_q. Moreover, we show that below c_q there exists a clustering phase c in [c_d, c_q] in which ground states spontaneously divide into an exponential number of clusters and where the proliferation of metastable states is responsible for the onset of complexity in local search algorithms. PMID:12484862

  18. Can randomization be informative?

    NASA Astrophysics Data System (ADS)

    Pereira, Carlos A. B.; Campos, Thiago F.; Silva, Gustavo M.; Wechsler, Sergio

    2012-10-01

    In this paper the Pair of Siblings Paradox introduced by Pereira [1] is extended by considering more than two children and more than one child observed for gender. We follow the same lines as Wechsler et al. [2], who generalize the three prisoners' dilemma introduced by Gardner [3]. This paper's conjecture is that the Pair of Siblings and the Three Prisoners dilemma are dual paradoxes. Looking at possible likelihoods, the sure (randomized) selection for the former is non-informative (informative), with the opposite holding for the latter. This situation is maintained for generalizations. A non-informative likelihood here means that prior and posterior are equal.

  19. Randomized Response Analysis in Mplus

    ERIC Educational Resources Information Center

    Hox, Joop; Lensvelt-Mulders, Gerty

    2004-01-01

    This article describes a technique to analyze randomized response data using available structural equation modeling (SEM) software. The randomized response technique was developed to obtain estimates that are more valid when studying sensitive topics. The basic feature of all randomized response methods is that the data are deliberately…

  20. Random Numbers and Quantum Computers

    ERIC Educational Resources Information Center

    McCartney, Mark; Glass, David

    2002-01-01

    The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
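
    To make the two classical generators mentioned in this record concrete, here is a minimal Python sketch of a linear congruential generator and the logistic map; the LCG constants are the common Numerical Recipes choices, an assumption rather than anything specified in the article.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m  # scaled to [0, 1)

def logistic_map(x0, r=4.0):
    """Logistic map x_{n+1} = r*x_n*(1 - x_n); chaotic for r = 4."""
    x = x0
    while True:
        x = r * x * (1.0 - x)
        yield x

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])    # pseudo-random uniforms
chaos = logistic_map(0.3141)
print([round(next(chaos), 4) for _ in range(5)])  # deterministic chaos
```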

  1. Modelling Of Random Vertical Irregularities Of Railway Tracks

    NASA Astrophysics Data System (ADS)

    Podwórna, M.

    2015-08-01

The study presents the state of the art in analytical and numerical modelling of random vertical irregularities of continuously welded ballasted railway tracks. The common model of railway track irregularity vertical profiles is applied, in the form of a stationary and ergodic Gaussian process in space. Random samples of track irregularity vertical profiles are generated with the Monte Carlo method. Based on the numerical method developed in the study, the minimum and recommended sampling numbers required in the random analysis of railway bridges, and the number of frequency increments (harmonic components) in the simulation of track irregularity vertical profiles, are determined. The lower and upper wavelength limits are taken from the literature. The approach yields track irregularity random samples close to reality. The track irregularity model developed in the study can be used in the dynamic analysis of railway bridge / track structure / high-speed train systems.
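
    Monte Carlo generation of such profiles is commonly done by spectral representation: a sum of harmonic components with random phases whose amplitudes follow a prescribed power spectral density. The Python sketch below illustrates the idea; the PSD shape, wavelength limits, and names are illustrative assumptions, not the standard adopted in the paper.

```python
import numpy as np

def irregularity_profile(x, psd, n_freq=200, lam_min=1.0, lam_max=100.0, seed=None):
    """Sample a stationary Gaussian track-irregularity profile as a sum of
    harmonics with independent uniform random phases (spectral representation)."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(2*np.pi/lam_max, 2*np.pi/lam_min, n_freq)  # spatial frequencies
    d_omega = omega[1] - omega[0]
    amp = np.sqrt(2.0 * psd(omega) * d_omega)   # harmonic amplitudes from the PSD
    phi = rng.uniform(0.0, 2*np.pi, n_freq)     # random phases
    return np.sum(amp[:, None] * np.cos(omega[:, None] * x + phi[:, None]), axis=0)

psd = lambda w: 1e-6 / (w**2 + 0.01)  # illustrative PSD shape only
x = np.linspace(0.0, 500.0, 2001)     # track coordinate in metres
profile = irregularity_profile(x, psd, seed=1)
print(profile.std())
```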

  2. Sampling design for face recognition

    NASA Astrophysics Data System (ADS)

    Yan, Yanjun; Osadciw, Lisa A.

    2006-04-01

A face recognition system consists of two integrated parts: the face recognition algorithm itself, and the classifier and features that the algorithm derives from a data set. The face recognition algorithm plays the central role, but this paper does not aim at evaluating the algorithm; rather, it derives the best features for the algorithm from a specific database through sampling design of the training set, which directs how the sample should be collected and dictates the sample space. Sampling design can help exert the full potential of the face recognition algorithm without an overhaul. Conventional statistical analysis usually assumes some distribution in order to draw inferences, but design-based inference assumes neither a distribution of the data nor independence between the sample observations. The simulations illustrate that the systematic sampling scheme performs better than the simple random sampling scheme, and that systematic sampling is comparable, in recognition performance, to using all available training images. Meanwhile, the sampling schemes save system resources and alleviate the overfitting problem. However, post-stratification by sex is not shown to significantly improve recognition performance.
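
    The contrast between the two schemes is easy to state in code. The Python sketch below selects training indices by simple random sampling and by systematic sampling from an ordered frame; frame and sample sizes are arbitrary, and this is generic index selection, not the paper's exact protocol.

```python
import random

def simple_random_sample(n_items, k, seed=None):
    """Draw k indices uniformly without replacement."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_items), k))

def systematic_sample(n_items, k, seed=None):
    """Take every (n/k)-th index after a random start, preserving the
    ordering of the frame (e.g. images sorted by subject and pose)."""
    rng = random.Random(seed)
    step = n_items / k
    start = rng.uniform(0, step)
    return [int(start + i * step) for i in range(k)]

print(simple_random_sample(1000, 10, seed=7))
print(systematic_sample(1000, 10, seed=7))
```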

  3. Random sphere packing model of heterogeneous propellants

    NASA Astrophysics Data System (ADS)

    Kochevets, Sergei Victorovich

It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined, and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion. In addition, a method of analysis of the morphology of heterogeneous
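
    For flavor, here is a much simpler random sequential addition sketch in Python. This is not the growth-rate-based algorithm developed in the thesis, just a minimal illustration of placing non-overlapping equal spheres at random in a box.

```python
import random

def random_sphere_pack(n_target, radius, box=1.0, max_tries=200000, seed=None):
    """Random sequential addition: propose uniform random centers and keep
    those that do not overlap previously placed spheres."""
    rng = random.Random(seed)
    centers = []
    tries = 0
    while len(centers) < n_target and tries < max_tries:
        tries += 1
        p = tuple(rng.uniform(radius, box - radius) for _ in range(3))
        if all(sum((a - b)**2 for a, b in zip(p, q)) >= (2*radius)**2
               for q in centers):
            centers.append(p)
    return centers

pack = random_sphere_pack(200, 0.05, seed=4)
print(len(pack), "spheres placed")
```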

  4. Random-walk enzymes

    NASA Astrophysics Data System (ADS)

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-09-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C →U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.

  5. Randomness in Competitions

    NASA Astrophysics Data System (ADS)

    Ben-Naim, E.; Hengartner, N. W.; Redner, S.; Vazquez, F.

    2013-05-01

We study the effects of randomness on competitions based on an elementary random process in which there is a finite probability that a weaker team upsets a stronger team. We apply this model to sports leagues and sports tournaments, and compare the theoretical results with empirical data. Our model shows that single-elimination tournaments are efficient but unfair: the number of games is proportional to the number of teams N, but the probability that the weakest team wins decays only algebraically with N. In contrast, leagues, where every team plays every other team, are fair but inefficient: the top √N teams remain in contention for the championship, while the probability that the weakest team becomes champion is exponentially small. We also propose a gradual elimination schedule that consists of a preliminary round and a championship round. Initially, teams play a small number of preliminary games, and subsequently, a few teams qualify for the championship round. This algorithm is fair and efficient: the best team wins with a high probability and the number of games scales as N^{9/5}, whereas traditional leagues require N^3 games to fairly determine a champion.
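
    The upset model is straightforward to simulate. The Python sketch below plays single-elimination brackets in which the weaker team wins any game with a fixed upset probability; the bracket size and upset probability are illustrative values.

```python
import random

def single_elimination(n_teams, upset_prob, rng):
    """Champion of a random single-elimination bracket in which the
    weaker team (larger index) wins each game with probability upset_prob."""
    teams = list(range(n_teams))   # index 0 is the strongest team
    rng.shuffle(teams)             # random initial bracket
    while len(teams) > 1:
        winners = []
        for a, b in zip(teams[::2], teams[1::2]):
            strong, weak = (a, b) if a < b else (b, a)
            winners.append(weak if rng.random() < upset_prob else strong)
        teams = winners
    return teams[0]

rng = random.Random(0)
trials = 20000
wins = sum(single_elimination(64, 0.25, rng) == 63 for _ in range(trials))
print("weakest-team championship rate:", wins / trials)
```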

  6. Random rough surface photofabrication

    NASA Astrophysics Data System (ADS)

    Brissonneau, Vincent; Escoubas, Ludovic; Flory, François; Berginc, Gérard

    2011-10-01

    Random rough surfaces are of primary interest for their optical properties: reducing reflection at the interface or obtaining specific scattering diagram for example. Thus controlling surface statistics during the fabrication process paves the way to original and specific behaviors of reflected optical waves. We detail an experimental method allowing the fabrication of random rough surfaces showing tuned statistical properties. A two-step photoresist exposure process was developed. In order to initiate photoresist polymerization, an energy threshold needs to be reached by light exposure. This energy is brought by a uniform exposure equipment comprising UV-LEDs. This pre-exposure is studied by varying parameters such as optical power and exposure time. The second step consists in an exposure based on the Gray method.1 The speckle pattern of an enlarged scattered laser beam is used to insolate the photoresist. A specific photofabrication bench using an argon ion laser was implemented. Parameters such as exposure time and distances between optical components are discussed. Then, we describe how we modify the speckle-based exposure bench to include a spatial light modulator (SLM). The SLM used is a micromirror matrix known as Digital Micromirror Device (DMD) which allows spatial modulation by displaying binary images. Thus, the spatial beam shape can be tuned and so the speckle pattern on the photoresist is modified. As the photoresist photofabricated surface is correlated to the speckle pattern used to insolate, the roughness parameters can be adjusted.

  7. Random-walk enzymes

    PubMed Central

    Mak, Chi H.; Pham, Phuong; Afif, Samir A.; Goodman, Myron F.

    2015-01-01

    Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C → U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics. PMID:26465508

  8. Mapping in random-structures

    SciTech Connect

    Reidys, C.M.

    1996-06-01

A mapping in random-structures is defined on the vertices of a generalized hypercube Q_α^n. A random-structure consists of (1) a random contact graph and (2) a family of relations imposed on adjacent vertices. The vertex set of a random contact graph is the set of all coordinates of a vertex P ∈ Q_α^n. Its edge set is the union of the edge sets of two random graphs: the first is a random 1-regular graph on 2m vertices (coordinates), and the second is a random graph G_p with p = c_2/n on all n vertices (coordinates). The structure of the random contact graphs is investigated, and it is shown that for certain values of m and c_2 the mapping in random-structures allows searching through the set of random-structures. This is applied to mappings in RNA secondary structures. The results on random-structures might also be helpful for designing 3D-folding algorithms for RNA.

  9. Cluster randomized trials for pharmacy practice research.

    PubMed

    Gums, Tyler; Carter, Barry; Foster, Eric

    2016-06-01

Introduction: Cluster randomized trials (CRTs) are now the gold standard in health services research, including pharmacy-based interventions. Studies of behaviour, epidemiology, lifestyle modifications, educational programs, and health care models utilize the strengths of cluster randomized analyses. Methodology: The key property of CRTs is the unit of randomization (clusters), which may be different from the unit of analysis (individual). Subject sample size and, ideally, the number of clusters are determined by the relationship of between-cluster and within-cluster variability. The correlation among participants recruited from the same cluster is known as the intraclass correlation coefficient (ICC). Generally, having more clusters with smaller ICC values leads to smaller sample sizes. When selecting clusters, stratification before randomization may be useful in decreasing imbalances between study arms. Participant recruitment methods can differ from other types of randomized trials, as blinding a behavioural intervention cannot always be done. When to use: CRTs can yield results that are relevant for making "real world" decisions, and they are often used in non-therapeutic intervention studies (e.g. changes in practice guidelines). The advantages of the CRT design in pharmacy research have been the avoidance of contamination and the generalizability of the results. A large CRT that studied physician-pharmacist collaborative management of hypertension is used in this manuscript as an example. The trial, entitled Collaboration Among Pharmacists and physicians To Improve Outcomes Now (CAPTION), was implemented in primary care offices in the United States for hypertensive patients. Limitations: The limitations of the CRT design include the need for a large number of clusters, high costs, increased training, increased monitoring, and statistical complexity. PMID:26715549

  10. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small set of high-degree vertices can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (a scale-free network, a random network, and a small-world network) and on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
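
    A minimal sketch of the degree-focused stratification idea, assuming the networkx library is available; the stratum cutoff and budget split are illustrative choices, not the parameters proposed in the paper.

```python
import random
import networkx as nx

def degree_stratified_sample(G, frac, top_share=0.5, top_frac=0.1, seed=None):
    """Split nodes into a high-degree stratum (top_frac by degree) and the
    rest, then spend top_share of the sampling budget on the high stratum."""
    rng = random.Random(seed)
    nodes = sorted(G.nodes, key=G.degree, reverse=True)
    k = max(1, int(frac * G.number_of_nodes()))
    cut = max(1, int(top_frac * len(nodes)))
    hi, lo = nodes[:cut], nodes[cut:]
    k_hi = min(len(hi), int(top_share * k))
    return rng.sample(hi, k_hi) + rng.sample(lo, k - k_hi)

G = nx.barabasi_albert_graph(1000, 3, seed=1)  # scale-free test network
sample = degree_stratified_sample(G, frac=0.05, seed=1)
print(len(sample), "nodes sampled")
```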

  11. Structure of random foam.

    SciTech Connect

    Reinelt, Douglas A.; van Swol, Frank B.; Kraynik, Andrew Michael

    2004-06-01

    The Surface Evolver was used to compute the equilibrium microstructure of dry soap foams with random structure and a wide range of cell-size distributions. Topological and geometric properties of foams and individual cells were evaluated. The theory for isotropic Plateau polyhedra describes the dependence of cell geometric properties on their volume and number of faces. The surface area of all cells is about 10% greater than a sphere of equal volume; this leads to a simple but accurate theory for the surface free energy density of foam. A novel parameter based on the surface-volume mean bubble radius R32 is used to characterize foam polydispersity. The foam energy, total cell edge length, and average number of faces per cell all decrease with increasing polydispersity. Pentagonal faces are the most common in monodisperse foam but quadrilaterals take over in highly polydisperse structures.

  12. Accelerated randomized benchmarking

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Cory, D. G.

    2015-01-01

    Quantum information processing offers promising advances for a wide range of fields and applications, provided that we can efficiently assess the performance of the control applied in candidate systems. That is, we must be able to determine whether we have implemented a desired gate, and refine accordingly. Randomized benchmarking reduces the difficulty of this task by exploiting symmetries in quantum operations. Here, we bound the resources required for benchmarking and show that, with prior information, we can achieve several orders of magnitude better accuracy than in traditional approaches to benchmarking. Moreover, by building on state-of-the-art classical algorithms, we reach these accuracies with near-optimal resources. Our approach requires an order of magnitude less data to achieve the same accuracies and to provide online estimates of the errors in the reported fidelities. We also show that our approach is useful for physical devices by comparing to simulations.

  13. Employing online quantum random number generators for generating truly random quantum states in Mathematica

    NASA Astrophysics Data System (ADS)

    Miszczak, Jarosław Adam

    2013-01-01

numbers generated by quantum real number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. Using this service allows the presented package to be used without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the source in use. This increases the speed of random number generation, especially in the case of the on-line service, where it reduces the time necessary to establish the connection. Thanks to this speed improvement, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudo-random numbers provided in Mathematica. Additional comments: Speed comparison: the implementation of support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that use of the on-line service provides faster access to random numbers. One should note, however, that the speed gain can increase or

  14. How random are random numbers generated using photons?

    NASA Astrophysics Data System (ADS)

    Solis, Aldo; Angulo Martínez, Alí M.; Ramírez Alarcón, Roberto; Cruz Ramírez, Hector; U'Ren, Alfred B.; Hirsch, Jorge G.

    2015-06-01

    Randomness is fundamental in quantum theory, with many philosophical and practical implications. In this paper we discuss the concept of algorithmic randomness, which provides a quantitative method to assess the Borel normality of a given sequence of numbers, a necessary condition for it to be considered random. We use Borel normality as a tool to investigate the randomness of ten sequences of bits generated from the differences between detection times of photon pairs generated by spontaneous parametric downconversion. These sequences are shown to fulfil the randomness criteria without difficulties. As deviations from Borel normality for photon-generated random number sequences have been reported in previous work, a strategy to understand these diverging findings is outlined.
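
    A toy Borel-normality check can be written in a few lines: count non-overlapping k-bit blocks and compare their frequencies with 2^-k. The deviation bound below loosely follows Calude's formulation and is an assumption, not the paper's exact criterion.

```python
import itertools
import math
import random
from collections import Counter

def borel_normality_check(bits, max_k=3):
    """Every k-bit block should occur with frequency within eps of 2^-k,
    with eps = sqrt(log2(n)/n) (assumed bound)."""
    n = len(bits)
    eps = math.sqrt(math.log2(n) / n)
    for k in range(1, max_k + 1):
        blocks = [tuple(bits[i:i + k]) for i in range(0, n - k + 1, k)]
        m = len(blocks)
        counts = Counter(blocks)
        for pattern in itertools.product((0, 1), repeat=k):
            if abs(counts[pattern] / m - 2.0**-k) > eps:
                return False
    return True

bits = [random.getrandbits(1) for _ in range(2**16)]
print(borel_normality_check(bits))
```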

  15. Differential Cost Avoidance and Successful Criminal Careers: Random or Rational?

    ERIC Educational Resources Information Center

    Kazemian, Lila; Le Blanc, Marc

    2007-01-01

    Using a sample of adjudicated French Canadian males from the Montreal Two Samples Longitudinal Study, this article investigates individual and social characteristics associated with differential cost avoidance. The main objective of this study is to determine whether such traits are randomly distributed across differential degrees of cost…

  16. Random-phase metasurfaces at optical wavelengths

    PubMed Central

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.

    2016-01-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635

  17. Random-phase metasurfaces at optical wavelengths.

    PubMed

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P; Bozhevolnyi, Sergey I

    2016-01-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635

  18. Random-phase metasurfaces at optical wavelengths

    NASA Astrophysics Data System (ADS)

    Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.

    2016-06-01

    Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector.

  19. Efficient robust conditional random fields.

    PubMed

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs. PMID:26080050

  20. Stratified Random Design.

    1988-02-19

Version 00 STRADE generates matrices of experimental designs based on the Latin Hypercube Sampling technique, which can be applied to any kind of sensitivity analysis or system identification problem involving a large number of input variables. The program was developed for use in reactor safety probabilistic analyses.
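
    A minimal Latin Hypercube Sampling sketch in Python with NumPy, illustrating the technique STRADE is built on: each input variable's range is split into equal-probability strata, and every stratum is sampled exactly once.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Return an (n_samples, n_vars) design in [0, 1): for each variable,
    a random permutation assigns one sample to each of n_samples strata."""
    rng = np.random.default_rng(seed)
    jitter = rng.uniform(size=(n_samples, n_vars))         # position inside stratum
    strata = np.column_stack([rng.permutation(n_samples)   # one stratum per sample
                              for _ in range(n_vars)])
    return (strata + jitter) / n_samples

design = latin_hypercube(10, 4, seed=0)
print(design.shape)  # (10, 4); each column hits all ten strata once
```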

  1. Does Random Dispersion Help Survival?

    NASA Astrophysics Data System (ADS)

    Schinazi, Rinaldo B.

    2015-04-01

    Many species live in colonies that prosper for a while and then collapse. After the collapse the colony survivors disperse randomly and found new colonies that may or may not make it depending on the new environment they find. We use birth and death chains in random environments to model such a population and to argue that random dispersion is a superior strategy for survival.

  2. Functional proteins from a random-sequence library

    PubMed Central

    Keefe, Anthony D; Szostak, Jack W.

    2015-01-01

Functional primordial proteins presumably originated from random sequences, but it is not known how frequently functional, or even folded, proteins occur in collections of random sequences. Here we have used in vitro selection of messenger RNA displayed proteins, in which each protein is covalently linked through its carboxy terminus to the 3′ end of its encoding mRNA [1], to sample a large number of distinct random sequences. Starting from a library of 6 × 10^12 proteins each containing 80 contiguous random amino acids, we selected functional proteins by enriching for those that bind to ATP. This selection yielded four new ATP-binding proteins that appear to be unrelated to each other or to anything found in the current databases of biological proteins. The frequency of occurrence of functional proteins in random-sequence libraries appears to be similar to that observed for equivalent RNA libraries [2,3]. PMID:11287961

  3. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample the F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.

  4. A Random Variable Transformation Process.

    ERIC Educational Resources Information Center

    Scheuermann, Larry

    1989-01-01

    Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
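
    RANVAR itself is a BASIC program; as an illustration of how such variates are typically produced, here are inverse-transform generators for two of its seven distributions (exponential and triangular) in Python. The function names are ours, not RANVAR's.

```python
import math
import random

def exponential_variate(mean, rng=random):
    """Inverse-transform sampling: X = -mean * ln(1 - U)."""
    return -mean * math.log(1.0 - rng.random())

def triangular_variate(a, c, b, rng=random):
    """Triangular(a, mode c, b) by inverting the piecewise-quadratic CDF."""
    u = rng.random()
    if u < (c - a) / (b - a):
        return a + math.sqrt(u * (b - a) * (c - a))
    return b - math.sqrt((1.0 - u) * (b - a) * (b - c))

print(exponential_variate(2.0))
print(triangular_variate(0.0, 1.0, 4.0))
```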

  5. Diffusion in random networks

    NASA Astrophysics Data System (ADS)

    Padrino, Juan C.; Zhang, Duan Z.

    2015-11-01

The ensemble phase averaging technique is applied to model mass transport in a porous medium. The porous material is idealized as an ensemble of random networks, where each network consists of a set of junction points representing the pores and tortuous channels connecting them. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pore mass density. Instead of attempting to solve this equation, an equivalent set of partial differential equations is derived whose solution is sought numerically. As a test problem, we consider the one-dimensional diffusion of a substance from one end to the other in a bounded domain. For a statistically homogeneous and isotropic material, results show that for relatively large times the pore mass density evolution from the new theory is significantly delayed in comparison with the solution from the classical diffusion equation. In the short-time case, when the solution evolves with time as if the domain were semi-infinite, numerical results indicate that the pore mass density becomes a function of the similarity variable x t^{-1/4} rather than the x t^{-1/2} characteristic of classical diffusion. This result was verified analytically. Possible applications of this framework include flow in gas shales. Work supported by the LDRD project of LANL.

  6. Nonvolatile random access memory

    NASA Technical Reports Server (NTRS)

    Wu, Jiin-Chuan (Inventor); Stadler, Henry L. (Inventor); Katti, Romney R. (Inventor)

    1994-01-01

A nonvolatile magnetic random access memory can be achieved by an array of magnet-Hall effect (M-H) elements. The storage function is realized with a rectangular thin-film ferromagnetic material having an in-plane, uniaxial anisotropy and in-plane bipolar remanent magnetization states. The thin-film magnetic element is magnetized by a local applied field, whose direction is used to form either a 0 or 1 state. The element remains in the 0 or 1 state until a switching field is applied to change its state. The stored information is detected by a Hall-effect sensor which senses the fringing field from the magnetic storage element. The circuit design for addressing each cell includes transistor switches for providing a current of selected polarity to store a binary digit through a separate conductor overlying the magnetic element of the cell. To read out a stored binary digit, transistor switches are employed to provide a current through a row of Hall-effect sensors connected in series and enabling a differential voltage amplifier connected to all Hall-effect sensors of a column in series. To avoid read-out voltage errors due to shunt currents through resistive loads of the Hall-effect sensors of other cells in the same column, at least one transistor switch is provided between every pair of adjacent cells in every row; these switches are not turned on except in the row of the selected cell.

  7. Ferroelectric random access memories.

    PubMed

    Ishiwara, Hiroshi

    2012-10-01

    Ferroelectric random access memory (FeRAM) is a nonvolatile memory, in which data are stored using hysteretic P-E (polarization vs. electric field) characteristics in a ferroelectric film. In this review, history and characteristics of FeRAMs are first introduced. It is described that there are two types of FeRAMs, capacitor-type and FET-type, and that only the capacitor-type FeRAM is now commercially available. In chapter 2, properties of ferroelectric films are discussed from a viewpoint of FeRAM application, in which particular attention is paid to those of Pb(Zr,Ti)O3, SrBi2Ta2O9, and BiFeO3. Then, cell structures and operation principle of the capacitor-type FeRAMs are discussed in chapter 3. It is described that the stacked technology of ferroelectric capacitors and development of new materials with large remanent polarization are important for fabricating high-density memories. Finally, in chapter 4, the optimized gate structure in ferroelectric-gate field-effect transistors is discussed and experimental results showing excellent data retention characteristics are presented. PMID:23421123

  8. Randomized parcellation based inference.

    PubMed

    Da Mota, Benoit; Fritsch, Virgile; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Bromberg, Uli; Conrod, Patricia; Gallinat, Jürgen; Garavan, Hugh; Martinot, Jean-Luc; Nees, Frauke; Paus, Tomas; Pausova, Zdenka; Rietschel, Marcella; Smolka, Michael N; Ströhle, Andreas; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2014-04-01

Neuroimaging group analyses are used to relate inter-subject signal differences observed in brain imaging with behavioral or genetic variables and to assess risk factors of brain diseases. The lack of stability and of sensitivity of current voxel-based analysis schemes may however lead to non-reproducible results. We introduce a new approach to overcome the limitations of standard methods, in which active voxels are detected according to a consensus on several random parcellations of the brain images, while a permutation test controls the false positive risk. Both on synthetic and real data, this approach shows higher sensitivity, better accuracy and higher reproducibility than state-of-the-art methods. In a neuroimaging-genetic application, we find that it succeeds in detecting a significant association between a genetic variant next to the COMT gene and the BOLD signal in the left thalamus for a functional Magnetic Resonance Imaging contrast associated with incorrect responses of the subjects from a Stop Signal Task protocol. PMID:24262376

  9. Bell experiments with random destination sources

    SciTech Connect

    Sciarrino, Fabio; Mataloni, Paolo; Vallone, Giuseppe; Cabello, Adan

    2011-03-15

    It is generally assumed that sources randomly sending two particles to one or two different observers, random destination sources (RDSs), cannot be used for genuine quantum nonlocality tests because of the postselection loophole. We demonstrate that Bell experiments not affected by the postselection loophole may be performed with (i) an RDS and local postselection using perfect detectors, (ii) an RDS, local postselection, and fair sampling assumption with any detection efficiency, and (iii) an RDS and a threshold detection efficiency required to avoid the detection loophole. These results allow the adoption of RDS setups which are simpler and more efficient for long-distance free-space Bell tests, and extend the range of physical systems which can be used for loophole-free Bell tests.

  10. Phase transitions on random lattices: how random is topological disorder?

    PubMed

    Barghathi, Hatem; Vojta, Thomas

    2014-09-19

    We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω=(d-1)/(2d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d+1)ν>2 rather than the usual Harris criterion dν>2, making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d>1. These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. PMID:25279615

  11. Random distributed feedback fibre lasers

    NASA Astrophysics Data System (ADS)

    Turitsyn, Sergei K.; Babin, Sergey A.; Churkin, Dmitry V.; Vatnik, Ilya D.; Nikulin, Maxim; Podivilov, Evgenii V.

    2014-09-01

The concept of random lasers, which exploit multiple scattering of photons in an amplifying disordered medium to generate coherent light without a traditional laser resonator, has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" in interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, a random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation

  12. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

Summary: In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298
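
    The pair-matched design discussed here is easy to sketch: rank clusters on a baseline similarity metric, pair adjacent clusters, and randomize treatment within each pair. The Python below is a minimal illustration with an invented covariate; it shows the design only, not the targeted minimum loss based estimators of the paper.

```python
import random

def pair_matched_assignment(clusters, covariate, seed=None):
    """Sort clusters by a baseline covariate, pair neighbours, and
    randomly assign treatment (1) to one member of each pair."""
    rng = random.Random(seed)
    ranked = sorted(clusters, key=covariate)
    assignment = {}
    for a, b in zip(ranked[::2], ranked[1::2]):
        treated = rng.choice([a, b])
        assignment[a] = int(a == treated)
        assignment[b] = int(b == treated)
    return assignment

communities = list(range(10))
baseline = {c: (c * 0.7) % 3 for c in communities}  # invented covariate
print(pair_matched_assignment(communities, baseline.get, seed=5))
```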

  13. Subrandom methods for multidimensional nonuniform sampling.

    PubMed

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. PMID:27301071

  14. Subrandom methods for multidimensional nonuniform sampling

    NASA Astrophysics Data System (ADS)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.
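
    One common subrandom (low-discrepancy) construction is the golden-ratio additive recurrence, which needs no seed at all. The sketch below maps it onto indices of a Nyquist grid; this is an illustrative construction, not necessarily one of the three subrandom methods compared in the paper.

```python
import math

def subrandom_schedule(n_points, grid_size):
    """Seed-free sampling schedule from the additive recurrence
    u_k = frac(k * phi), a standard subrandom (low-discrepancy) sequence."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0  # fractional golden ratio
    picks, seen, k = [], set(), 1
    while len(picks) < n_points:
        idx = int((k * phi % 1.0) * grid_size)  # map u_k to a grid index
        if idx not in seen:
            seen.add(idx)
            picks.append(idx)
        k += 1
    return sorted(picks)

print(subrandom_schedule(16, 128))  # same schedule on every run, no seed
```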

  15. Students' Misconceptions about Random Variables

    ERIC Educational Resources Information Center

    Kachapova, Farida; Kachapov, Ilias

    2012-01-01

    This article describes some misconceptions about random variables and related counter-examples, and makes suggestions about teaching initial topics on random variables in general form instead of doing it separately for discrete and continuous cases. The focus is on post-calculus probability courses. (Contains 2 figures.)

  16. Ticks of a Random clock

    NASA Astrophysics Data System (ADS)

    Jung, P.; Talkner, P.

    2010-09-01

    A simple way to convert a purely random sequence of events into a signal with a strong periodic component is proposed. The signal consists of those instants of time at which the length of the random sequence exceeds an integer multiple of a given number. The larger this number the more pronounced the periodic behavior becomes.
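
    The construction is easy to simulate: generate Poisson-distributed random events and emit a tick each time the running event count reaches another integer multiple of M. A minimal Python sketch with illustrative parameter values:

```python
import random

def random_clock(rate, M, horizon, seed=None):
    """Tick whenever the count of Poisson events (intensity `rate`)
    crosses another multiple of M; larger M gives more regular ticks."""
    rng = random.Random(seed)
    t, count, ticks = 0.0, 0, []
    while True:
        t += rng.expovariate(rate)  # exponential inter-event time
        if t >= horizon:
            return ticks
        count += 1
        if count % M == 0:
            ticks.append(t)

ticks = random_clock(rate=100.0, M=50, horizon=5.0, seed=1)
print([round(x, 3) for x in ticks])  # nearly equally spaced for large M
```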

  17. Computer generation of random deviates.

    PubMed

    Cormack, J; Shuter, B

    1991-06-01

    The need for random deviates arises in many scientific applications, such as the simulation of physical processes, numerical evaluation of complex mathematical formulae and the modeling of decision processes. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. PMID:1747086
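
    As one example of the kind of algorithm such a review covers, here is the Box-Muller transform, which converts two independent uniform deviates into two independent standard normal deviates. It is a standard textbook method, not necessarily the generator the paper recommends.

```python
import math
import random

def box_muller(rng=random):
    """Return two independent N(0, 1) deviates from two uniforms."""
    u1 = 1.0 - rng.random()  # in (0, 1]; avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

z1, z2 = box_muller()
print(z1, z2)
```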

  18. Randomness versus nonlocality and entanglement.

    PubMed

    Acín, Antonio; Massar, Serge; Pironio, Stefano

    2012-03-01

    The outcomes obtained in Bell tests involving two-outcome measurements on two subsystems can, in principle, generate up to 2 bits of randomness. However, the maximal violation of the Clauser-Horne-Shimony-Holt inequality guarantees the generation of only 1.23 bits of randomness. We prove here that quantum correlations with arbitrarily little nonlocality and states with arbitrarily little entanglement can be used to certify that close to the maximum of 2 bits of randomness are produced. Our results show that nonlocality, entanglement, and randomness are inequivalent quantities. They also imply that device-independent quantum key distribution with an optimal key generation rate is possible by using almost-local correlations and that device-independent randomness generation with an optimal rate is possible with almost-local correlations and with almost-unentangled states. PMID:22463395

  19. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  20. On the stability of robotic systems with random communication rates

    NASA Technical Reports Server (NTRS)

    Kobayashi, H.; Yun, X.; Paul, R. P.

    1989-01-01

Control problems of sampled data systems which are subject to random sample rate variations and delays are studied. Due to the rapid growth of the use of computers, more and more systems are controlled digitally. Complex systems such as space telerobotic systems require the integration of a number of subsystems at different hierarchical levels. While many subsystems may run on a single processor, some subsystems require their own processor or processors. The subsystems are integrated into functioning systems through communications. Communications between processes sharing a single processor are also subject to random delays due to memory management and interrupt latency. Communications between processors involve random delays due to network access and to data collisions. Furthermore, all control processes involve delays due to causal factors in measuring devices and to signal processing. Traditionally, sampling rates are chosen to meet the worst-case communication delay. Such a strategy is wasteful, as the processors are then idle a great proportion of the time; sample rates are not as high as possible, resulting in poor performance or in the over-specification of control processors; and there is the possibility of missing data no matter how low the sample rate is set. Asymptotic stability with probability one for randomly sampled multi-dimensional linear systems is studied. A sufficient condition for stability is obtained. This condition is simple enough to be applied to practical systems. A design procedure is also shown.

  1. Dynamic response of random parametered structures with random excitation. [DYNAMO

    SciTech Connect

    Branstetter, L.J.; Paez, T.L.

    1986-02-01

    A Taylor series expansion technique is used for numerical evaluation of the statistical response moments of a linear multidegree of freedom (MDF) system having random stiffness characteristics, when excited by either stationary or nonstationary random load components. Equations are developed for the cases of white noise loading and single step memory loading, and a method is presented to extend the solution to multistep memory loading. The equations are greatly simplified by the assumption that all random quantities are normally distributed. A computer program is developed to calculate the response moments of example systems. A program user's manual and listing (DYNAMO) are included. Future extensions of the work and potential applications are discussed.

  2. Suicidality in a Sample of Arctic Households

    ERIC Educational Resources Information Center

    Haggarty, John M.; Cernovsky, Zack; Bedard, Michel; Merskey, Harold

    2008-01-01

    We investigated the association of suicidal ideation and behavior with depression, anxiety, and alcohol abuse in a Canadian Arctic Inuit community. Inuit (N = 111) from a random sample of households completed assessments of anxiety and depression, alcohol abuse, and suicidality. High rates of suicidal ideation within the past week (43.6%), and…

  3. Measuring Book Availability: A Monthly Sampling Method.

    ERIC Educational Resources Information Center

    Norton, Mick; And Others

    1996-01-01

    Demonstrates that by using barcodes to identify random lists of books, taking small monthly samples, and updating a simple control chart, one can effectively: (1) pinpoint reasons why library books are unavailable; (2) gauge if corrective actions or changes in procedure are needed; and (3) determine if such actions are successful. Contains two…

  4. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)-the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  5. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
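
    A highly simplified sketch of the List/Search idea, assuming the networkx library is available. Real NSM weights the List-mode draws so that all revealed nodes attain the same cumulative selection probability; this toy version omits that bookkeeping.

```python
import random
import networkx as nx

def nsm_sample(G, start, n_sample, seed=None):
    """Interview nodes one at a time; prefer revealed nodes that border
    unexplored territory (Search mode), else draw uniformly from the
    revealed list (List mode). Each interview reveals the node's ties."""
    rng = random.Random(seed)
    sampled, revealed = set(), {start}
    while len(sampled) < n_sample and revealed - sampled:
        frontier = [v for v in revealed - sampled
                    if any(u not in revealed for u in G.neighbors(v))]
        pool = frontier if frontier else sorted(revealed - sampled)
        v = rng.choice(pool)
        sampled.add(v)
        revealed.update(G.neighbors(v))
    return sampled

G = nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=2)
print(len(nsm_sample(G, start=0, n_sample=50, seed=2)))
```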

  6. Sample-Based Surface Coloring

    PubMed Central

    Bürger, Kai; Krüger, Jens; Westermann, Rüdiger

    2011-01-01

    In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)—an extension of the Layered Depth Cube—as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392

  7. Sample-based surface coloring.

    PubMed

    Bürger, Kai; Krüger, Jens; Westermann, Rüdiger

    2010-01-01

    In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)-an extension of the Layered Depth Cube-as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392

  8. Variational Infinite Hidden Conditional Random Fields.

    PubMed

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja; Ghahramani, Zoubin

    2015-09-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An infinite hidden conditional random field is a hidden conditional random field with a countably infinite number of hidden states, which rids us not only of the necessity to specify a priori a fixed number of available hidden states but also of the problem of overfitting. Markov chain Monte Carlo (MCMC) sampling algorithms are often employed for inference in such models. However, convergence of such algorithms is rather difficult to verify, and as the complexity of the task at hand increases the computational cost of such algorithms often becomes prohibitive. These limitations can be overcome by variational techniques. In this paper, we present a generalized framework for infinite HCRF models, and a novel variational inference approach on a model based on coupled Dirichlet Process Mixtures, the HCRF-DPM. We show that the variational HCRF-DPM is able to converge to a correct number of represented hidden states, and performs as well as the best parametric HCRFs, chosen via cross-validation, for the difficult tasks of recognizing instances of agreement, disagreement, and pain in audiovisual sequences. PMID:26353136

  9. Fast phase randomization via two-folds

    PubMed Central

    Jeffrey, M. R.

    2016-01-01

    A two-fold is a singular point on the discontinuity surface of a piecewise-smooth vector field, at which the vector field is tangent to the discontinuity surface on both sides. If an orbit passes through an invisible two-fold (also known as a Teixeira singularity) before settling to regular periodic motion, then the phase of that motion cannot be determined from initial conditions, and, in the presence of small noise, the asymptotic phase of a large number of sample solutions is highly random. In this paper, we show how the probability distribution of the asymptotic phase depends on the global nonlinear dynamics. We also show how the phase of a smooth oscillator can be randomized by applying a simple discontinuous control law that generates an invisible two-fold. We propose that such a control law can be used to desynchronize a collection of oscillators, and that this manner of phase randomization is fast compared with existing methods (which use fixed points as phase singularities), because there is no slowing of the dynamics near a two-fold. PMID:27118901

  10. Lowest eigenvalues of random Hamiltonians

    SciTech Connect

    Shen, J. J.; Zhao, Y. M.; Arima, A.; Yoshinaga, N.

    2008-05-15

    In this article we study the lowest eigenvalues of random Hamiltonians for both fermion and boson systems. We show that an empirical formula of evaluating the lowest eigenvalues of random Hamiltonians in terms of energy centroids and widths of eigenvalues is applicable to many different systems. We improve the accuracy of the formula by considering the third central moment. We show that these formulas are applicable not only to the evaluation of the lowest energy but also to the evaluation of excited energies of systems under random two-body interactions.
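
    A minimal numerical sketch of this kind of empirical estimate, under assumptions that differ from the paper (generic random real symmetric matrices instead of two-body random ensembles, and a scale factor c fitted to the data rather than derived): approximate the lowest eigenvalue as the energy centroid minus c times the width of the eigenvalue distribution.

```python
# Sketch: estimate the lowest eigenvalue of random symmetric matrices from
# the energy centroid (mean eigenvalue) and width (eigenvalue spread) as
# E_min ~ centroid - c * width, with c fitted empirically. Illustration
# only; the paper's ensembles and refinements (third central moment) differ.
import numpy as np

rng = np.random.default_rng(0)
dim, trials = 50, 200
lowest, centroids, widths = [], [], []
for _ in range(trials):
    a = rng.normal(size=(dim, dim))
    h = (a + a.T) / 2.0                  # random symmetric "Hamiltonian"
    e = np.linalg.eigvalsh(h)            # ascending eigenvalues
    lowest.append(e[0])
    centroids.append(e.mean())           # = trace(h) / dim
    widths.append(e.std())
lowest, centroids, widths = map(np.array, (lowest, centroids, widths))

c = np.mean((centroids - lowest) / widths)       # fitted scale factor
est = centroids - c * widths
rms = np.sqrt(np.mean((est - lowest) ** 2))
print(f"fitted c = {c:.3f}, rms error of the estimate = {rms:.3f}")
```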

  11. Random graphs with hidden color.

    PubMed

    Söderberg, Bo

    2003-07-01

    We propose and investigate a unifying class of sparse random graph models, based on a hidden coloring of edge-vertex incidences, extending an existing approach, random graphs with a given degree distribution, in a way that admits a nontrivial correlation structure in the resulting graphs. The approach unifies a number of existing random graph ensembles within a common general formalism, and allows for the analytic calculation of observable graph characteristics. In particular, generating function techniques are used to derive the size distribution of connected components (clusters) as well as the location of the percolation threshold where a giant component appears. PMID:12935185

  12. Random sequential adsorption on fractals

    NASA Astrophysics Data System (ADS)

    Ciesla, Michal; Barbasz, Jakub

    2012-07-01

    Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski's triangle and carpet-like fractals (1 < d < 2), and on a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on measuring fundamental properties of the coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as the RSA kinetics. The obtained results allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, the simulations show that, in general, most of the known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions.
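
    For readers unfamiliar with the RSA algorithm itself, here is a minimal sketch on a flat unit square (d = 2) rather than on a fractal; the radius, attempt budget, and open (non-periodic) boundaries are illustrative assumptions, so the printed coverage sits below the jamming limit of roughly 0.547 for disks on a plane, which RSA approaches only slowly.

```python
# Minimal random sequential adsorption (RSA): disks arrive at random
# positions and are accepted only if they do not overlap any disk already
# placed; accepted disks never move (irreversible adsorption).
import math
import random

random.seed(2)
r = 0.03                 # disk radius (illustrative)
attempts = 50_000        # trial insertions; jamming is approached slowly
placed = []

def overlaps(x, y):
    # brute-force O(N) overlap test; fine for a small illustration
    return any((x - px) ** 2 + (y - py) ** 2 < (2 * r) ** 2
               for px, py in placed)

for _ in range(attempts):
    x, y = random.random(), random.random()
    if not overlaps(x, y):
        placed.append((x, y))

coverage = len(placed) * math.pi * r * r
print(f"{len(placed)} disks placed, coverage ratio ~ {coverage:.3f}")
```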

  13. Random sequential adsorption on fractals.

    PubMed

    Ciesla, Michal; Barbasz, Jakub

    2012-07-28

    Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski's triangle and carpet-like fractals (1 < d < 2), and on a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on measuring fundamental properties of the coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as the RSA kinetics. The obtained results allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, the simulations show that, in general, most of the known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions. PMID:22852643

  14. Covariate-based constrained randomization of group-randomized trials.

    PubMed

    Moulton, Lawrence H

    2004-01-01

    Group-randomized study designs are useful when individually randomized designs are either not possible, or will not be able to estimate the parameters of interest. Blocked and/or stratified (for example, pair-matched) designs have been used, and their properties statistically evaluated by many researchers. Group-randomized trials often have small numbers of experimental units, and strong, geographically induced between-unit correlation, which increase the chance of obtaining a "bad" randomization outcome. This article describes a procedure--random selection from a list of acceptable allocations--to allocate treatment conditions in a way that ensures balance on relevant covariates. Numerous individual- and group-level covariates can be balanced using exact or caliper criteria. Simulation results indicate that this method has good frequency properties, but some care may be needed not to overly constrain the randomization. There is a trade-off between achieving good balance through a highly constrained design, and jeopardizing the appearance of impartiality of the investigator and potentially departing from the nominal Type I error. PMID:16279255
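
    A minimal sketch of the procedure as described (random selection from a list of acceptable allocations); the eight hypothetical groups, their covariate values, and the 0.5 caliper below are made-up illustration values. Note each split appears twice (arms mirrored), which does not affect the selection probabilities.

```python
# Covariate-constrained randomization: enumerate all splits of the groups
# into two equal arms, keep those whose covariate imbalance is within a
# caliper, then randomize by picking one acceptable allocation at random.
import itertools
import random
import statistics

random.seed(3)
groups = {"A": 12.1, "B": 9.8, "C": 11.5, "D": 8.9,    # hypothetical
          "E": 10.4, "F": 12.7, "G": 9.1, "H": 10.9}   # group covariate
names = sorted(groups)
caliper = 0.5                    # max allowed difference in arm means

acceptable, total = [], 0
for treat in itertools.combinations(names, len(names) // 2):
    total += 1
    control = [g for g in names if g not in treat]
    diff = abs(statistics.mean(groups[g] for g in treat)
               - statistics.mean(groups[g] for g in control))
    if diff <= caliper:
        acceptable.append((treat, tuple(control)))

print(f"{len(acceptable)} acceptable allocations out of {total}")
treat, control = random.choice(acceptable)   # the actual randomization step
print("treatment arm:", treat, " control arm:", control)
```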

  15. Spin models and boson sampling

    NASA Astrophysics Data System (ADS)

    Garcia Ripoll, Juan Jose; Peropadre, Borja; Aspuru-Guzik, Alan

    Aaronson & Arkhipov showed that predicting the measurement statistics of random linear optics circuits (i.e. boson sampling) is a classically hard problem for highly non-classical input states. A typical boson-sampling circuit requires N single photon emitters and M photodetectors, and it is a natural idea to rely on few-level systems for both tasks. Indeed, we show that 2M two-level emitters at the input and output ports of a general M-port interferometer interact via an XY-model with collective dissipation and a large number of dark states that could be used for quantum information storage. More important is the fact that, when we neglect dissipation, the resulting long-range XY spin-spin interaction is equivalent to boson sampling under the same conditions that make boson sampling efficient. This allows efficient implementations of boson sampling using quantum simulators & quantum computers. We acknowledge support from Spanish Mineco Project FIS2012-33022, CAM Research Network QUITEMAD+ and EU FP7 FET-Open Project PROMISCE.

  16. P-Type Factor Analyses of Individuals' Thought Sampling Data.

    ERIC Educational Resources Information Center

    Hurlburt, Russell T.; Melancon, Susan M.

    Recently, interest in research measuring stream of consciousness or thought has increased. A study was conducted, based on a previous study by Hurlburt, Lech, and Saltman, in which subjects were randomly interrupted to rate their thoughts and moods on a Likert-type scale. Thought samples were collected from 27 subjects who carried random-tone…

  17. Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2007-01-01

    Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…
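
    A few lines of code suffice to build such a simulation. The sketch below (with an exponential parent population and arbitrarily chosen sample sizes, both illustrative assumptions) shows the sampling distribution of the mean tightening and losing its skew as n grows.

```python
# Simulating the sampling distribution of the mean: draw repeated samples
# of size n from a strongly skewed parent population and summarize the
# distribution of the 5000 resulting sample means.
import numpy as np

rng = np.random.default_rng(7)
parent = rng.exponential(scale=1.0, size=1_000_000)   # skewed parent

for n in (2, 10, 50, 200):
    means = rng.choice(parent, size=(5000, n)).mean(axis=1)
    # skewness of the sampling distribution shrinks roughly like 1/sqrt(n)
    skew = np.mean(((means - means.mean()) / means.std()) ** 3)
    print(f"n={n:4d}  sd of sample mean={means.std():.3f}  "
          f"skewness={skew:+.3f}")
```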

  18. Sampling for Telephone Surveys: Do the Results Depend on Technique?

    ERIC Educational Resources Information Center

    Franz, Jennifer D.

    Two basic methods exist for drawing probability samples to be used in telephone surveys: directory sampling (from alphabetical or street directories) and random digit dialing (RDD). RDD includes unlisted numbers, whereas directory sampling includes only listed numbers. The goal of this paper is to estimate the effect of failure to include…

  19. Coring Sample Acquisition Tool

    NASA Technical Reports Server (NTRS)

    Haddad, Nicolas E.; Murray, Saben D.; Walkemeyer, Phillip E.; Badescu, Mircea; Sherrit, Stewart; Bao, Xiaoqi; Kriechbaum, Kristopher L.; Richardson, Megan; Klein, Kerry J.

    2012-01-01

    A sample acquisition tool (SAT) has been developed that can be used autonomously to drill and capture rock core samples. The tool is designed to accommodate core transfer using a sample tube to the IMSAH (integrated Mars sample acquisition and handling) SHEC (sample handling, encapsulation, and containerization) without ever touching the pristine core sample in the transfer process.

  20. A Very Simple Safe-Bayesian Random Forest.

    PubMed

    Quadrianto, Novi; Ghahramani, Zoubin

    2015-06-01

    Random forests work by averaging several predictions of de-correlated trees. We show a conceptually radical approach to generating a random forest: random sampling of many trees from a prior distribution, followed by a weighted ensemble of predictive probabilities. Our approach uses priors that allow sampling of decision trees even before looking at the data, and a power likelihood that explores the space spanned by combinations of decision trees. While each tree performs Bayesian inference to compute its predictions, our aggregation procedure uses the power likelihood rather than the likelihood and is therefore, strictly speaking, not Bayesian. Nonetheless, we refer to it as a Bayesian random forest, but with a built-in safety. The safety comes from its good predictive performance even if the underlying probabilistic model is wrong. We demonstrate empirically that our Safe-Bayesian random forest outperforms MCMC- or SMC-based Bayesian decision trees in terms of speed and accuracy, and achieves competitive performance with entropy- or Gini-optimised random forests, yet is very simple to construct. PMID:26357350
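
    The sketch below is a heavily stripped-down illustration of the idea, not the authors' construction: decision stumps (depth-1 trees) are sampled from a prior before seeing the data, each leaf gets a Laplace-smoothed class probability, and the stumps' predictions are combined with power-likelihood weights (likelihood ** eta, eta < 1). The toy data, the prior over thresholds, and eta = 0.5 are all assumptions for illustration.

```python
# Toy "sample trees from the prior, weight by power likelihood" ensemble.
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D two-class data: class 1 tends to have larger x.
x = rng.uniform(-3, 3, size=300)
yv = (x + rng.normal(0, 1.0, size=300) > 0).astype(int)

def stump_prob(threshold, xs, ys):
    """Laplace-smoothed P(y=1) in the two leaves split at x <= threshold."""
    left, right = ys[xs <= threshold], ys[xs > threshold]
    p_left = (left.sum() + 1) / (len(left) + 2)
    p_right = (right.sum() + 1) / (len(right) + 2)
    return p_left, p_right

thresholds = rng.uniform(-3, 3, size=200)   # stumps sampled from the prior
eta = 0.5                                   # power-likelihood exponent
log_w = np.empty(len(thresholds))
leaf_probs = []
for i, t in enumerate(thresholds):
    pl, pr = stump_prob(t, x, yv)
    p1 = np.where(x <= t, pl, pr)           # P(y=1) at each training point
    loglik = np.sum(yv * np.log(p1) + (1 - yv) * np.log(1 - p1))
    log_w[i] = eta * loglik                 # power likelihood, in log space
    leaf_probs.append((pl, pr))
w = np.exp(log_w - log_w.max())
w /= w.sum()                                # normalized ensemble weights

def predict(x_new):
    # weighted average of the stumps' predictive probabilities
    return sum(wi * (pl if x_new <= t else pr)
               for wi, t, (pl, pr) in zip(w, thresholds, leaf_probs))

print("P(y=1 | x=-2) ~", round(predict(-2.0), 3))
print("P(y=1 | x=+2) ~", round(predict(+2.0), 3))
```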

  1. Quantifying randomness in real networks

    PubMed Central

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-01-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs. PMID:26482121

  2. Scattering from a random surface

    SciTech Connect

    Abarbanel, H.D.I.

    1980-11-01

    We give a formulation of the problem of propagation of scalar waves over a random surface. By a judicious choice of variables we are able to show that this situation is equivalent to propagation of these waves through a medium of random fluctuations with fluctuating source and receiver. The wave equation in the new coordinates has an additional term, the fluctuation operator, which depends on derivatives of the surface in space and time. An expansion in the fluctuation operator is given which guarantees the desired boundary conditions at every order. We treat both the cases where the surface is time dependent, such as the sea surface, or fixed in time. Also discussed is the situation where the source and receiver lie between the random surface and another, possibly also random, surface. In detail we consider acoustic waves for which the surfaces are pressure release. The method is directly applicable to electromagnetic waves and other boundary conditions.

  3. A Randomized Central Limit Theorem

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-05-01

    The Central Limit Theorem (CLT), one of the most elemental pillars of Probability Theory and Statistical Physics, asserts that: the universal probability law of large aggregates of independent and identically distributed random summands with zero mean and finite variance, scaled by the square root of the aggregate-size (√{n}), is Gaussian. The scaling scheme of the CLT is deterministic and uniform - scaling all aggregate-summands by the common and deterministic factor √{n}. This Letter considers scaling schemes which are stochastic and non-uniform, and presents a "Randomized Central Limit Theorem" (RCLT): we establish a class of random scaling schemes which yields universal probability laws of large aggregates of independent and identically distributed random summands. The RCLT universal probability laws, in turn, are the one-sided and the symmetric Lévy laws.

  4. Quantifying randomness in real networks

    NASA Astrophysics Data System (ADS)

    Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-10-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.

  5. Quantifying randomness in real networks.

    PubMed

    Orsini, Chiara; Dankulov, Marija M; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri

    2015-01-01

    Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs. PMID:26482121

  6. Quantum-noise randomized ciphers

    NASA Astrophysics Data System (ADS)

    Nair, Ranjith; Yuen, Horace P.; Corndorf, Eric; Eguchi, Takami; Kumar, Prem

    2006-11-01

    We review the notion of a classical random cipher and its advantages. We sharpen the usual description of random ciphers to a particular mathematical characterization suggested by the salient feature responsible for their increased security. We describe a concrete system known as αη and show that it is equivalent to a random cipher in which the required randomization is effected by coherent-state quantum noise. We describe the currently known security features of αη and similar systems, including lower bounds on the unicity distances against ciphertext-only and known-plaintext attacks. We show how αη used in conjunction with any standard stream cipher such as the Advanced Encryption Standard provides an additional, qualitatively different layer of security from physical encryption against known-plaintext attacks on the key. We refute some claims in the literature that αη is equivalent to a nonrandom stream cipher.

  7. Quantum-noise randomized ciphers

    SciTech Connect

    Nair, Ranjith; Yuen, Horace P.; Kumar, Prem; Corndorf, Eric; Eguchi, Takami

    2006-11-15

    We review the notion of a classical random cipher and its advantages. We sharpen the usual description of random ciphers to a particular mathematical characterization suggested by the salient feature responsible for their increased security. We describe a concrete system known as αη and show that it is equivalent to a random cipher in which the required randomization is effected by coherent-state quantum noise. We describe the currently known security features of αη and similar systems, including lower bounds on the unicity distances against ciphertext-only and known-plaintext attacks. We show how αη used in conjunction with any standard stream cipher such as the Advanced Encryption Standard provides an additional, qualitatively different layer of security from physical encryption against known-plaintext attacks on the key. We refute some claims in the literature that αη is equivalent to a nonrandom stream cipher.

  8. Control theory for random systems

    NASA Technical Reports Server (NTRS)

    Bryson, A. E., Jr.

    1972-01-01

    A survey is presented of the current knowledge available for designing and predicting the effectiveness of controllers for dynamic systems which can be modeled by ordinary differential equations. A short discussion of feedback control is followed by a description of deterministic controller design and the concept of system state. The need for more realistic disturbance models led to the use of stochastic process concepts, in particular the Gauss-Markov process. A compensator controlled system, with random forcing functions, random errors in the measurements, and random initial conditions, is treated as constituting a Gauss-Markov random process; hence the mean-square behavior of the controlled system is readily predicted. As an example, a compensator is designed for a helicopter to maintain it in hover in a gusty wind over a point on the ground.

  9. Diffraction by random Ronchi gratings.

    PubMed

    Torcal-Milla, Francisco Jose; Sanchez-Brea, Luis Miguel

    2016-08-01

    In this work, we obtain analytical expressions for the near-and far-field diffraction of random Ronchi diffraction gratings where the slits of the grating are randomly displaced around their periodical positions. We theoretically show that the effect of randomness in the position of the slits of the grating produces a decrease of the contrast and even disappearance of the self-images for high randomness level at the near field. On the other hand, it cancels high-order harmonics in far field, resulting in only a few central diffraction orders. Numerical simulations by means of the Rayleigh-Sommerfeld diffraction formula are performed in order to corroborate the analytical results. These results are of interest for industrial and technological applications where manufacture errors need to be considered. PMID:27505363
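
    The far-field claim is easy to check numerically. The sketch below (an idealized 1-D binary grating with Gaussian slit jitter; all sizes are illustrative assumptions) builds the randomly displaced grating and reads the strengths of the odd diffraction orders off an FFT: increasing jitter suppresses the high orders much faster than the first order.

```python
# Numerical check: Fourier spectrum of a Ronchi-type grating whose slit
# centers are randomly displaced around their periodic positions.
import numpy as np

rng = np.random.default_rng(5)
period, slits = 100, 64          # samples per period, number of slits
npts = slits * period
xgrid = np.arange(npts)

for jitter in (0.0, 5.0, 15.0):  # rms slit displacement, in samples
    g = np.zeros(npts)
    for k in range(slits):
        # slit center displaced randomly around its periodic position
        c = k * period + period / 2 + rng.normal(0.0, jitter)
        g[np.abs(xgrid - c) < period / 4] = 1.0   # 50% duty-cycle slit
    spec = np.abs(np.fft.rfft(g))
    orders = [spec[m * slits] for m in (1, 3, 5)]  # odd diffraction orders
    print(f"jitter={jitter:5.1f}  |orders 1,3,5| = "
          + "  ".join(f"{o:8.1f}" for o in orders))
```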

  10. Experimental evidence of quantum randomness incomputability

    SciTech Connect

    Calude, Cristian S.; Dinneen, Michael J.; Dumitrescu, Monica; Svozil, Karl

    2010-08-15

    In contrast with software-generated randomness (called pseudo-randomness), quantum randomness can be proven incomputable; that is, it is not exactly reproducible by any algorithm. We provide experimental evidence of incomputability--an asymptotic property--of quantum randomness by performing finite tests of randomness inspired by algorithmic information theory.

  11. Representative Sampling of Maritally Violent and Nonviolent Couples: A Feasibility Study

    ERIC Educational Resources Information Center

    Farris, Coreen; Holtzworth-Munroe, Amy

    2007-01-01

    Despite the methodological advantages of representative sampling, few researchers in the field of marital violence have employed random samples for laboratory assessments of couples. The current study tests the feasibility and sampling success of three recruitment methods: (a) random digit dialing, (b) directory-assisted recruitment, and (c) a…

  12. On Pfaffian Random Point Fields

    NASA Astrophysics Data System (ADS)

    Kargin, V.

    2014-02-01

    We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.

  13. Digital random-number generator

    NASA Technical Reports Server (NTRS)

    Brocker, D. H.

    1973-01-01

    For a binary digit array of N bits, N noise sources feed N nonlinear operators; each flip-flop in the digit array is set by its nonlinear operator to reflect whether the amplitude of the noise generator feeding it is above or below the mean value of the generated noise. The fixed-point uniform-distribution random-number generation method can also be used to generate random numbers with distributions other than uniform.
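
    A software analogue of this hardware scheme (with Gaussian noise standing in for the physical noise sources) can be sketched in a few lines:

```python
# One simulated noise source per bit; each "flip-flop" is set according to
# whether its noise sample falls above or below the noise mean.
import random

random.seed(11)
N = 16                                   # bits in the digit array

def random_word(n_bits, noise_mean=0.0, noise_sd=1.0):
    bits = [1 if random.gauss(noise_mean, noise_sd) > noise_mean else 0
            for _ in range(n_bits)]
    return sum(b << i for i, b in enumerate(bits))

print([random_word(N) for _ in range(5)])
```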

  14. Quasi-Random Sequence Generators.

    1994-03-01

    Version 00 of LPTAU generates quasi-random sequences. The sequences are uniformly distributed sets of L = 2^30 points in the N-dimensional unit cube I^N = [0,1]^N. The sequences are used as nodes for multidimensional integration, as search points in global optimization, as trial points in multicriteria decision making, and as quasi-random points for quasi-Monte Carlo algorithms.
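
    LPTAU's LP-tau sequences are Sobol-type quasi-random sequences; as a stand-in for the Fortran package, the sketch below uses SciPy's Sobol generator (scipy.stats.qmc, SciPy >= 1.7) to show the typical payoff for multidimensional integration, on a test integrand chosen purely for illustration.

```python
# Quasi-random vs. pseudo-random integration of f(x) = prod(4 x_i (1 - x_i))
# over the 5-D unit cube; the exact integral is (2/3)**5.
import numpy as np
from scipy.stats import qmc

dim, n = 5, 4096
exact = (2.0 / 3.0) ** dim

def f(pts):
    return np.prod(4.0 * pts * (1.0 - pts), axis=1)

sobol_pts = qmc.Sobol(d=dim, scramble=False).random(n)
mc_pts = np.random.default_rng(0).random((n, dim))
print("Sobol error:", abs(f(sobol_pts).mean() - exact))
print("MC error   :", abs(f(mc_pts).mean() - exact))
```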

  15. Randomness and degrees of irregularity.

    PubMed Central

    Pincus, S; Singer, B H

    1996-01-01

    The fundamental question "Are sequential data random?" arises in myriad contexts, often with severe data length constraints. Furthermore, there is frequently a critical need to delineate nonrandom sequences in terms of closeness to randomness--e.g., to evaluate the efficacy of therapy in medicine. We address both these issues from a computable framework via a quantification of regularity: ApEn (approximate entropy) defines maximal randomness for sequences of arbitrary length and applies to sequences as short as N = 5 points. An infinite sequence formulation of randomness is introduced that retains the operational (and computable) features of the finite case. In the infinite sequence setting, we indicate how the "foundational" definition of independence in probability theory, and the definition of normality in number theory, reduce to limit theorems without rates of convergence, from which we utilize ApEn to address rates of convergence (of a deficit from maximal randomness), refining the aforementioned concepts in a computationally essential manner. Representative applications among many are indicated to assess (i) random number generation output; (ii) well-shuffled arrangements; and (iii) (the quality of) bootstrap replicates. PMID:11607637
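
    ApEn has a standard, directly computable definition: ApEn(m, r) = φ(m) − φ(m+1), where φ(m) is the average log-fraction of length-m templates lying within Chebyshev distance r of each template. The sketch below implements it naively (O(N²)); the conventional choices m = 2 and r = 0.2 × std are common defaults assumed here, not prescriptions from this abstract.

```python
# Approximate entropy: higher values indicate greater irregularity.
import numpy as np

def apen(u, m=2, r=None):
    u = np.asarray(u, dtype=float)
    if r is None:
        r = 0.2 * u.std()                 # conventional tolerance
    def phi(m):
        n = len(u) - m + 1
        x = np.array([u[i:i + m] for i in range(n)])   # template vectors
        # C_i: fraction of templates within Chebyshev distance r of i
        # (self-matches included, so the log is always defined)
        c = [(np.abs(x - xi).max(axis=1) <= r).mean() for xi in x]
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
sine = np.sin(np.linspace(0, 20 * np.pi, 300))
print("ApEn, sine wave   :", round(apen(sine), 3))            # low: regular
print("ApEn, white noise :", round(apen(rng.normal(size=300)), 3))  # high
```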

  16. A Mars Sample Return Sample Handling System

    NASA Technical Reports Server (NTRS)

    Wilson, David; Stroker, Carol

    2013-01-01

    We present a sample handling system, a subsystem of the proposed Dragon-landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of rock cores, subsurface drilled rock and ice cuttings, pebble-sized rocks, and soil scoops. The sample collection, storage, retrieval, and packaging assumptions and concepts in this study are applicable to NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on past life, climate change, water history, age dating, Mars interior evolution [3], human safety, and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith, and atmosphere samples. Samples could include drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Samples will thus have a broad range of bulk densities and will require, for Earth-based analysis where practical, in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups, each holding a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble-sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping samples frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference fitted heat activated memory

  17. Is the Non-Dipole Magnetic Field Random?

    NASA Technical Reports Server (NTRS)

    Walker, Andrew D.; Backus, George E.

    1996-01-01

    Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.

  18. R. A. Fisher and his advocacy of randomization.

    PubMed

    Hall, Nancy S

    2007-01-01

    The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods. PMID:18175604

  19. Probing cell activity in random access modality

    NASA Astrophysics Data System (ADS)

    Sacconi, L.; Crocini, C.; Lotti, J.; Coppini, R.; Ferrantini, C.; Tesi, C.; Yan, P.; Loew, L. M.; Cerbai, E.; Poggesi, C.; Pavone, F. S.

    2013-06-01

    We combined the advantages of an ultrafast random access microscope with novel labelling technologies to study intra- and inter-cellular action potential propagation in neurons and cardiac myocytes with sub-millisecond time resolution. The random access microscope was used in combination with a new fluorinated voltage-sensitive dye with improved photostability to record membrane potential from multiple Purkinje cells with near-simultaneous sampling. The RAMP system rapidly scanned between lines drawn in the membranes of neurons to perform multiplexed measurements of the TPF signal. This recording was achieved by rapidly positioning the laser excitation with the AOD to sample a patch of membrane from each cell in <100 μs; for recording from five cells, multiplexing permits a temporal resolution of 400 μs, sufficient to capture every spike. The system is capable of recording spontaneous activity over 800 ms from five neighbouring cells simultaneously, showing that spiking is not temporally correlated. The system was also used to investigate the electrical properties of the tubular system (TATS) in isolated rat ventricular myocytes.

  20. Phase Transitions on Random Lattices: How Random is Topological Disorder?

    NASA Astrophysics Data System (ADS)

    Barghathi, Hatem; Vojta, Thomas

    2015-03-01

    We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω = (d - 1) / (2 d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d + 1) ν > 2 rather than the usual Harris criterion dν > 2 , making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d > 1 . These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices. This work was supported by the NSF under Grant Nos. DMR-1205803 and PHYS-1066293. We acknowledge the hospitality of the Aspen Center for Physics.

  1. Efficiency analysis of sampling protocols used in protein crystallization screening

    NASA Astrophysics Data System (ADS)

    Segelke, Brent W.

    2001-11-01

    In an effort to objectively compare the efficiency of protein crystallization screening techniques, a probability model of sampling efficiency is developed and used to calculate sampling efficiencies from experimental data. Three typical sampling protocols (grid screening, footprint screening, and random screening) are used to crystallize each of five proteins (Phospholipase A2, Thaumatin, Catalase, Lysozyme, and Ribonuclease B). For each of the three sampling protocols, experiments are chosen from a large set of possible experiments generated by systematic combination of a number of parameters common in crystallization screens. Software has been developed to generate and select from the combinations with each of the three sampling protocols examined in this study. The protocols differ only in the order in which samples are chosen from the set of possible combinations. Random sampling is motivated by the "Incomplete Factorial" screen (Carter and Carter, J. Biol. Chem. 254 (1979) 12219); sampling with subsets of four is motivated by the "Footprint" screen (Stura et al., J. Crystal Growth 122 (1992) 273); and sampling with subsets of twenty-four is motivated by the "Grid" screen (McPherson, Preparation and Analysis of Protein Crystals, Wiley, New York, 1982). For the five proteins examined, random sampling has the greatest average efficiency. Additional benefits of random sampling are discussed.

  2. The single-channel regime of transport through random media

    PubMed Central

    Peña, A.; Girschik, A.; Libisch, F.; Rotter, S.; Chabanov, A. A.

    2014-01-01

    The propagation of light through samples with random inhomogeneities can be described by way of transmission eigenchannels, which connect incoming and outgoing external propagating modes. Although the detailed structure of a disordered sample can generally not be fully specified, these transmission eigenchannels can nonetheless be successfully controlled and used for focusing and imaging light through random media. Here we demonstrate that in deeply localized quasi-1D systems, the single dominant transmission eigenchannel is formed by an individual Anderson-localized mode or by a ‘necklace state’. In this single-channel regime, the disordered sample can be treated as an effective 1D system with a renormalized localization length, coupled through all the external modes to its surroundings. Using statistical criteria of the single-channel regime and pulsed excitations of the disordered samples allows us to identify long-lived localized modes and short-lived necklace states at long and short time delays, respectively. PMID:24663028

  3. Full randomness from arbitrarily deterministic events.

    PubMed

    Gallego, Rodrigo; Masanes, Lluis; De La Torre, Gonzalo; Dhara, Chirag; Aolita, Leandro; Acín, Antonio

    2013-01-01

    Do completely unpredictable events exist? Classical physics excludes fundamental randomness. Although quantum theory makes probabilistic predictions, this does not imply that nature is random, as randomness should be certified without relying on the complete structure of the theory being used. Bell tests approach the question from this perspective. However, they require prior perfect randomness, falling into circular reasoning. A Bell test that generates perfect random bits from bits possessing high, but less than perfect, randomness has recently been obtained. Yet the main question remained open: does any initial randomness suffice to certify perfect randomness? Here we show that this is indeed the case. We provide a Bell test that uses arbitrarily imperfect random bits to produce bits that are, under the non-signalling principle assumption, perfectly random. This provides the first protocol attaining full randomness amplification. Our results have strong implications for the debate over whether events that are fully random exist. PMID:24173040

  4. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrent relations is used to obtain the convergent probability of the random walk with different initial positions.

  5. On co-design of filter and fault estimator against randomly occurring nonlinearities and randomly occurring deception attacks

    NASA Astrophysics Data System (ADS)

    Hu, Jun; Liu, Steven; Ji, Donghai; Li, Shanqiang

    2016-07-01

    In this paper, the co-design problem of filter and fault estimator is studied for a class of time-varying non-linear stochastic systems subject to randomly occurring nonlinearities and randomly occurring deception attacks. Two mutually independent random variables obeying the Bernoulli distribution are employed to characterize the phenomena of the randomly occurring nonlinearities and randomly occurring deception attacks, respectively. By using the augmentation approach, the co-design problem of the robust filter and fault estimator is converted into the recursive filter design problem. A new compensation scheme is proposed such that, for both randomly occurring nonlinearities and randomly occurring deception attacks, an upper bound of the filtering error covariance is obtained and such an upper bound is minimized by properly designing the filter gain at each sampling instant. Moreover, the explicit form of the filter gain is given based on the solution to two Riccati-like difference equations. It is shown that the proposed co-design algorithm is of a recursive form that is suitable for online computation. Finally, a simulation example is given to illustrate the usefulness of the developed filtering approach.

  6. Ewens sampling formulae with and without selection

    NASA Astrophysics Data System (ADS)

    Huillet, Thierry

    2007-09-01

    We shall first consider the random Dirichlet partitioning of the interval into n fragments at temperature θ > 0. Using calculus for Dirichlet integrals, pre-asymptotic versions of the Ewens sampling formulae from finite Dirichlet partitions follow up. From these preliminaries, straightforward proofs of the usual sampling formulae from random proportions with Poisson-Dirichlet PD(γ) distribution can be obtained, while considering the Kingman limit n ↗ ∞, θ ↘ 0, with nθ = γ > 0. In this manuscript, the Gibbs version of the Dirichlet partition with symmetric selection is considered. By use of similar series expansion calculus for Dirichlet integrals, closed-form expressions of Ewens sampling formulae in the presence of selection are obtained; special types of Bell polynomials are shown to be involved.
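
    In the neutral (no-selection) case, partitions governed by the Ewens sampling formula can be simulated directly with the Chinese restaurant process; the sketch below checks the simulation against the exact expected number of distinct types, Σ_{i=0}^{n−1} θ/(θ+i). (The selection case treated in the paper is not covered here.)

```python
# Chinese restaurant process sampler for Ewens partitions.
import random

random.seed(4)

def crp_partition(n, theta):
    """Table sizes after seating n customers in a CRP with parameter theta."""
    tables = []
    for i in range(n):
        # new table with prob theta/(theta+i); else join a table ~ its size
        if random.random() < theta / (theta + i):
            tables.append(1)
        else:
            r, acc = random.uniform(0, i), 0
            for t in range(len(tables)):
                acc += tables[t]
                if r < acc:
                    tables[t] += 1
                    break
    return tables

n, theta, reps = 100, 2.0, 2000
avg_blocks = sum(len(crp_partition(n, theta)) for _ in range(reps)) / reps
expected = sum(theta / (theta + i) for i in range(n))
print(f"mean number of types: simulated {avg_blocks:.2f}, exact {expected:.2f}")
```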

  7. Sampling Motif-Constrained Ensembles of Networks

    NASA Astrophysics Data System (ADS)

    Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.

    2015-10-01

    The statistical significance of network properties is conditioned on null models which satisfy specified properties but that are otherwise random. Exponential random graph models are a principled theoretical framework to generate such constrained ensembles, but they often fail in practice, either due to model inconsistency or due to the impossibility of sampling networks from them. These problems affect the important case of networks with prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.

  8. Sampling mechanisms for asteroid sample return missions

    NASA Astrophysics Data System (ADS)

    Sears, D.; Franzen, M. A.; Preble, J.; Long, T.

    2003-04-01

    There is a unique challenge in developing sample collectors for low-gravity bodies such as asteroids. Traditional devices rely mostly on gravity for sample collection, which is inappropriate in the case of asteroids. NEAR Shoemaker has shown that we can design spacecraft that maneuver very close to asteroids and provide a wealth of valuable data. However, a sample collector that can return samples to the Earth has yet to be fully developed. During the Near-Earth Sample Return Workshop held in Los Angeles in July 2002, the scientific requirements and engineering constraints of sample return collectors were discussed. It was proposed that a touch-and-go sampler is to be preferred for the first missions. The collector should be as simple as possible, with a minimum of moving parts, to reduce cost, prevent damage to the sampler during the collection process, and minimize surface disturbance on the asteroid. However, the collection procedure must meet certain conditions to permit a complete assessment of the samples. The collection process should not change the composition (molecular, elemental, or isotopic), physical properties, mineral and phase proportions, or grain size distribution. Our answer to these challenges is an adhesive tray collector. The adhesive tray touch-and-go sampler would consist of a thirty-centimeter-diameter tray mounted on a boom. The boom would allow the spacecraft to collect samples with the adhesive tray during a one- to two-second encounter with the asteroid's surface, with a minimum of disturbance. The adhesive tray would be able to sample surface regolith, including one- to two-centimeter clasts, at a diverse set of scientifically valuable sites. Once the sample has been collected, the boom will retract and place the adhesive sample tray into a sample return canister. Progress in the development of this collector and preliminary results of testing under microgravity and space conditions will be

  9. Texture synthesis and transfer from multiple samples

    NASA Astrophysics Data System (ADS)

    Qi, Yue; Zhao, Qinping

    2003-09-01

    Texture Mapping plays a very important role in Computer Graphics. Texture Synthesis is one of the main methods of obtaining textures; it makes use of sample textures to generate new textures. Texture Transfer is based on Texture Synthesis; it renders objects with textures taken from different objects. Currently, most Texture Synthesis and Transfer methods use a single sample texture. A method for Texture Synthesis and Transfer from multiple samples is presented. For texture synthesis, the L-shaped neighborhood searching approach is used. Users specify the proportion of each sample and the number of seed points, and these seed points are scattered randomly according to their samples, synchronously in the horizontal and vertical directions, to synthesize textures. The synthesized textures are of good quality. For texture transfer, the luminance of the target image and of the sample textures is analyzed. This procedure works from coarse to fine and can produce visually pleasing results.

  10. Optical and microwave propagation in random dielectric and metallic media

    NASA Astrophysics Data System (ADS)

    Genack, A. Z.; Ferrari, L. A.; Zhu, J.; Garcia, N.; Drake, J. M.

    1988-10-01

    Measurements of steady state and picosecond optical transmission through a sample composed of TiO2 microparticles embedded in polystyrene are consistent with a model of photon diffusion with a coefficient D = 4.8×10^5 cm^2/s and photon absorption time τ_abs = 294 ps at 589 nm. Measurements of transmission of K band microwave radiation in a random sample of copper spheres in paraffin indicate that the diffusion model is not adequate at certain frequencies.

  11. A Randomized Violence Prevention Trial with Comparison: Responses by Gender

    ERIC Educational Resources Information Center

    Griffin, James P., Jr.; Chen, Dungtsa; Eubanks, Adriane; Brantley, Katrina M.; Willis, Leigh A.

    2007-01-01

    Using random assignment of students to two intervention groups and a comparison school sample, the researchers evaluated a three-group school-based violence prevention program. The three groups were (1) a whole-school intervention, (2) whole-school, cognitive-behavioral and cultural enrichment training, and (3) no violence prevention. The…

  12. Random vibration test of Mars Exploration Rover spacecraft

    NASA Technical Reports Server (NTRS)

    Scharton, T.; Lee, D.

    2003-01-01

    The primary objective of the random vibration test was to identify any hardware problems, which might compromise the mission. The test objectives, configuration, and requirements are briefly described in this presentation, and a representative sample of the measured data is presented.

  13. Intraclass Correlations for Planning Group Randomized Experiments in Rural Education

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, E. C.

    2007-01-01

    Experiments that assign intact groups (usually schools) to treatment conditions are increasingly common in educational research. The design of group randomized experiments requires knowledge of the intraclass correlation structure to compute statistical power and to determine the sample sizes required to achieve adequate power. The intraclass…

  14. What Does a Random Line Look Like: An Experimental Study

    ERIC Educational Resources Information Center

    Turner, Nigel E.; Liu, Eleanor; Toneatto, Tony

    2011-01-01

    The study examined the perception of random lines by people with gambling problems compared to people without gambling problems. The sample consisted of 67 probable pathological gamblers and 46 people without gambling problems. Participants completed a number of questionnaires about their gambling and were then presented with a series of random…

  15. Wave propagation through a random medium - The random slab problem

    NASA Technical Reports Server (NTRS)

    Acquista, C.

    1978-01-01

    The first-order smoothing approximation yields integral equations for the mean and the two-point correlation function of a wave in a random medium. A method is presented for the approximate solution of these equations that combines features of the eiconal approximation and of the Born expansion. This method is applied to the problem of reflection and transmission of a plane wave by a slab of a random medium. Both the mean wave and the covariance are calculated to determine the reflected and transmitted amplitudes and intensities.

  16. Cover times of random searches

    NASA Astrophysics Data System (ADS)

    Chupeau, Marie; Bénichou, Olivier; Voituriez, Raphaël

    2015-10-01

    How long must one undertake a random search to visit all sites of a given domain? This time, known as the cover time, is a key observable to quantify the efficiency of exhaustive searches, which require a complete exploration of an area and not only the discovery of a single target. Examples range from immune-system cells chasing pathogens to animals harvesting resources, from robotic exploration for cleaning or demining to the task of improving search algorithms. Despite its broad relevance, the cover time has remained elusive and so far explicit results have been scarce and mostly limited to regular random walks. Here we determine the full distribution of the cover time for a broad range of random search processes, including Lévy strategies, intermittent strategies, persistent random walks and random walks on complex networks, and reveal its universal features. We show that for all these examples the mean cover time can be minimized, and that the corresponding optimal strategies also minimize the mean search time for a single target, unambiguously pointing towards their robustness.
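
    For the simplest case mentioned (a regular random walk), the cover time is easy to simulate; on a cycle of N sites its expectation is known exactly, N(N−1)/2 steps, which the sketch below reproduces.

```python
# Cover time of a symmetric random walk on a cycle of N sites.
import random

random.seed(6)

def cover_time(n_sites):
    pos, visited, steps = 0, {0}, 0
    while len(visited) < n_sites:
        pos = (pos + random.choice((-1, 1))) % n_sites
        visited.add(pos)
        steps += 1
    return steps

N, reps = 50, 2000
mean_ct = sum(cover_time(N) for _ in range(reps)) / reps
print(f"simulated mean cover time {mean_ct:.0f}, exact {N * (N - 1) / 2:.0f}")
```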

  17. Random root movements in weightlessness.

    PubMed

    Johnsson, A; Karlsson, C; Iversen, T H; Chapman, D K

    1996-02-01

    The dynamics of root growth was studied in weightlessness. In the absence of the gravitropic reference direction during weightlessness, root movements could be controlled by spontaneous growth processes, without any corrective growth induced by the gravitropic system. If truly random in nature, the bending behavior should follow so-called 'random walk' mathematics during weightlessness. Predictions from this hypothesis were critically tested. In a Spacelab ESA experiment, denoted RANDOM and carried out during the IML-2 Shuttle flight in July 1994, the growth of garden cress (Lepidium sativum) roots was followed by time-lapse photography at 1-h intervals. The growth pattern was recorded for about 20 h. Root growth was significantly smaller in weightlessness than under gravity (control) conditions. It was found that the roots performed spontaneous movements in weightlessness. The average direction of deviation of the plants consistently stayed equal to zero, despite these spontaneous movements. The average squared deviation increased linearly with time, as predicted theoretically (but only for 8-10 h). Autocorrelation calculations showed that bendings of the roots, as determined from the 1-h photographs, were uncorrelated after about a 2-h interval. It is concluded that random processes play an important role in root growth. Predictions from a random walk hypothesis as to the growth dynamics could explain parts of the growth patterns recorded. This test of the hypothesis required microgravity conditions, as provided for in a space experiment. PMID:11541141

  18. Misoe Sample Data Systems

    ERIC Educational Resources Information Center

    Creager, John A.; And Others

    1974-01-01

    Among the purposes of this chapter is the provision of a description of the sample and research design of the MISOE sample data systems, including a specification of the size of the sample. (Author/RK)

  19. SAMPLING TUBING EFFECTS ON GROUNDWATER SAMPLES

    EPA Science Inventory

    Volatile organic compounds pose a challenge to groundwater sampling protocols, since they can be lost as a water sample degasses or lost due to sorption on tubing or pump materials. Laboratory sorption experiments have been conducted with five common flexible tubing materials to ...

  20. Self-averaging and ergodicity of subdiffusion in quenched random media

    NASA Astrophysics Data System (ADS)

    Dentz, Marco; Russian, Anna; Gouze, Philippe

    2016-01-01

    We study the self-averaging properties and ergodicity of the mean square displacement m(t) of particles diffusing in d-dimensional quenched random environments which give rise to subdiffusive average motion. These properties are investigated in terms of the sample-to-sample fluctuations as measured by the variance of m(t). We find that m(t) is not self-averaging for d < 2 due to the inefficient disorder sampling by random motion in a single realization. For d ≥ 2, in contrast, the efficient sampling of heterogeneity by the space random walk renders m(t) self-averaging and thus ergodic. This is remarkable because the average particle motion in d > 2 obeys a CTRW, which by itself displays weak ergodicity breaking. This paradox is resolved by the observation that the CTRW, as an average model, does not reflect the disorder sampling by random motion in a single medium realization.

  1. Safety and Reactogenicity of an MSP-1 Malaria Vaccine Candidate: A Randomized Phase Ib Dose-Escalation Trial in Kenyan Children

    PubMed Central

    Withers, Mark R; McKinney, Denise; Ogutu, Bernhards R; Waitumbi, John N; Milman, Jessica B; Apollo, Odika J; Allen, Otieno G; Tucker, Kathryn; Soisson, Lorraine A; Diggs, Carter; Leach, Amanda; Wittes, Janet; Dubovsky, Filip; Stewart, V. Ann; Remich, Shon A; Cohen, Joe; Ballou, W. Ripley; Holland, Carolyn A; Lyon, Jeffrey A; Angov, Evelina; Stoute, José A; Martin, Samuel K; Heppner, D. Gray

    2006-01-01

    Objective: Our aim was to evaluate the safety, reactogenicity, and immunogenicity of an investigational malaria vaccine. Design: This was an age-stratified phase Ib, double-blind, randomized, controlled, dose-escalation trial. Children were recruited into one of three cohorts (dosage groups) and randomized in 2:1 fashion to receive either the test product or a comparator. Setting: The study was conducted in a rural population in Kombewa Division, western Kenya. Participants: Subjects were 135 children, aged 12–47 mo. Interventions: Subjects received 10, 25, or 50 μg of falciparum malaria protein 1 (FMP1) formulated in 100, 250, and 500 μL, respectively, of AS02A, or they received a comparator (Imovax® rabies vaccine). Outcome Measures: We assessed safety and reactogenicity parameters and adverse events during solicited (7 d) and unsolicited (30 d) periods after each vaccination. Serious adverse events were monitored for 6 mo after the last vaccination. Results: Both vaccines were safe and well tolerated. FMP1/AS02A recipients experienced significantly more pain and injection-site swelling, with a dose-effect relationship. Systemic reactogenicity was low at all dose levels. Hemoglobin levels remained stable and similar across arms. Baseline geometric mean titers were comparable in all groups. Anti-FMP1 antibody titers increased in a dose-dependent manner in subjects receiving FMP1/AS02A; no increase in anti-FMP1 titers occurred in subjects who received the comparator. By study end, subjects who received either 25 or 50 μg of FMP1 had similar antibody levels, which remained significantly higher than those of subjects who received the comparator or 10 μg of FMP1. A longitudinal mixed-effects model showed a statistically significant effect of dosage level on immune response (F(3,1047) = 10.78 or F(3,995) = 11.22, p < 0.001); however, the comparison of the 25 μg and 50 μg recipients indicated no significant difference (F(1,1047) = 0.05; p = 0.82). Conclusions

  2. Propagation in multiscale random media

    NASA Astrophysics Data System (ADS)

    Balk, Alexander M.

    2003-10-01

    Many studies consider media with microstructure, which has variations on some microscale, while the macroproperties are under investigation. Sometimes the medium has several microscales, all of them being much smaller than the macroscale. Sometimes the variations on the macroscale are also included, which are taken into account by some procedures, like WKB or geometric optics. What if the medium has variations on all scales from microscale to macroscale? This situation occurs in several practical problems. The talk is about such situations, in particular, passive tracer in a random velocity field, wave propagation in a random medium, Schrödinger equation with random potential. To treat such problems we have developed the statistical near-identity transformation. We find anomalous attenuation of the pulse propagating in a multiscale medium.

  3. Determining the Optimum Number of Increments in Composite Sampling

    SciTech Connect

    Hathaway, John E.; Schaalje, G Bruce; Gilbert, Richard O.; Pulsipher, Brent A.; Matzke, Brett D.

    2008-09-30

    Composite sampling can be more cost effective than simple random sampling. This paper considers how to determine the optimum number of increments to use in composite sampling. Composite sampling terminology and theory are outlined, and a method is developed which accounts for different sources of variation in compositing and data analysis. This method is used to define and understand the process of determining the optimum number of increments that should be used in forming a composite. The blending variance is shown to have a smaller range of possible values than previously reported when estimating the number of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated number of increments.
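
    As a concrete, heavily simplified illustration of the tradeoff the abstract describes, the sketch below uses a textbook variance-cost model rather than the authors' estimator (which partitions the blending variance in more detail): each composite of n increments incurs increment-collection cost plus one analysis cost, and its variance combines a between-increment term (reduced by compositing) with a blending-plus-analytical term. All variance and cost values are hypothetical.

```python
import math

# Hedged sketch: a textbook variance-cost tradeoff for choosing the
# number of increments n per composite; not the paper's exact method.
s2_inc   = 4.0    # between-increment variance (hypothetical)
s2_blend = 0.5    # blending + analytical variance per composite (hypothetical)
c_inc    = 1.0    # cost of collecting one increment (hypothetical units)
c_anal   = 20.0   # cost of analyzing one composite (hypothetical units)

# Variance of one composite measurement: V(n) = s2_inc/n + s2_blend
# Cost of one composite:                 C(n) = n*c_inc + c_anal
# For a fixed total budget, the variance of the overall mean is
# proportional to V(n)*C(n); minimizing that product over n gives:
n_opt = math.sqrt(s2_inc * c_anal / (s2_blend * c_inc))
print(f"optimal increments per composite: n* = {n_opt:.1f}")
```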

  4. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    SciTech Connect

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  5. Depth reconstruction from sparse samples: representation, algorithm, and sampling.

    PubMed

    Liu, Lee-Kang; Chan, Stanley H; Nguyen, Truong Q

    2015-06-01

    The rapid development of 3D technology and computer vision applications has motivated a thrust of methodologies for depth acquisition and estimation. However, existing hardware and software acquisition methods have limited performance due to poor depth precision, low resolution, and high computational cost. In this paper, we present a computationally efficient method to estimate dense depth maps from sparse measurements. There are three main contributions. First, we provide empirical evidence that depth maps can be encoded much more sparsely than natural images using common dictionaries, such as wavelets and contourlets. We also show that a combined wavelet-contourlet dictionary achieves better performance than using either dictionary alone. Second, we propose an alternating direction method of multipliers (ADMM) for depth map reconstruction. A multiscale warm start procedure is proposed to speed up the convergence. Third, we propose a two-stage randomized sampling scheme to optimally choose the sampling locations, thus maximizing the reconstruction performance for a given sampling budget. Experimental results show that the proposed method produces high-quality dense depth estimates, and is robust to noisy measurements. Applications to real data in stereo matching are demonstrated. PMID:25769151

  6. Relatively Random: Context Effects on Perceived Randomness and Predicted Outcomes

    ERIC Educational Resources Information Center

    Matthews, William J.

    2013-01-01

    This article concerns the effect of context on people's judgments about sequences of chance outcomes. In Experiment 1, participants judged whether sequences were produced by random, mechanical processes (such as a roulette wheel) or skilled human action (such as basketball shots). Sequences with lower alternation rates were judged more likely to…

  7. A Randomized Experiment Comparing Random and Cutoff-Based Assignment

    ERIC Educational Resources Information Center

    Shadish, William R.; Galindo, Rodolfo; Wong, Vivian C.; Steiner, Peter M.; Cook, Thomas D.

    2011-01-01

    In this article, we review past studies comparing randomized experiments to regression discontinuity designs, mostly finding similar results, but with significant exceptions. The latter might be due to potential confounds of study characteristics with assignment method or with failure to estimate the same parameter over methods. In this study, we…

  8. Molecular random tilings as glasses

    PubMed Central

    Garrahan, Juan P.; Stannard, Andrew; Blunt, Matthew O.; Beton, Peter H.

    2009-01-01

    We have recently shown that p-terphenyl-3,5,3′,5′-tetracarboxylic acid adsorbed on graphite self-assembles into a two-dimensional rhombus random tiling. This tiling is close to ideal, displaying long-range correlations punctuated by sparse localized tiling defects. In this article we explore the analogy between dynamic arrest in this type of random tilings and that of structural glasses. We show that the structural relaxation of these systems is via the propagation–reaction of tiling defects, giving rise to dynamic heterogeneity. We study the scaling properties of the dynamics and discuss connections with kinetically constrained models of glasses. PMID:19720990

  9. Mode statistics in random lasers

    SciTech Connect

    Zaitsev, Oleg

    2006-12-15

    Representing an ensemble of random lasers with an ensemble of random matrices, we compute the average number of lasing modes and its fluctuations. The regimes of weak and strong coupling of the passive resonator to the environment are considered. In the latter case, contrary to an earlier claim in the literature, we do not find a power-law dependence of the average mode number on the pump strength. For the relative fluctuations, however, a power law can be established. It is shown that, due to the mode competition, the distribution of the number of excited modes over an ensemble of lasers is not binomial.

  10. Neutron transport in random media

    SciTech Connect

    Makai, M.

    1996-08-01

    The survey reviews the methods available in the literature which allow a discussion of corium recriticality after a severe accident and a characterization of the corium. It appears that to date no one has considered the eigenvalue problem, though for the source problem several approaches have been proposed. The mathematical formulation of a random medium may be approached in different ways. Based on the review of the literature, we can draw three basic conclusions. The problem of static, random perturbations has been solved. The static case is tractable by the Monte Carlo method. There is a specific time dependent case for which the average flux is given as a series expansion.

  11. Synchronizability of random rectangular graphs

    SciTech Connect

    Estrada, Ernesto Chen, Guanrong

    2015-08-15

    Random rectangular graphs (RRGs) represent a generalization of the random geometric graphs in which the nodes are embedded into hyperrectangles instead of on hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds of the eigenratio of the network Laplacian matrix are determined analytically. It is proven that as the rectangular network becomes more elongated, the network becomes harder to synchronize. The synchronization processing behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.

  12. Random organization and plastic depinning

    SciTech Connect

    Reichhardt, Charles; Reichhardt, Cynthia

    2008-01-01

    We provide evidence that the general phenomenon of plastic depinning can be described as an absorbing phase transition, and shows the same features as the random organization which was recently studied in periodically driven particle systems [L. Corte, Nature Phys. 4, 420 (2008)]. In the plastic flow system, the pinned regime corresponds to the absorbing state and the moving state corresponds to the fluctuating state. When an external force is suddenly applied, the system eventually organizes into one of these two states with a time scale that diverges as a power law at a nonequilibrium transition. We propose a simple experiment to test for this transition in systems with random disorder.

  13. Random sequential adsorption of tetramers

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał

    2013-07-01

    Adsorption of a tetramer built of four identical spheres was studied numerically using the random sequential adsorption (RSA) algorithm. Tetramers were adsorbed on a two-dimensional, flat and homogeneous surface. Two different models of the adsorbate were investigated: a rhomboid and a square one; monomer centres were put on vertices of rhomboids and squares, respectively. Numerical simulations allow us to establish the maximal random coverage ratio as well as the available surface function (ASF), which is crucial for determining kinetics of the adsorption process. These results were compared with data obtained experimentally for KfrA plasmid adsorption. Additionally, the density autocorrelation function was measured.
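
    The RSA algorithm itself is compact. The sketch below implements it for single disks on a periodic square, a deliberately simplified stand-in for the tetramer shapes above (so the jamming coverage differs): trial positions are drawn uniformly and accepted only if they overlap no previously adsorbed particle, and coverage saturates as successful insertions become rare.

```python
import numpy as np

# RSA of unit-diameter disks on a periodic LxL square: a simplified
# stand-in for the tetramer model (shapes and coverage limit differ).
rng = np.random.default_rng(1)
L, r, attempts = 15.0, 0.5, 50_000
placed = np.empty((0, 2))

for _ in range(attempts):
    p = rng.uniform(0.0, L, 2)
    d = np.abs(placed - p)
    d = np.minimum(d, L - d)                 # periodic boundary distances
    if placed.size and np.any((d ** 2).sum(axis=1) < (2 * r) ** 2):
        continue                             # overlap: reject the trial
    placed = np.vstack([placed, p])

coverage = placed.shape[0] * np.pi * r ** 2 / L ** 2
print(f"adsorbed: {placed.shape[0]}, coverage: {coverage:.3f} "
      f"(disk RSA jams near 0.547)")
```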

  14. Rare attributes in finite universe: Hypotheses testing specification and exact randomized upper confidence bounds

    SciTech Connect

    Wright, T.

    1993-03-01

    When attributes are rare and few or none are observed in the selected sample from a finite universe, sampling statisticians are increasingly being challenged to use whatever methods are available to declare with high probability or confidence that the universe is near or completely attribute-free. This is especially true when the attribute is undesirable. Approximations such as those based on normal theory are frequently inadequate with rare attributes. For simple random sampling without replacement, an appropriate probability distribution for statistical inference is the hypergeometric distribution. But even with the hypergeometric distribution, the investigator is limited from making claims of attribute-free with high confidence unless the sample size is quite large using nonrandomized techniques. In the hypergeometric setting with rare attributes, exact randomized tests of hypothesis are investigated to determine the effect on power of how one specifies the null hypothesis. In particular, specifying the null hypothesis as zero attributes does not always yield maximum possible power. We also consider the hypothesis specification question under complex sampling designs including stratified random sampling and two-stage cluster sampling (one case involves random selection at first stage and another case involves probability proportional to size without replacement selection at first stage). Also under simple random sampling, this article defines and presents a simple algorithm for the construction of exact "randomized" upper confidence bounds which permit one to possibly report tighter bounds than those exact bounds obtained using "nonrandomized" methods.
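
    For orientation, the nonrandomized exact bound that the paper improves upon is easy to compute: given x attribute items observed in a without-replacement sample of n from a universe of N, the (1 - alpha) upper bound on the number of attribute items D is the largest D for which observing x or fewer remains plausible under the hypergeometric distribution. The sketch below is that standard construction, not the paper's randomized version.

```python
from scipy.stats import hypergeom

# Hedged sketch: the standard *nonrandomized* exact upper confidence
# bound (the paper's randomized bounds are tighter).
# N: universe size, n: sample size, x: attribute items observed.
def exact_upper_bound(N, n, x, alpha=0.05):
    # largest D whose chance of yielding <= x observations is >= alpha
    D = x
    while D + 1 <= N and hypergeom.cdf(x, N, D + 1, n) >= alpha:
        D += 1
    return D

# Example: nothing observed in a sample of 50 from a universe of 1000.
print(exact_upper_bound(N=1000, n=50, x=0))   # 95% upper bound on D
```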

  16. Intentional sampling by goal optimization with decoupling by stochastic perturbation

    NASA Astrophysics Data System (ADS)

    Lauretto, Marcelo De Souza; Nakano, Fábio; Pereira, Carlos Alberto de Bragança; Stern, Julio Michael

    2012-10-01

    Intentional sampling methods are non-probabilistic procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. Intentional sampling methods are intended for exploratory research or pilot studies where tight budget constraints preclude the use of traditional randomized representative sampling. The possibility of subsequently generalizing statistically from such deterministic samples to the general population has been the subject of long-standing argument and debate. Nevertheless, the intentional sampling techniques developed in this paper explore pragmatic strategies for overcoming some of the real or perceived shortcomings and limitations of intentional sampling in practical applications.

  17. Permutation/randomization-based inference for environmental data.

    PubMed

    Spicer, R Christopher; Gangloff, Harry J

    2016-03-01

    Quantitative inference from environmental contaminant data is almost exclusively from within the classic Neyman/Pearson (N/P) hypothesis-testing model, by which the mean serves as the fundamental quantitative measure, but which is constrained by random sampling and the assumption of normality in the data. Permutation/randomization-based inference originally forwarded by R. A. Fisher derives probability directly from the proportion of the occurrences of interest and is not dependent upon the distribution of data or random sampling. Foundationally, the underlying logic and the interpretation of the significance of the two models vary, but inference using either model can often be successfully applied. However, data examples from airborne environmental fungi (mold), asbestos in settled dust, and 1,2,3,4-tetrachlorobenzene (TeCB) in soil demonstrate potentially misleading inference using traditional N/P hypothesis testing based upon means/variance compared to permutation/randomization inference using differences in frequency of detection (Δfd). Bootstrapping and permutation testing, which are extensions of permutation/randomization, confirm calculated p values via Δfd and should be utilized to verify the appropriateness of a given data analysis by either model. PMID:26850713
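
    The mechanics are simple enough to sketch. Below, hypothetical detect/non-detect (1/0) records from two sampling locations are compared by Δfd, and the p value is the proportion of label permutations whose Δfd is at least as extreme as the observed one; no normality or random-sampling assumption enters.

```python
import numpy as np

# Hedged sketch of the permutation/randomization logic described above,
# applied to invented detect/non-detect (1/0) environmental data.
rng = np.random.default_rng(0)
site_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])   # detections at site A
site_b = np.array([0, 1, 0, 0, 1, 0, 0, 1])   # detections at site B

obs = site_a.mean() - site_b.mean()           # observed delta f_d
pooled = np.concatenate([site_a, site_b])

n_perm, count = 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                       # random relabeling
    delta = pooled[:site_a.size].mean() - pooled[site_a.size:].mean()
    if abs(delta) >= abs(obs):
        count += 1

print(f"observed delta f_d = {obs:.3f}, permutation p = {count / n_perm:.4f}")
```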

  18. Accurate Prediction of the Statistics of Repetitions in Random Sequences: A Case Study in Archaea Genomes

    PubMed Central

    Régnier, Mireille; Chassignet, Philippe

    2016-01-01

    Repetitive patterns in genomic sequences have great biological significance and also algorithmic implications. Analytic combinatorics allows one to derive formulas for the expected length of repetitions in a random sequence. Asymptotic results, which generalize previous work on a binary alphabet, are easily computable. Simulations on random sequences show their accuracy. As an application, the sample case of Archaea genomes illustrates how biological sequences may differ from random sequences. PMID:27376057

  20. Fluid sampling tool

    DOEpatents

    Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.

    2000-01-01

    A fluid-sampling tool for obtaining a fluid sample from a container. When used in combination with a rotatable drill, the tool bores a hole into a container wall, withdraws a fluid sample from the container, and seals the borehole. The tool collects fluid sample without exposing the operator or the environment to the fluid or to wall shavings from the container.

  1. Selecting a Sample

    ERIC Educational Resources Information Center

    Ritter, Lois A., Ed.; Sue, Valerie M., Ed.

    2007-01-01

    This chapter provides an overview of sampling methods that are appropriate for conducting online surveys. The authors review some of the basic concepts relevant to online survey sampling, present some probability and nonprobability techniques for selecting a sample, and briefly discuss sample size determination and nonresponse bias. Although some…

  2. A Mixed Effects Randomized Item Response Model

    ERIC Educational Resources Information Center

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before being observed, so that so-called randomized item responses are observed instead. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  3. Are quasar redshifts randomly distributed

    NASA Technical Reports Server (NTRS)

    Weymann, R. J.; Boroson, T.; Scargle, J. D.

    1978-01-01

    A statistical analysis of possible clumping (not periodicity) of emission line redshifts of QSO's shows the available data to be compatible with random fluctuations of a smooth, non-clumped distribution. This result is demonstrated with Monte Carlo simulations as well as with the Kolmogorov-Smirnov test. It is in complete disagreement with the analysis by Varshni, which is shown to be incorrect.

  4. Randomized Item Response Theory Models

    ERIC Educational Resources Information Center

    Fox, Jean-Paul

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model. The RR…

  5. Common Randomness Principles of Secrecy

    ERIC Educational Resources Information Center

    Tyagi, Himanshu

    2013-01-01

    This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…

  6. Entropy of random entangling surfaces

    NASA Astrophysics Data System (ADS)

    Solodukhin, Sergey N.

    2012-09-01

    We consider the situation when a globally defined four-dimensional field system is separated into two entangled sub-systems by a dynamical (random) two-dimensional surface. The reduced density matrix averaged over an ensemble of random surfaces of fixed area and the corresponding average entropy are introduced. The average entanglement entropy is analyzed for a generic conformal field theory in four dimensions. Two important particular cases are considered. In the first, both the intrinsic metric on the entangling surface and the spacetime metric are fluctuating. An important example of this type is when the entangling surface is a black hole horizon, the fluctuations of which necessarily cause fluctuations in the spacetime geometry. In the second case, the spacetime is considered to be fixed. The detailed analysis is carried out for random entangling surfaces embedded in flat Minkowski spacetime. In all cases, the problem reduces to an effectively two-dimensional problem of random surfaces which can be treated by means of the well-known conformal methods. Focusing on the logarithmic terms in the entropy, we predict the appearance of a new ln ln(A) term. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday, devoted to 'Applications of zeta functions and other spectral functions in mathematics and physics'.

  7. Garnet Random-Access Memory

    NASA Technical Reports Server (NTRS)

    Katti, Romney R.

    1995-01-01

    Random-access memory (RAM) devices of proposed type exploit magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout are used. Provides nonvolatile storage and resists damage by ionizing radiation. Because of basic architecture and pinout requirements, most likely useful as small-capacity memory devices.

  8. Undecidability Theorem and Quantum Randomness

    NASA Astrophysics Data System (ADS)

    Berezin, Alexander A.

    2005-04-01

    As scientific folklore has it, Kurt Gödel was once annoyed by the question of whether he saw any link between his Undecidability Theorem (UT) and the Uncertainty Relationship. His reaction, however, may indicate that he felt such a hidden link could indeed exist but was unable to formulate it clearly. The informational version of UT (G. J. Chaitin) states the impossibility of ruling out algorithmic compressibility of an arbitrary digital string. Thus, (mathematical) randomness can only be disproven, not proven. Going from mathematical to physical (mainly quantum) randomness, we encounter seemingly random acts of radioactive decay of isotopes (such as C14), emission from excited atoms, tunneling effects, etc. However, our notion of quantum randomness (QR) may well hit a similarly formidable wall in a physical version of UT, leading to seemingly bizarre ideas such as the Everett many-worlds model (D. Deutsch) or backward causation (J. A. Wheeler). A resolution may lie in admitting some form of Aristotelian final causation (AFC) as an ultimate foundational principle (G. W. Leibniz) connecting purely mathematical (Platonic) grounding aspects with their physically observable consequences, such as the plethora of QR effects. Thus, what we interpret as QR may eventually be a manifestation of AFC in which UT serves as the delivery vehicle. Another example of the UT/QR/AFC connection is the question of the identity (indistinguishability) of elementary particles (are all electrons exactly the same, or just approximately so to a very high degree?).

  9. Plated wire random access memories

    NASA Technical Reports Server (NTRS)

    Gouldin, L. D.

    1975-01-01

    A program was conducted to construct 4096-word by 18-bit random access, NDRO-plated wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.

  10. Models of random graph hierarchies

    NASA Astrophysics Data System (ADS)

    Paluch, Robert; Suchecki, Krzysztof; Hołyst, Janusz A.

    2015-10-01

    We introduce two models of inclusion hierarchies: random graph hierarchy (RGH) and limited random graph hierarchy (LRGH). In both models a set of nodes at a given hierarchy level is connected randomly, as in the Erdős-Rényi random graph, with a fixed average degree equal to a system parameter c. Clusters of the resulting network are treated as nodes at the next hierarchy level and they are connected again at this level, and so on, until the process cannot continue. In the RGH model we use all clusters, including those of size 1, when building the next hierarchy level, while in the LRGH model clusters of size 1 stop participating in further steps. We find that in both models the number of nodes at a given hierarchy level h decreases approximately exponentially with h. The height of the hierarchy H, i.e. the number of all hierarchy levels, increases logarithmically with the system size N, i.e. with the number of nodes at the first level. The height H decreases monotonically with the connectivity parameter c in the RGH model and it reaches a maximum for a certain c_max in the LRGH model. The distribution of separate cluster sizes in the LRGH model is a power law with an exponent of about -1.25. The above results follow from approximate analytical calculations and have been confirmed by numerical simulations.
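
    The construction is easy to reproduce. The sketch below is a minimal reading of the RGH rule (using networkx; the stopping guard is an implementation choice, not taken from the paper): repeatedly wire n nodes with edge probability chosen so the average degree stays at c, collapse each connected component to a node, and count levels; the printed heights grow roughly logarithmically with N, as described.

```python
import networkx as nx
import numpy as np

# Hedged sketch of the RGH construction described above.
def rgh_height(n, c, seed=0):
    rng = np.random.default_rng(seed)
    height = 0
    while n > 1:
        p = min(1.0, c / (n - 1))          # keeps average degree at c
        G = nx.fast_gnp_random_graph(n, p, seed=int(rng.integers(1 << 30)))
        clusters = nx.number_connected_components(G)
        if clusters == n:                  # no edges: process cannot continue
            break
        n, height = clusters, height + 1
    return height

for N in (100, 1000, 10000):
    print(N, rgh_height(N, c=1.5))         # height grows ~ log(N)
```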

  11. Universality in random quantum networks

    NASA Astrophysics Data System (ADS)

    Novotný, Jaroslav; Alber, Gernot; Jex, Igor

    2015-12-01

    Networks constitute efficient tools for assessing universal features of complex systems. In physical contexts, classical as well as quantum networks are used to describe a wide range of phenomena, such as phase transitions, intricate aspects of many-body quantum systems, or even characteristic features of a future quantum internet. Random quantum networks and their associated directed graphs are employed for capturing statistically dominant features of complex quantum systems. Here, we develop an efficient iterative method capable of evaluating the probability of a graph being strongly connected. It is proven that random directed graphs with constant edge-establishing probability are typically strongly connected, i.e., any ordered pair of vertices is connected by a directed path. This typical topological property of directed random graphs is exploited to demonstrate universal features of the asymptotic evolution of large random qubit networks. These results are independent of our knowledge of the details of the network topology. These findings suggest that other highly complex networks, such as a future quantum internet, may also exhibit similar universal properties.

  12. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all. PMID:21405659
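
    The square-root rule above is simple to check numerically. In the sketch below (a 1-D Gaussian target with unit sd and a hypothetical closeness threshold), the square root of the target density is itself a Gaussian with sd √2, and searching from that distribution needs fewer trials on average than searching from the target distribution itself; the trial cap merely guards against the heavy tail when the target lands far out.

```python
import numpy as np

# Monte Carlo check of the square-root rule for a 1-D Gaussian target.
rng = np.random.default_rng(2)
eps, n_runs, cap = 0.05, 1000, 200_000   # assumed closeness threshold

def mean_trials(search_sd):
    total = 0
    for _ in range(n_runs):
        target = rng.normal(0.0, 1.0)    # hidden location
        trials = 1
        while abs(rng.normal(0.0, search_sd) - target) > eps and trials < cap:
            trials += 1
        total += trials
    return total / n_runs

print("search sd = 1.0 (target dist) :", mean_trials(1.0))
print("search sd = sqrt(2) (sqrt rule):", mean_trials(np.sqrt(2)))
```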

  13. Entanglement generation of nearly random operators.

    PubMed

    Weinstein, Yaakov S; Hellberg, C Stephen

    2005-07-15

    We study the entanglement generation of operators whose statistical properties approach those of random matrices but are restricted in some way. These include interpolating ensemble matrices, where the interval of the independent random parameters are restricted, pseudorandom operators, where there are far fewer random parameters than required for random matrices, and quantum chaotic evolution. Restricting randomness in different ways allows us to probe connections between entanglement and randomness. We comment on which properties affect entanglement generation and discuss ways of efficiently producing random states on a quantum computer. PMID:16090726

  14. Random trinomial tree models and vanilla options

    NASA Astrophysics Data System (ADS)

    Ganikhodjaev, Nasir; Bayram, Kamola

    2013-09-01

    In this paper we introduce and study the random trinomial model. The usual trinomial model is prescribed by a triple of numbers (u, d, m). We call the triple (u, d, m) an environment of the trinomial model. A triple (Un, Dn, Mn), where {Un}, {Dn} and {Mn} are sequences of independent, identically distributed random variables with 0 < Dn < 1 < Un and Mn = 1 for all n, is called a random environment, and a trinomial tree model with a random environment is called a random trinomial model. The random trinomial model is considered to produce more accurate results than the random binomial model or the usual trinomial model.

  15. Heuristic-biased stochastic sampling

    SciTech Connect

    Bresina, J.L.

    1996-12-31

    This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, it can be easy to outperform greedy search.
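
    A minimal sketch of the selection step, with hypothetical bias functions and candidate values (the paper's scheduling application is not reproduced): candidates are ranked by the heuristic, and one is drawn with probability proportional to a bias applied to its rank, so greedy and uniform-random search fall out as the two extreme bias choices.

```python
import random

# Hedged sketch of the HBSS selection step described above.
def hbss_choose(candidates, heuristic, bias, rng=random):
    ranked = sorted(candidates, key=heuristic)          # rank 1 = best
    weights = [bias(r) for r in range(1, len(ranked) + 1)]
    return rng.choices(ranked, weights=weights, k=1)[0]

exp_bias = lambda r: 2.0 ** (-r)    # strong adherence to the heuristic
uni_bias = lambda r: 1.0            # degenerates to pure random search

cands = [3.2, 1.1, 4.7, 2.0, 0.9]   # hypothetical heuristic costs
print(hbss_choose(cands, heuristic=lambda x: x, bias=exp_bias))
```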

  16. Multi-stage sampling in genetic epidemiology.

    PubMed

    Whittemore, A S; Halpern, J

    When data are expensive to collect, it can be cost-efficient to sample in two or more stages. In the first stage a simple random sample is drawn and then stratified according to some easily measured attribute. In each subsequent stage a random subset of previously selected units is sampled for more detailed observation, with a unit's sampling probability determined by its attributes as observed in the previous stages. These designs are useful in many medical studies; here we use them in genetic epidemiology. Two genetic studies illustrate the strengths and limitations of the approach. The first study evaluates nuclear and mitochondrial DNA in U.S. blacks. The goal is to estimate the relative contributions of white male genes and white female genes to the gene pool of African-Americans. This example shows that the Horvitz-Thompson estimators proposed for multi-stage designs can be inefficient, particularly when used with unnecessary stratification. The second example is a multi-stage study of familial prostate cancer. The goal is to gather pedigrees, blood samples and archived tissue for segregation and linkage analysis of familial prostate cancer data by first obtaining crude family data from prostate cancer cases and cancer-free controls. This second example shows the gains in efficiency from multi-stage sampling when the individual likelihood or quasilikelihood scores vary substantially across strata. PMID:9004389
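
    A toy version of the design logic, with all population values and sampling rates invented for illustration: a first-stage simple random sample is stratified by a cheap attribute, the strata are subsampled at different rates, and the estimate reweights each retained unit by its inverse second-stage selection probability (a Horvitz-Thompson-type estimator, which the first study above shows can be inefficient if the stratification is unnecessary).

```python
import numpy as np

# Hedged sketch of a two-stage design with inverse-probability weights.
rng = np.random.default_rng(3)
pop = rng.normal(50, 10, 100_000)       # hypothetical population values
attr = pop > 55                         # cheap screening attribute

stage1 = rng.choice(pop.size, 2000, replace=False)   # first-stage SRS
rates = {True: 0.8, False: 0.2}         # oversample one stratum

idx, w = [], []
for i in stage1:
    r = rates[bool(attr[i])]
    if rng.random() < r:                # second-stage subsampling
        idx.append(i)
        w.append(1.0 / r)               # inverse selection probability

est = np.average(pop[idx], weights=w)   # weighted (HT-type) mean
print(f"estimate {est:.2f} vs truth {pop.mean():.2f}")
```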

  17. Accuracy Sampling Design Bias on Coarse Spatial Resolution Land Cover Data in the Great Lakes Region (United States and Canada)

    EPA Science Inventory

    A number of articles have investigated the impact of sampling design on remotely sensed land-cover accuracy estimates. Gong and Howarth (1990) found significant differences in Kappa accuracy values when comparing pure-pixel sampling, stratified random sampling, and stratified sys...

  18. Randomized Trial of a Gatekeeper Program for Suicide Prevention: 1-Year Impact on Secondary School Staff

    ERIC Educational Resources Information Center

    Wyman, Peter A.; Brown, C. Hendricks; Inman, Jeff; Cross, Wendi; Schmeelk-Cone, Karen; Guo, Jing; Pena, Juan B.

    2008-01-01

    Gatekeeper-training programs, designed to increase identification and referral of suicidal individuals, are widespread but largely untested. A group-based randomized trial with 32 schools examined impact of Question, Persuade, Refer (QPR) training on a stratified random sample of 249 staff with 1-year average follow-up. To test QPR impact, the…

  19. THE EFFECT OF RANDOM VARIATIONS OF DIFFERENT TYPES ON POPULATION GROWTH

    PubMed Central

    Levins, R.

    1969-01-01

    The probability distributions of population size are derived for populations living in randomly varying environments for both density-dependent and density-independent population growth. The effects of random variation in the rate of increase, the carrying capacity, and sampling variation in numbers are examined. PMID:5256407

  20. The Probability of Small Schedule Values and Preference for Random-Interval Schedules

    ERIC Educational Resources Information Center

    Soreth, Michelle Ennis; Hineline, Philip N.

    2009-01-01

    Preference for working on variable schedules and temporal discrimination were simultaneously examined in two experiments using a discrete-trial, concurrent-chains arrangement with fixed interval (FI) and random interval (RI) terminal links. The random schedule was generated by first sampling a probability distribution after the programmed delay to…

  1. Randomized Controlled Trial of a Preventive Intervention for Perinatal Depression in High-Risk Latinas

    ERIC Educational Resources Information Center

    Le, Huynh-Nhu; Perry, Deborah F.; Stuart, Elizabeth A.

    2011-01-01

    Objective: A randomized controlled trial was conducted to evaluate the efficacy of a cognitive-behavioral (CBT) intervention to prevent perinatal depression in high-risk Latinas. Method: A sample of 217 participants, predominantly low-income Central American immigrants who met demographic and depression risk criteria, were randomized into usual…

  2. Final Reading Outcomes of the National Randomized Field Trial of Success for All

    ERIC Educational Resources Information Center

    Borman, Geoffrey D.; Slavin, Robert E.; Cheung, Alan C. K.; Chamberlain, Anne M.; Madden, Nancy A.; Chambers, Bette

    2007-01-01

    Using a cluster randomization design, schools were randomly assigned to implement Success for All, a comprehensive reading reform model, or control methods. This article reports final literacy outcomes for a 3-year longitudinal sample of children who participated in the treatment or control condition from kindergarten through second grade and a…

  3. Data-Division-Specific Robustness and Power of Randomization Tests for ABAB Designs

    ERIC Educational Resources Information Center

    Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick

    2010-01-01

    This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…

  4. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  5. Random amplified polymorphic DNA analysis of genetically modified organisms.

    PubMed

    Yoke-Kqueen, Cheah; Radu, Son

    2006-12-15

    Randomly amplified polymorphic DNA (RAPD) analysis was used to analyze 78 samples comprising certified reference materials (soya and maize powder), raw seeds (soybean and maize), processed food, and animal feed. A combination assay of two arbitrary primers in the RAPD analysis made it possible to distinguish genetically modified organism (GMO) reference materials from the samples tested. Dendrogram analysis of the RAPD data revealed 13 clusters at 45% similarity. RAPD analysis showed that the maize and soybean samples clustered separately, as did the GMO and non-GMO products. PMID:16860900

  6. Non-random patterns in viral diversity

    PubMed Central

    Anthony, Simon J.; Islam, Ariful; Johnson, Christine; Navarrete-Macias, Isamara; Liang, Eliza; Jain, Komal; Hitchens, Peta L.; Che, Xiaoyu; Soloyvov, Alexander; Hicks, Allison L.; Ojeda-Flores, Rafael; Zambrana-Torrelio, Carlos; Ulrich, Werner; Rostal, Melinda K.; Petrosov, Alexandra; Garcia, Joel; Haider, Najmul; Wolfe, Nathan; Goldstein, Tracey; Morse, Stephen S.; Rahman, Mahmudur; Epstein, Jonathan H.; Mazet, Jonna K.; Daszak, Peter; Lipkin, W. Ian

    2015-01-01

    It is currently unclear whether changes in viral communities will ever be predictable. Here we investigate whether viral communities in wildlife are inherently structured (inferring predictability) by looking at whether communities are assembled through deterministic (often predictable) or stochastic (not predictable) processes. We sample macaque faeces across nine sites in Bangladesh and use consensus PCR and sequencing to discover 184 viruses from 14 viral families. We then use network modelling and statistical null-hypothesis testing to show the presence of non-random deterministic patterns at different scales, between sites and within individuals. We show that the effects of determinism are not absolute however, as stochastic patterns are also observed. In showing that determinism is an important process in viral community assembly we conclude that it should be possible to forecast changes to some portion of a viral community, however there will always be some portion for which prediction will be unlikely. PMID:26391192

  7. Transcranial cerebral oximetry in random normal subjects

    NASA Astrophysics Data System (ADS)

    Misra, Mukesh; Stark, Jennifer; Dujovny, Manuel; Alp, M. Serdar; Widman, Ronald; Ausman, James I.

    1997-08-01

    Near infrared optical spectroscopy is becoming a useful method for monitoring regional cerebral oxygenation status. The method is simple, reliable and noninvasive, and the information which it provides is clinically significant in managing a growing number of neurological ailments. Use of this technique has been described previously by numerous authors. In the present study, regional cerebral oxygen saturation was measured at rest in 94 subjects randomly selected from a diverse population of individuals. This sample consisted of 38 males and 65 females, with ages ranging from 18 to 70. There were 68 light-skinned individuals and 35 with darker skin comprising various ethnic and cultural backgrounds. Mean regional cerebral hemoglobin oxygen saturation was recorded as 67.14 plus or minus 8.84%. The association of mean regional cerebral hemoglobin oxygen saturation with subjects' age, race, sex and skin color is examined.

  8. Discrete Signal Processing on Graphs: Sampling Theory

    NASA Astrophysics Data System (ADS)

    Chen, Siheng; Varma, Rohan; Sandryhaila, Aliaksei; Kovacevic, Jelena

    2015-12-01

    We propose a sampling theory for signals that are supported on either directed or undirected graphs. The theory follows the same paradigm as classical sampling theory. We show that perfect recovery is possible for graph signals bandlimited under the graph Fourier transform. The sampled signal coefficients form a new graph signal, whose corresponding graph structure preserves the first-order difference of the original graph signal. For general graphs, an optimal sampling operator based on experimentally designed sampling is proposed to guarantee perfect recovery and robustness to noise; for graphs whose graph Fourier transforms are frames with maximal robustness to erasures as well as for Erdős-Rényi graphs, random sampling leads to perfect recovery with high probability. We further establish the connection to the sampling theory of finite discrete-time signal processing and previous work on signal recovery on graphs. To handle full-band graph signals, we propose a graph filter bank based on sampling theory on graphs. Finally, we apply the proposed sampling theory to semi-supervised classification on online blogs and digit images, where we achieve similar or better performance with fewer labeled samples compared to previous work.
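
    The core claim, perfect recovery of a bandlimited graph signal from enough vertex samples, fits in a few lines. The toy sketch below (random undirected graph, Laplacian eigenvectors as the graph Fourier basis, random vertex sampling; all sizes hypothetical) recovers a K-bandlimited signal by least squares on the sampled vertices.

```python
import numpy as np

# Hedged toy version of the recovery result described above.
rng = np.random.default_rng(4)
n, K = 30, 5

A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                    # random undirected graph
L = np.diag(A.sum(1)) - A                         # graph Laplacian
_, U = np.linalg.eigh(L)                          # graph Fourier basis

x_hat = np.zeros(n); x_hat[:K] = rng.normal(size=K)
x = U @ x_hat                                     # K-bandlimited signal

m = rng.choice(n, 2 * K, replace=False)           # random sampled vertices
coef, *_ = np.linalg.lstsq(U[m, :K], x[m], rcond=None)
x_rec = U[:, :K] @ coef                           # least-squares recovery

print("max reconstruction error:", np.abs(x - x_rec).max())
```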

  9. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pecans in the... sample shall consist of 100 pecans. The individual sample shall be drawn at random from a...

  10. 7 CFR 51.1406 - Sample for grade or size determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PRODUCTS 1 2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pecans in the... sample shall consist of 100 pecans. The individual sample shall be drawn at random from a...

  11. Sampling in epidemiological research: issues, hazards and pitfalls

    PubMed Central

    Tyrer, Stephen; Heyman, Bob

    2016-01-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research. PMID:27087985

  12. Random density matrices versus random evolution of open system

    NASA Astrophysics Data System (ADS)

    Pineda, Carlos; Seligman, Thomas H.

    2015-10-01

    We present and compare two families of ensembles of random density matrices. The first, static ensemble is obtained by foliating an unbiased ensemble of density matrices. As the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic ensemble is inspired by random matrix models for decoherence, where one evolves a separable pure state with a random Hamiltonian until a given value of purity in the central system is achieved. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system, and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes on the degeneracies of the density matrices. For moderate and strong interactions good agreement between the static and the dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion begins by recalling similar considerations for scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.

  13. Apollo 14 rock samples

    NASA Technical Reports Server (NTRS)

    Carlson, I. C.

    1978-01-01

    Petrographic descriptions of all Apollo 14 samples larger than 1 cm in any dimension are presented. The sample description format consists of: (1) an introductory section which includes information on lunar sample location, orientation, and return containers; (2) a section on physical characteristics, which contains the sample mass, dimensions, and a brief description; (3) surface features, including zap pits, cavities, and fractures as seen in binocular view; (4) a petrographic description, consisting of a binocular description and, if possible, a thin section description; and (5) a discussion of literature relevant to sample petrology, included for samples which have previously been examined by the scientific community.

  14. Mars sample return - Science

    NASA Technical Reports Server (NTRS)

    Blanchard, Douglas P.

    1988-01-01

    The possible scientific goals of a Mars sample return mission are reviewed, including the value of samples and the selection of sampling sites. The fundamental questions about Mars which could be studied using samples are examined, including planetary formation, differentiation, volcanism and petrogenesis, weathering, and erosion. Scenarios are presented for sample acquisition and analysis. Possible sampling methods and tools are discussed, including drilling techniques, types of rovers, and processing instruments. In addition, the possibility of aerocapture out of elliptical or circular orbit is considered.

  15. Stardust Sample: Investigator's Guidebook

    NASA Technical Reports Server (NTRS)

    Allen, Carl

    2006-01-01

    In January 2006, the Stardust spacecraft returned the first in situ collection of samples from a comet, and the first samples of contemporary interstellar dust. Stardust is the first US sample return mission from a planetary body since Apollo, and the first ever from beyond the moon. This handbook is a basic reference source for allocation procedures and policies for Stardust samples. These samples consist of particles and particle residues in aerogel collectors, in aluminum foil, and in spacecraft components. Contamination control samples and unflown collection media are also available for allocation.

  16. Rain sampling device

    DOEpatents

    Nelson, Danny A.; Tomich, Stanley D.; Glover, Donald W.; Allen, Errol V.; Hales, Jeremy M.; Dana, Marshall T.

    1991-01-01

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.

  17. Rain sampling device

    DOEpatents

    Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.

    1991-05-14

    The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.

  18. Population-Sample Regression in the Estimation of Population Proportions

    ERIC Educational Resources Information Center

    Weitzman, R. A.

    2006-01-01

    Focusing on a single sample obtained randomly with replacement from a single population, this article examines the regression of population on sample proportions and develops an unbiased estimator of the square of the correlation between them. This estimator turns out to be the regression coefficient. Use of the squared-correlation estimator as a…

  19. An efficient sampling protocol for sagebrush/grassland monitoring

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Monitoring the health and condition of rangeland vegetation can be very time consuming and costly. An efficient but rigorous sampling protocol is needed for monitoring sagebrush/grassland vegetation. A randomized sampling protocol was presented for geo-referenced, nadir photographs acquired using...

  20. Sampling Variability and Axioms of Classical Test Theory

    ERIC Educational Resources Information Center

    Zimmerman, Donald W.

    2011-01-01

    Many well-known equations in classical test theory are mathematical identities in populations of individuals but not in random samples from those populations. First, test scores are subject to the same sampling error that is familiar in statistical estimation and hypothesis testing. Second, the assumptions made in derivation of formulas in test…

  1. Rational Variability in Children's Causal Inferences: The Sampling Hypothesis

    ERIC Educational Resources Information Center

    Denison, Stephanie; Bonawitz, Elizabeth; Gopnik, Alison; Griffiths, Thomas L.

    2013-01-01

    We present a proposal--"The Sampling Hypothesis"--suggesting that the variability in young children's responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with…

  2. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
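
    The selection-bias mechanism described above can be made concrete with a small simulation, in the spirit of (but much simpler than) the landscape-based approach: prevalence is concentrated in one habitat, harvest-based convenience sampling favors that habitat, and the convenience estimate drifts away from the truth while a simple random sample does not. All numbers are invented for illustration.

```python
import numpy as np

# Hedged illustration of convenience-sample bias in prevalence surveys.
rng = np.random.default_rng(5)
n = 100_000
near_road = rng.random(n) < 0.3                   # hypothetical habitat split
p_disease = np.where(near_road, 0.10, 0.02)       # disease clusters near roads
sick = rng.random(n) < p_disease

srs = rng.choice(n, 500, replace=False)           # probability sample

w = np.where(near_road, 0.9, 0.1)                 # harvest easier near roads
conv = rng.choice(n, 500, replace=False, p=w / w.sum())

print(f"true prevalence      : {sick.mean():.3f}")
print(f"SRS estimate         : {sick[srs].mean():.3f}")
print(f"convenience estimate : {sick[conv].mean():.3f}")
```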

  3. An empirical coverage test for the g-sample problem

    USGS Publications Warehouse

    Orlowski, L.A.; Grundy, W.D.; Mielke, P.W., Jr.

    1991-01-01

    A nonparametric g-sample empirical coverage test has recently been developed for univariate continuous data. It is based upon the empirical coverages which are spacings of multiple random samples. The test is capable of detecting any distributional differences which may exist among the parent populations, without additional assumptions beyond randomness and continuity. The test can be effective with the limited and/or unequal sample sizes most often encountered in geologic studies. A computer program for implementing this procedure, G-SECT 1, is available. ?? 1991 International Association for Mathematical Geology.

  4. Soil Sampling Techniques For Alabama Grain Fields

    NASA Technical Reports Server (NTRS)

    Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.

    2003-01-01

    Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling), and 2) six composited cores collected randomly from a 3x3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
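
    The zone-evaluation criterion is easy to state in code: compare the coefficient of variation (CV) of a soil-test value over the whole field with the CVs inside each zone. The sketch below uses simulated values, not the Alabama measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    zone = rng.integers(0, 3, size=300)            # hypothetical zone labels
    zone_means = np.array([20.0, 35.0, 50.0])      # e.g., soil-test P levels
    values = zone_means[zone] + rng.normal(0, 4, size=zone.size)

    def cv(x):
        return 100 * np.std(x, ddof=1) / np.mean(x)

    print("whole-field CV (%):", round(cv(values), 1))
    for z in range(3):                             # zones "group" the variability
        print(f"zone {z} CV (%):", round(cv(values[zone == z]), 1))
    ```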

  5. A comparison of two sampling methods for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tarantola, Stefano; Becker, William; Zeitz, Dirk

    2012-05-01

    We compare the convergence properties of two quasi-random sampling designs, Sobol' quasi-Monte Carlo and Latin supercube sampling, in variance-based global sensitivity analysis. We use the non-monotonic V-function of Sobol' as the base case study and compare the performance of both sampling strategies against analytical values at increasing sample size and dimensionality. The results indicate that the Sobol' design performs better in almost all cases investigated here. This, coupled with the fact that effective Latin supercube sampling requires a priori knowledge of the interaction properties of the function, leads us to recommend Sobol' sampling in most practical cases.
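
    The V-function (Sobol's g-function) has a known mean of 1 over the unit hypercube, which makes the comparison easy to replicate in miniature. The sketch below contrasts plain Monte Carlo with scipy's Sobol' generator; Latin supercube sampling is omitted, and the coefficients are arbitrary choices.

    ```python
    import numpy as np
    from scipy.stats import qmc

    a = np.array([0.0, 1.0, 4.5])                  # arbitrary g-function coefficients

    def g(x):
        return np.prod((np.abs(4 * x - 2) + a) / (1 + a), axis=1)

    rng = np.random.default_rng(4)
    n = 2**12                                      # Sobol' prefers powers of two
    mc = g(rng.random((n, a.size)))
    qmc_pts = qmc.Sobol(d=a.size, scramble=True, seed=4).random(n)

    print("plain MC error :", abs(mc.mean() - 1.0))
    print("Sobol' error   :", abs(g(qmc_pts).mean() - 1.0))
    ```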

  6. ENVIRONMENTAL SAMPLING AND MONITORING

    EPA Science Inventory

    Methods of probability sampling provide a rigorous protocol by which scientifically reliable information on environmental issues may be obtained. The authors review fundamentals of probability sampling from the perspective of monitoring environmental resources. They first describe basi...

  7. GROUND WATER SAMPLING ISSUES

    EPA Science Inventory

    Obtaining representative ground water samples is important for site assessment and remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...

  8. Superposition Enhanced Nested Sampling

    NASA Astrophysics Data System (ADS)

    Martiniani, Stefano; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan

    2014-07-01

    The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: the probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of efficiently sampling the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
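
    For orientation, here is a minimal, generic nested-sampling loop (plain nested sampling, not the superposition-enhanced method of the paper): K live points, the worst replaced each step by a prior draw under the likelihood constraint, with the evidence accumulated from shrinking prior-volume estimates. The problem and parameters are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    K, steps, side = 100, 600, 5.0                 # live points, iterations, prior box

    def loglike(theta):                            # unit 2-D Gaussian likelihood
        return -0.5 * np.sum(theta**2, axis=-1) - np.log(2 * np.pi)

    live = rng.uniform(-side, side, (K, 2))        # uniform prior on the box
    logL = loglike(live)
    logZ, logX_prev = -np.inf, 0.0
    for k in range(1, steps + 1):
        worst = int(np.argmin(logL))
        logX = -k / K                              # expected log prior volume left
        logw = np.log(np.exp(logX_prev) - np.exp(logX))
        logZ = np.logaddexp(logZ, logL[worst] + logw)
        logX_prev = logX
        while True:                                # rejection sampling is fine in 2-D
            cand = rng.uniform(-side, side, 2)
            if loglike(cand) > logL[worst]:
                live[worst], logL[worst] = cand, loglike(cand)
                break

    logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(logL))) + logX_prev)
    print("log-evidence:", logZ, " exact:", -np.log((2 * side) ** 2))
    ```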

  9. GEOTHERMAL EFFLUENT SAMPLING WORKSHOP

    EPA Science Inventory

    This report outlines the major recommendations resulting from a workshop to identify gaps in existing geothermal effluent sampling methodologies, define needed research to fill those gaps, and recommend strategies to lead to a standardized sampling methodology.

  10. Optimal randomized scheduling by replacement

    SciTech Connect

    Saias, I.

    1996-05-01

    In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation, and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and are encountered whenever one proves the optimality of a randomized algorithm in parallel or distributed computation.
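
    The setting is easy to simulate. The sketch below pits a deliberately naive scheduler (a fresh uniform-random n-set after every fault report) against a random adversary and counts faults until a reconfiguration includes an already-faulty processor; it illustrates the problem setup only, not the paper's optimal protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def faults_survived(p=10, n=3, trials=2000):
        totals = []
        for _ in range(trials):
            faulty, count = set(), 0
            running = set(rng.choice(p, n, replace=False))
            while True:
                count += 1                                      # a running processor faults
                faulty.add(rng.choice(sorted(running)))
                running = set(rng.choice(p, n, replace=False))  # blind reconfiguration
                if running & faulty:                            # crash: faulty processor chosen
                    break
            totals.append(count)
        return np.mean(totals)

    print("mean faults sustained (naive scheduler):", faults_survived())
    ```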

  11. Percolation on correlated random networks

    NASA Astrophysics Data System (ADS)

    Agliari, E.; Cioli, C.; Guadagnini, E.

    2011-09-01

    We consider a class of random, weighted networks, obtained through a redefinition of patterns in a Hopfield-like model, and, by performing percolation processes, we obtain information about the topology and resilience properties of the networks themselves. Given the weighted nature of the graphs, different kinds of bond percolation can be studied: stochastic (deleting links randomly) and deterministic (deleting links based on weight rank), each mimicking a different physical process. The evolution of the network differs accordingly, as evidenced by the behavior of the largest component size and of the distribution of cluster sizes. In particular, we find that weak ties are crucial for keeping the graph connected and that, when they are the most prone to failure, the giant component typically shrinks without abruptly breaking apart; these results have recently been observed in several kinds of social networks.
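
    A small stand-in experiment conveys the two deletion protocols: build a community-structured graph whose sparse inter-community links are the "weak ties" (low weights), then delete edges uniformly at random versus weakest-first and watch the giant component. networkx and all parameters below are assumptions, not the Hopfield-derived construction of the paper.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(7)
    # 20 communities of 20 nodes; rare inter-community "weak ties" act as bridges.
    G = nx.planted_partition_graph(20, 20, 0.3, 0.002, seed=7)
    for u, v in G.edges:
        same = (u // 20) == (v // 20)
        G[u][v]["weight"] = rng.uniform(0.8, 1.0) if same else rng.uniform(0.0, 0.2)

    def giant_after(G, order, fraction):
        H = G.copy()
        for u, v in order[: int(fraction * len(order))]:
            H.remove_edge(u, v)
        return max(len(c) for c in nx.connected_components(H))

    edges = list(G.edges)
    random_order = [edges[i] for i in rng.permutation(len(edges))]
    weak_first = sorted(edges, key=lambda e: G[e[0]][e[1]]["weight"])

    for f in (0.05, 0.10, 0.15):
        print(f"removed {f:.0%}: giant (random) = {giant_after(G, random_order, f)}, "
              f"giant (weak first) = {giant_after(G, weak_first, f)}")
    ```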

  12. Random modelling of contagious diseases.

    PubMed

    Demongeot, J; Hansen, O; Hessami, H; Jannot, A S; Mintsa, J; Rachdi, M; Taramasco, C

    2013-03-01

    Modelling contagious diseases needs to include mechanistic knowledge about contacts between hosts and pathogens that is as specific as possible, e.g., by incorporating in the model information about the social networks through which the disease spreads. The unknown part of the contact mechanism can be modelled using a stochastic approach. For that purpose, we revisit SIR models, first by introducing a microscopic stochastic version of the contacts between individuals of different populations (namely Susceptible, Infective and Recovering), then by adding a random perturbation in the vicinity of the endemic fixed point of the SIR model, and eventually by introducing definitions of various types of random social networks. As an example of application to contagious diseases we consider HIV, and we show that an individual-based modelling (IBM) micro-simulation can reproduce the current stable incidence of the HIV epidemic in a population of HIV-positive men who have sex with men (MSM). PMID:23525763
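
    A minimal individual-based stochastic SIR step, in the spirit of the micro-simulations discussed (rates and population size are illustrative, not the paper's HIV calibration):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N, beta, gamma = 1000, 0.3, 0.1           # illustrative contact and recovery rates
    state = np.zeros(N, dtype=int)            # 0 = S, 1 = I, 2 = R
    state[rng.choice(N, 5, replace=False)] = 1

    history = []
    for day in range(200):
        s, i = state == 0, state == 1
        p_inf = 1 - np.exp(-beta * i.sum() / N)   # per-susceptible daily infection prob
        state[s & (rng.random(N) < p_inf)] = 1    # new infections
        state[i & (rng.random(N) < gamma)] = 2    # recoveries
        history.append(int(i.sum()))

    print("peak number infected:", max(history))
    ```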

  13. Image segmentation using random features

    NASA Astrophysics Data System (ADS)

    Bull, Geoff; Gao, Junbin; Antolovich, Michael

    2014-01-01

    This paper presents a novel algorithm for selecting random features via compressed sensing to improve the performance of Normalized Cuts in image segmentation. Normalized Cuts is a clustering algorithm that has been widely applied to segmenting images, using features such as brightness, intervening contours and Gabor filter responses. Some drawbacks of Normalized Cuts are that computation times and memory usage can be excessive, and the obtained segmentations are often poor. This paper addresses the need to improve the processing time of Normalized Cuts while improving the segmentations. A significant proportion of the time in calculating Normalized Cuts is spent computing an affinity matrix. A new algorithm has been developed that selects random features using compressed sensing techniques to reduce the computation needed for the affinity matrix. The new algorithm, when compared to the standard implementation of Normalized Cuts for segmenting images from the BSDS500, produces better segmentations in significantly less time.
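
    The core trick can be sketched independently of the full Normalized Cuts pipeline: compress a per-pixel feature matrix with a random projection, compressed-sensing style, so the affinity computation works in far fewer dimensions. The feature bank and sizes below are invented placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_pixels, n_features, n_random = 5000, 64, 8

    features = rng.random((n_pixels, n_features))    # e.g., Gabor filter responses
    phi = rng.normal(size=(n_features, n_random)) / np.sqrt(n_random)
    compressed = features @ phi                      # random measurement matrix

    # Affinities on compressed features cost O(n^2 * n_random) instead of
    # O(n^2 * n_features); shown here for a small block of pixels.
    block = compressed[:200]
    d2 = ((block[:, None, :] - block[None, :, :]) ** 2).sum(-1)
    affinity = np.exp(-d2 / 0.5)
    print(affinity.shape)
    ```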

  14. The weighted random graph model

    NASA Astrophysics Data System (ADS)

    Garlaschelli, Diego

    2009-07-01

    We introduce the weighted random graph (WRG) model, which represents the weighted counterpart of the Erdős-Rényi random graph and provides fundamental insights into more complicated weighted networks. We find analytically that the WRG is characterized by a geometric weight distribution, a binomial degree distribution and a negative binomial strength distribution. We also characterize exactly the percolation phase transitions associated with edge removal and with the appearance of weighted subgraphs of any order and intensity. We find that even this completely null model displays a percolation behaviour similar to what is observed in real weighted networks, implying that edge removal cannot be used to detect community structure empirically. By contrast, the analysis of clustering successfully reveals different patterns between the WRG and real networks.
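
    Sampling from the WRG model is one line once the weight law is written down: each node pair independently receives an integer weight w with P(w) = q^w (1 - q), so an edge "exists" (w >= 1) with probability q. A sketch with assumed sizes:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    N, q = 500, 0.02

    # numpy's geometric has support {1, 2, ...}; shifting by 1 gives P(w) = q^w (1 - q).
    W = np.triu(rng.geometric(1 - q, size=(N, N)) - 1, k=1)
    W = W + W.T                                  # symmetric weighted adjacency matrix

    degree = (W > 0).sum(axis=1)                 # binomial(N - 1, q) in the model
    strength = W.sum(axis=1)                     # negative binomial in the model
    print("mean degree  :", degree.mean(), " expected:", (N - 1) * q)
    print("mean strength:", strength.mean(), " expected:", (N - 1) * q / (1 - q))
    ```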

  15. Fully photonics-based physical random bit generator.

    PubMed

    Li, Pu; Sun, Yuanyuan; Liu, Xianglian; Yi, Xiaogang; Zhang, Jianguo; Guo, Xiaomin; Guo, Yanqiang; Wang, Yuncai

    2016-07-15

    We propose a fully photonics-based approach for ultrafast physical random bit generation. This approach exploits a compact nonlinear loop mirror (called a terahertz optical asymmetric demultiplexer, TOAD) to sample the chaotic optical waveform in the all-optical domain and then generate random bit streams by comparison with a threshold level. This method can efficiently overcome the electronic-jitter bottleneck that confronts existing random bit generators (RBGs) in practice. A proof-of-concept experiment demonstrates that this method can continuously extract 5 Gb/s random bit streams from the chaotic output of a distributed feedback laser diode (DFB-LD) with optical feedback; this generation rate is limited by the bandwidth of the optical chaos used. PMID:27420532
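
    The sample-and-threshold principle is easy to caricature numerically (a logistic map stands in for the chaotic laser intensity; no optics, comparator physics, or timing jitter are modelled):

    ```python
    import numpy as np

    x, samples = 0.4, []
    for _ in range(10_000):
        x = 3.99 * x * (1 - x)                   # chaotic "waveform" sample
        samples.append(x)

    bits = (np.array(samples) > np.median(samples)).astype(int)  # threshold comparison
    print("fraction of ones:", bits.mean())      # median threshold balances the bias
    ```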

  16. Physiologic Responsiveness Should Guide Entry into Randomized Controlled Trials.

    PubMed

    Goligher, Ewan C; Kavanagh, Brian P; Rubenfeld, Gordon D; Ferguson, Niall D

    2015-12-15

    Most randomized trials in critical care report no mortality benefit; this may reflect competing pathogenic mechanisms, patient heterogeneity, or true ineffectiveness of interventions. We hypothesize that in acute respiratory distress syndrome (ARDS), randomizing only those patients who show a favorable physiological response to an intervention would help ensure that only those likely to benefit would be entered into the study. If true, this would decrease study "noise" and reduce required sample size, thereby increasing the chances of finding true-positive outcomes. It would also lessen the chances of exposing patients to treatments that are unlikely to help or that could cause harm. We present a reanalysis of randomized clinical trials of positive end-expiratory pressure in ARDS that support this hypothesis. PMID:25580530
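
    A back-of-envelope version of the sample-size argument, using the standard two-proportion formula with hypothetical mortality rates: a larger absolute effect among responders sharply reduces the required number of patients per arm.

    ```python
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (za + zb) ** 2 * variance / (p1 - p2) ** 2

    # Hypothetical: a 4-point mortality reduction diluted across all comers
    # versus a 10-point reduction concentrated in physiological responders.
    print("all comers (40% vs 36%):", round(n_per_arm(0.40, 0.36)), "per arm")
    print("responders (40% vs 30%):", round(n_per_arm(0.40, 0.30)), "per arm")
    ```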

  17. Random drift and culture change.

    PubMed

    Bentley, R Alexander; Hahn, Matthew W; Shennan, Stephen J

    2004-07-22

    We show that the frequency distributions of cultural variants, in three different real-world examples--first names, archaeological pottery and applications for technology patents--follow power laws that can be explained by a simple model of random drift. We conclude that cultural and economic choices often reflect a decision process that is value-neutral; this result has far-reaching testable implications for social-science research. PMID:15306315

  18. Random drift and culture change.

    PubMed Central

    Bentley, R. Alexander; Hahn, Matthew W.; Shennan, Stephen J.

    2004-01-01

    We show that the frequency distributions of cultural variants, in three different real-world examples--first names, archaeological pottery and applications for technology patents--follow power laws that can be explained by a simple model of random drift. We conclude that cultural and economic choices often reflect a decision process that is value-neutral; this result has far-reaching testable implications for social-science research. PMID:15306315
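
    The random-drift model behind both records is a neutral copying process, sketched below with assumed parameters: each generation, every individual copies the variant of a random member of the previous generation, except with probability mu it invents a new one; accumulated popularity then develops a heavy, approximately power-law tail.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    N, mu, generations = 1000, 0.01, 2000
    pop = np.zeros(N, dtype=int)                 # everyone starts with variant 0
    next_label, counts = 1, {}

    for _ in range(generations):
        pop = pop[rng.integers(0, N, N)]         # random copying (neutral drift)
        innovate = rng.random(N) < mu
        k = int(innovate.sum())
        pop[innovate] = np.arange(next_label, next_label + k)
        next_label += k
        for v, c in zip(*np.unique(pop, return_counts=True)):
            counts[v] = counts.get(v, 0) + int(c)

    top = sorted(counts.values(), reverse=True)[:10]
    print("most popular variants' accumulated counts:", top)
    ```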

  19. Randomized gap and amplitude estimation

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Wiebe, Nathan

    2016-06-01

    We provide a method for estimating spectral gaps in low-dimensional systems. Unlike traditional phase estimation, our approach does not require ancillary qubits nor does it require well-characterized gates. Instead, it only requires the ability to perform approximate Haar random unitary operations, applying the unitary whose eigenspectrum is sought and performing measurements in the computational basis. We discuss application of these ideas to in-place amplitude estimation and quantum device calibration.
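
    A classical-numerics caricature of the signal being exploited: for a random initial state, the survival probability under Hamiltonian evolution oscillates at frequencies equal to eigenvalue differences, so a Fourier transform exposes the gaps. This illustrates the principle only; the paper's protocol runs on quantum hardware with Haar-random unitaries.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    d = 4
    A = rng.normal(size=(d, d))
    H = (A + A.T) / 2                             # random Hermitian "Hamiltonian"
    evals, evecs = np.linalg.eigh(H)

    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)                    # random initial state
    amps = np.abs(evecs.conj().T @ psi) ** 2      # weights on each eigenstate

    ts = np.linspace(0, 200, 4096)
    surv = np.abs((amps * np.exp(-1j * np.outer(ts, evals))).sum(axis=1)) ** 2

    omega = 2 * np.pi * np.fft.rfftfreq(ts.size, d=ts[1] - ts[0])
    spectrum = np.abs(np.fft.rfft(surv - surv.mean()))
    gaps = np.abs(evals[:, None] - evals[None, :])[np.triu_indices(d, 1)]
    print("strongest peak at omega =", omega[spectrum.argmax()])
    print("pairwise eigenvalue gaps:", np.sort(gaps))
    ```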

  20. Correlated randomness and switching phenomena

    NASA Astrophysics Data System (ADS)

    Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.

    2010-08-01

    One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with setting aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics, and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understanding switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon by discussing data on selected examples, including heart disease and Alzheimer disease.