Randomization and sampling issues
Geissler, P.H.
1996-01-01
The need for randomly selected routes and other sampling issues have been debated by the Amphibian electronic discussion group. Many excellent comments have been made, pro and con, but we have not reached consensus yet. This paper brings those comments together and attempts a synthesis. I hope that the resulting discussion will bring us closer to a consensus.
Airway microbiota and pathogen abundance in age-stratified cystic fibrosis patients.
Cox, Michael J; Allgaier, Martin; Taylor, Byron; Baek, Marshall S; Huang, Yvonne J; Daly, Rebecca A; Karaoz, Ulas; Andersen, Gary L; Brown, Ronald; Fujimura, Kei E; Wu, Brian; Tran, Diem; Koff, Jonathan; Kleinhenz, Mary Ellen; Nielson, Dennis; Brodie, Eoin L; Lynch, Susan V
2010-06-23
Bacterial communities in the airways of cystic fibrosis (CF) patients are, as in other ecological niches, influenced by autogenic and allogenic factors. However, our understanding of microbial colonization in younger versus older CF airways, and its association with pulmonary function, is rudimentary at best. Using a phylogenetic microarray, we examine the airway microbiota in age-stratified CF patients ranging from neonates (9 months) to adults (72 years). From a cohort of clinically stable patients, we demonstrate that older CF patients who exhibit poorer pulmonary function possess more uneven, phylogenetically clustered airway communities compared to younger patients. Using longitudinal samples collected from a subset of these patients, we observed a pattern of initial bacterial community diversification in younger patients, compared with a progressive loss of diversity over time in older patients. We describe in detail the distinct bacterial community profiles associated with young and old CF patients, with a particular focus on the differences between respective "early" and "late" colonizing organisms. Finally, we assess the influence of Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) mutation on bacterial abundance and identify genotype-specific communities involving members of the Pseudomonadaceae, Xanthomonadaceae, Moraxellaceae, and Enterobacteriaceae, amongst others. The data presented here provide insights into the CF airway microbiota, including initial diversification events in younger patients and the establishment of specialized communities of pathogens associated with poor pulmonary function in older patient populations.
Matter, L; Germann, D; Bally, F; Schopfer, K
1997-01-01
We performed age-stratified seroprevalence studies for measles, mumps, and rubella (MMR) to evaluate these vaccinations. Serum samples submitted for diagnostic testing were randomly selected for unlinked anonymous panels. IgG antibodies were tested by ELISA and indirect immunofluorescence. In the vaccination cohort (age 1.5 to 6.5 years), seroprevalence reached 80%. For measles and mumps it continued to increase to 95%, while for rubella it declined transiently to 60% between 7 and 12 years of age. We observed no differences according to gender in any age group in 1991-1992. (Semi)quantitative values of the IgG antibodies against all three viruses increased during adolescence, suggesting wild virus circulation. By 1992, MMR vaccination had reached < 80% of children during their second year of life. Owing to previous monovalent measles and mumps vaccinations in pre-school children and to endemic and epidemic activity, particularly of mumps virus, a trough in seroprevalence among adolescents was evident only for rubella. MMR vaccination campaigns performed at school since 1987 have increased seroprevalence in this population segment and have probably over-compensated for the expected rightward shift of the seroprevalence curves. Stricter implementation of the recommended childhood vaccination schedule and continued catch-up vaccination efforts during school age, especially for rubella, are necessary to avoid the accumulation of susceptible young adults during the coming decades.
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Repeated Random Sampling in Year 5
ERIC Educational Resources Information Center
Watson, Jane M.; English, Lyn D.
2016-01-01
As an extension to an activity introducing Year 5 students to the practice of statistics, the software "TinkerPlots" made it possible to collect repeated random samples from a finite population to informally explore students' capacity to begin reasoning with a distribution of sample statistics. This article provides background for the…
Acceptance sampling using judgmental and randomly selected samples
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with a different level of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
Systematic random sampling of the comet assay.
McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan
2009-07-01
The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity, which can then be electrophoresed. Damaged DNA can enter the agarose and migrate, while undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Methods currently deployed in such an acquisition are expected to select comets both objectively and at random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. To achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be applied both manually and in automated form. By making use of an unbiased sampling frame and microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled-variation experiment showed that the SRS technique attained lower variability than the traditional approach. The analysis of a single-user repetition experiment showed greater individual variances while not being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
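The SRS idea the authors adopt from stereology can be sketched in a few lines: divide the sampling frame into equal intervals, take a random start within the first interval, and then select every step-th item after that. A minimal illustration (function and variable names are ours, not from the paper):

```python
import random

def systematic_sample(frame, k):
    # Systematic random sampling: random start within the first interval,
    # then every step-th item after that.
    n = len(frame)
    step = n // k
    start = random.randrange(step)
    return [frame[start + i * step] for i in range(k)]

random.seed(1)
positions = list(range(100))   # e.g. candidate comet positions on a slide
chosen = systematic_sample(positions, k=10)
```

Because every item in the frame has the same inclusion probability and the procedure is fully determined by a single random start, the selection is both unbiased and easy to reproduce.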
Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling
NASA Astrophysics Data System (ADS)
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2016-08-01
Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
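The localized sampling rule described above — pick a random center pixel, then accept nearby pixels with a probability that decays with distance — can be sketched as follows. This is an illustrative reading of the scheme with an assumed exponential decay; the paper's exact probability model may differ:

```python
import math
import random

def localized_random_samples(width, height, n_centers, n_near, scale, seed=None):
    # For each measurement group: choose a random center pixel, then keep
    # drawing candidate pixels, accepting each with probability
    # exp(-distance / scale), until n_near neighbours are collected.
    rng = random.Random(seed)
    groups = []
    for _ in range(n_centers):
        cx, cy = rng.randrange(width), rng.randrange(height)
        group = [(cx, cy)]
        while len(group) < n_near + 1:
            x, y = rng.randrange(width), rng.randrange(height)
            dist = math.hypot(x - cx, y - cy)
            if rng.random() < math.exp(-dist / scale):
                group.append((x, y))
        groups.append(group)
    return groups

groups = localized_random_samples(32, 32, n_centers=5, n_near=8,
                                  scale=4.0, seed=0)
```

The `scale` parameter plays the role of a receptive-field radius: small values concentrate each measurement locally, large values recover (approximately) uniform random sampling.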
Wollert, Richard; Cramer, Elliot; Waggoner, Jacqueline; Skelton, Alex; Vess, James
2010-12-01
A useful understanding of the relationship between age, actuarial scores, and sexual recidivism can be obtained by comparing the entries in equivalent cells from "age-stratified" actuarial tables. This article reports the compilation of the first multisample age-stratified table of sexual recidivism rates, referred to as the "multisample age-stratified table of sexual recidivism rates (MATS-1)," from recent research on Static-99 and another actuarial instrument known as the Automated Sexual Recidivism Scale. The MATS-1 validates the "age invariance effect," that the risk of sexual recidivism declines with advancing age, and shows that age-restricted tables underestimate risk for younger offenders and overestimate risk for older offenders. Based on data from more than 9,000 sex offenders, our conclusion is that evaluators should report recidivism estimates from age-stratified tables when assessing sexual recidivism risk, particularly when evaluating the aging sex offender.
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and on a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
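The underlying idea — approximate the quantile function from a sample, then push uniform draws through it (inverse-transform sampling) — can be illustrated with a piecewise-linear stand-in for the cubic splines used in the paper:

```python
import random
from bisect import bisect_right

def make_quantile_fn(data):
    # Approximate the quantile function (inverse CDF) of a sample by
    # linear interpolation between order statistics.
    xs = sorted(data)
    n = len(xs)
    ps = [(i + 0.5) / n for i in range(n)]  # plotting positions

    def q(u):
        if u <= ps[0]:
            return xs[0]
        if u >= ps[-1]:
            return xs[-1]
        j = bisect_right(ps, u)
        t = (u - ps[j - 1]) / (ps[j] - ps[j - 1])
        return xs[j - 1] + t * (xs[j] - xs[j - 1])

    return q

random.seed(2)
observed = [random.gauss(0.0, 1.0) for _ in range(1000)]
q = make_quantile_fn(observed)
# inverse-transform sampling: uniform draws through the quantile function
new_draws = [q(random.random()) for _ in range(500)]
```

A smooth (spline) fit to the same order statistics would behave better in the tails and between knots, which is exactly the accuracy question the paper studies.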
Sequential time interleaved random equivalent sampling for repetitive signal
NASA Astrophysics Data System (ADS)
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling times. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
A random spatial sampling method in a rural developing nation
2014-01-01
Background Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. Methods We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. Results This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations, and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, the use and translation of satellite imagery between GIS and GPS, and household selection at each survey point under varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Conclusions Random spatial sampling methodology can be used to survey a random sample of the population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available. PMID:24716473
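As a rough sketch of stratified random spatial sampling, one can draw uniform coordinates inside each stratum's bounding box; a real survey would clip the points to stratum polygons in GIS software and navigate to them by GPS. The strata names and coordinates below are hypothetical:

```python
import random

def stratified_spatial_sample(strata, points_per_stratum, seed=None):
    # Draw uniform (lon, lat) points inside each stratum's bounding box.
    rng = random.Random(seed)
    sample = {}
    for name, (lon_min, lat_min, lon_max, lat_max) in strata.items():
        sample[name] = [(rng.uniform(lon_min, lon_max),
                         rng.uniform(lat_min, lat_max))
                        for _ in range(points_per_stratum)]
    return sample

# hypothetical strata with bounding boxes (lon_min, lat_min, lon_max, lat_max)
strata = {"village_a": (-91.5, 15.0, -91.4, 15.1),
          "village_b": (-91.3, 15.2, -91.2, 15.3)}
survey_points = stratified_spatial_sample(strata, points_per_stratum=20, seed=7)
```

Sampling points in space rather than from a household list is what removes the need for a complete base-population enumeration, at the cost of the area-density biases the abstract notes.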
Partial least squares and random sample consensus in outlier detection.
Peng, Jiangtao; Peng, Silong; Hu, Yong
2012-03-16
A novel outlier detection method in partial least squares based on random sample consensus is proposed. The proposed algorithm repeatedly generates partial least squares solutions estimated from random samples and then tests each solution for consistency against the complete dataset. A comparative study of the proposed method and leave-one-out cross validation for outlier detection on simulated data and near-infrared data of pharmaceutical tablets is presented. In addition, a comparison between the proposed method and PLS, RSIMPLS, and PRM is provided. The obtained results demonstrate that the proposed method is highly efficient.
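The random sample consensus loop — repeatedly fit a model to a small random subset, count the points consistent with it, and keep the largest consensus set — can be sketched with an ordinary least-squares line standing in for the partial least squares fit used in the paper; data, tolerances, and names are illustrative:

```python
import random

def fit_line(pts):
    # Ordinary least-squares fit y = m*x + b (a stand-in for the PLS fit).
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def ransac_inliers(points, n_iters, subset_size, tol, seed=None):
    # Repeatedly fit to a random subset and keep the largest consensus set;
    # points outside the final consensus set are flagged as outliers.
    rng = random.Random(seed)
    best = []
    for _ in range(n_iters):
        m, b = fit_line(rng.sample(points, subset_size))
        consensus = [p for p in points if abs(p[1] - (m * p[0] + b)) < tol]
        if len(consensus) > len(best):
            best = consensus
    return best

data = [(float(x), 2.0 * x + 1.0) for x in range(20)]   # clean linear trend
data += [(5.0, 40.0), (12.0, -30.0)]                    # two gross outliers
inliers = ransac_inliers(data, n_iters=50, subset_size=3, tol=1.0, seed=3)
outliers = [p for p in data if p not in inliers]
```

Because the model is fitted to small subsets, a handful of gross outliers cannot drag the fit the way they do in a single whole-dataset regression, which is the property the paper exploits.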
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Investigation of spectral analysis techniques for randomly sampled velocimetry data
NASA Technical Reports Server (NTRS)
Sree, Dave
1993-01-01
It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scale information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, as to how high-frequency and accurate an estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of the turbulence structure. It was found from previous studies that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with the high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates. The pre-filtering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found from his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable
Non-local MRI denoising using random sampling.
Hu, Jinrong; Zhou, Jiliu; Wu, Xi
2016-09-01
In this paper, we propose a random sampling non-local means (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while achieving competitive denoising results. Moreover, a structure tensor, which encapsulates high-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ = 0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's.
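The core SNLM idea — weight a random subset of the search window instead of rastering through all of it — can be sketched in one dimension. This is a simplified illustration (1-D signal, exponential patch weights, uniform rather than structure-tensor-guided sampling), not the authors' implementation:

```python
import math
import random

def sampled_nlm_1d(signal, patch=1, search=10, n_samples=5, h=0.5, seed=None):
    # Random-sampling non-local means on a 1-D signal: each output value is
    # a similarity-weighted average over a random subset of the search
    # window rather than the full window.
    rng = random.Random(seed)
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        if patch <= i < n - patch:
            lo = max(patch, i - search)
            hi = min(n - patch, i + search + 1)
            for j in rng.sample(range(lo, hi), min(n_samples, hi - lo)):
                # squared distance between the patches centred at i and j
                d = sum((signal[i + k] - signal[j + k]) ** 2
                        for k in range(-patch, patch + 1))
                w = math.exp(-d / (h * h))
                num += w * signal[j]
                den += w
        out.append(num / den if den > 0.0 else signal[i])
    return out

random.seed(4)
clean = [math.sin(0.2 * t) for t in range(100)]
noisy = [v + random.gauss(0.0, 0.2) for v in clean]
denoised = sampled_nlm_1d(noisy, seed=4)
```

With `n_samples` much smaller than the window size, the cost per voxel drops proportionally, which is exactly the speed/quality trade-off quantified by the ξ sampling ratio in the paper.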
Ross, Robert D
2012-12-01
Accurate grading of the presence and severity of heart failure (HF) signs and symptoms in infants and children remains challenging. It has been 25 years since the Ross classification was first used for this purpose. Since then, several modifications of the system have been used and others proposed. New evidence has shown that in addition to signs and symptoms, data from echocardiography, exercise testing, and biomarkers such as N-terminal pro-brain natriuretic peptide (NT-proBNP) all are useful in stratifying outcomes for children with HF. It also is apparent that grading of signs and symptoms in children is dependent on age because infants manifest HF differently than toddlers and older children. This review culminates in a proposed new age-based Ross classification for HF in children that incorporates the most useful data from the last two decades. Testing of this new system will be important to determine whether an age-stratified scoring system can unify the way communication of HF severity and research on HF in children is performed in the future.
On random sampling of correlated resonance parameters with large uncertainties
NASA Astrophysics Data System (ADS)
Žerovnik, Gašper; Capote, Roberto; Trkov, Andrej
2013-09-01
Three different methods for multivariate random sampling of correlated resonance parameters are proposed: the diagonalization method, the Metropolis method, and the correlated sampling method. For small relative uncertainties (typical for s-wave resonances) and weak correlations, all methods are equivalent. Differences arise under difficult conditions: large relative uncertainties of inherently positive parameters (typical for the widths of higher-l-wave resonances) and/or strong correlations between a large number of parameters. The methods are tested on realistic examples, and the advantages and disadvantages of each method are pointed out. The correlated sampling method is the only method that produces consistent samples under any conditions. In the field of reactor physics, these methods are mostly used for the sampling of nuclear data; however, they may be used for any data with given uncertainties and correlations.
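Multivariate sampling of correlated parameters is commonly done by factoring the covariance matrix and transforming independent standard-normal draws; a Cholesky factor is used below as a simple stand-in for the paper's diagonalization approach. The mean vector and covariance matrix are illustrative, not nuclear data:

```python
import math
import random

def cholesky(cov):
    # Lower-triangular factor L with L @ L.T == cov
    # (cov must be symmetric positive definite).
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def sample_correlated(mean, L, rng):
    # One multivariate-normal draw: mean + L @ z, with z ~ N(0, I).
    z = [rng.gauss(0.0, 1.0) for _ in mean]
    return [m + sum(L[i][k] * z[k] for k in range(i + 1))
            for i, m in enumerate(mean)]

rng = random.Random(5)
mean = [10.0, 2.0]              # e.g. a resonance energy and width (illustrative)
cov = [[4.0, 1.2], [1.2, 1.0]]  # illustrative covariance matrix
L = cholesky(cov)
draws = [sample_correlated(mean, L, rng) for _ in range(2000)]
```

Note that this linear transform can produce negative values for inherently positive parameters when relative uncertainties are large, which is precisely the difficulty that motivates the paper's comparison of methods.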
Ochs, Marco M; Fritz, Thomas; André, Florian; Riffel, Johannes; Mereles, Derliz; Müller-Hennessen, Matthias; Giannitsis, Evangelos; Katus, Hugo A; Friedrich, Matthias G; Buss, Sebastian J
2017-01-21
Cardiac valve plane displacement (CVPD) reflects longitudinal LV function. The purpose of the present study was to determine the regional heterogeneity of CVPD in healthy adults and to provide normal values by cardiac magnetic resonance (CMR). We measured the anterior aortic plane systolic excursion (AAPSE); the anterior, anterolateral, inferolateral, inferior, and inferoseptal mitral annular plane systolic excursion (MAPSE); and the lateral tricuspid annular plane systolic excursion (TAPSE). Systolic excursion was measured as the distance from peak end-diastolic to peak end-systolic annular position (peak-to-peak) in cine images acquired in 2-, 3-, and 4-chamber views. Echocardiographic measurements of CVPD were performed in M-mode as previously described. We retrospectively analyzed 209 healthy Caucasians (57% men) who participated in the Heidelberg normal cohort between March 2009 and September 2014. The analysis was possible in all participants. Mean values were: AAPSE = 14 ± 3 mm (8-20); MAPSEanterior = 14 ± 3 mm (8-20); MAPSEanterolateral = 16 ± 3 mm (10-22); MAPSEinferolateral = 16 ± 3 mm (10-22); MAPSEinferior = 17 ± 3 mm (11-23); MAPSEinferoseptal = 13 ± 3 mm (7-19); and TAPSE = 26 ± 4 mm (18-34), respectively. MAPSE was significantly elevated in lateral compared to septal regions (p = 0.0001). No sex differences in CVPD were found. The age-dependency of CVPD revealed distinct regional differences. AAPSE decreased the most with age (B = -0.48; p = 0.0001), whereas MAPSEinferior was the least age-dependent site (B = -0.17; p = 0.01). AAPSE revealed favorable intra-/interobserver reproducibility and interstudy agreement. Inter-method comparison of CMR and M-mode echocardiography showed good agreement between both measurements of CVPD. Age-stratified normal values of regional CVPD are provided. AAPSE revealed the most pronounced age-related decrease and provided favorable reproducibility.
An environmental sampling model for combining judgment and randomly placed samples
Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.
2007-08-23
In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model stochastic information exchange among decentralized controllers, among other situations. A practical suboptimal controller is proposed with the desirable property of mean-square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. sampling assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean-square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
Simple-random-sampling-based multiclass text classification algorithm.
Liu, Wuying; Wang, Lin; Yi, Mianzhu
2014-01-01
Multiclass text classification (MTC) is a challenging issue, and the corresponding MTC algorithms can be used in many applications. The space-time overhead of such algorithms is a major concern in the era of big data. Through an investigation of the token frequency distribution in a Chinese web document collection, this paper reexamines the power law and proposes a simple-random-sampling-based MTC (SRSMTC) algorithm. Supported by a token-level memory that stores labeled documents, the SRSMTC algorithm uses a text retrieval approach to solve text classification problems. The experimental results on the TanCorp data set show that the SRSMTC algorithm can achieve state-of-the-art performance at greatly reduced space-time requirements.
A comparison of methods for representing sparsely sampled random quantities.
Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua
2013-09-01
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that is, it should minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
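To make the "bound a percentile range with stated reliability" goal concrete, a classical nonparametric result gives the confidence that the interval [min, max] of n i.i.d. continuous draws covers at least a fraction p of the underlying distribution. This is standard background on tolerance intervals, not one of the five techniques characterized in the report:

```python
def minmax_coverage_confidence(n, p):
    # P(the interval [X(1), X(n)] covers at least a fraction p of the
    # population) for n i.i.d. continuous draws: the classical
    # nonparametric tolerance-interval result 1 - n*p**(n-1) + (n-1)*p**n.
    return 1.0 - n * p ** (n - 1) + (n - 1) * p ** n

# with only 20 samples, [min, max] covers 95% of the distribution
# with only about 26% confidence -- sparse data give weak guarantees
conf = minmax_coverage_confidence(n=20, p=0.95)
```

The rapid decay of this confidence for small n is the quantitative reason that sparse-data representations must either be conservative or accept low reliability.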
Reich, Nicholas G; Myers, Jessica A; Obeng, Daniel; Milstone, Aaron M; Perl, Trish M
2012-01-01
In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase the efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting, and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized crossover trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.
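The simulation approach to power can be sketched without any packages: generate trial data under an assumed cluster random-effects model, test each simulated trial, and report the rejection rate. The sketch below compares arm-level cluster means with a z-style statistic; the clusterPower package implements richer analyses, and every parameter value here is illustrative:

```python
import random
import statistics

def simulate_power(n_clusters, cluster_size, effect, sd_cluster, sd_resid,
                   n_sims=200, alpha=0.05, seed=None):
    # Estimate power of a two-arm, parallel cluster-randomized trial by
    # simulation: simulate trials, test each one, count rejections.
    rng = random.Random(seed)
    crit = statistics.NormalDist().inv_cdf(1.0 - alpha / 2.0)
    rejections = 0
    for _ in range(n_sims):
        arm_means = []
        for arm_effect in (0.0, effect):
            means = []
            for _ in range(n_clusters):
                u = rng.gauss(0.0, sd_cluster)  # cluster-level random effect
                obs = [arm_effect + u + rng.gauss(0.0, sd_resid)
                       for _ in range(cluster_size)]
                means.append(statistics.fmean(obs))
            arm_means.append(means)
        diff = statistics.fmean(arm_means[1]) - statistics.fmean(arm_means[0])
        se = ((statistics.variance(arm_means[0])
               + statistics.variance(arm_means[1])) / n_clusters) ** 0.5
        if abs(diff) / se > crit:
            rejections += 1
    return rejections / n_sims

power = simulate_power(n_clusters=10, cluster_size=20, effect=1.0,
                       sd_cluster=0.5, sd_resid=2.0, n_sims=200, seed=6)
```

Because the analysis step is just a function of the simulated data, swapping in a mixed model or a crossover layout changes only that step, which is the flexibility the framework is designed around.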
Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry
ERIC Educational Resources Information Center
Stier, Sam
2010-01-01
Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of a population or object. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a discrete wooded area…
ERIC Educational Resources Information Center
Vardeman, Stephen B.; Wendelberger, Joanne R.
2005-01-01
There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean μ and variance σ², the expected value of the sample variance is σ². The generalization justifies the use of the usual standard error of the sample mean in possibly…
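The standard result quoted above is easy to check numerically. A quick Monte Carlo sketch (ours, not from the article):

```python
# Numerical check that the sample variance with the n-1 divisor is an
# unbiased estimator of sigma^2 under i.i.d. sampling.
import numpy as np

rng = np.random.default_rng(42)
sigma2 = 4.0
# 100,000 samples of size n = 5, each with variance sigma^2 = 4
samples = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=(100_000, 5))
s2 = samples.var(axis=1, ddof=1)   # unbiased sample variance (n - 1 divisor)
print(round(s2.mean(), 2))         # averages to approximately 4.0
```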
ANOVA with random sample sizes: An application to a Brazilian database on cancer registries
NASA Astrophysics Data System (ADS)
Nunes, Célia; Capistrano, Gilberto; Ferreira, Dário; Ferreira, Sandra S.
2013-10-01
We apply our results on random sample size ANOVA to a Brazilian database on cancer registries. The sample sizes are treated as realizations of random variables. The interest of this approach lies in avoiding false rejections obtained when using the classical fixed-size F-tests.
Stratified random sampling plan for an irrigation customer telephone survey
Johnston, J.W.; Davis, L.J.
1986-05-01
This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.
Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.
2016-01-01
In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2013 CFR
2013-07-01
§ 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or…
The Power of Slightly More than One Sample in Randomized Load Balancing
2015-04-26
Approved for public release; distribution is unlimited. Keywords: load balancing, mean-field analysis, cloud computing. In many computing and networking applications…
Random Sampling of Correlated Parameters - a Consistent Solution for Unfavourable Conditions
NASA Astrophysics Data System (ADS)
Žerovnik, G.; Trkov, A.; Kodeli, I. A.; Capote, R.; Smith, D. L.
2015-01-01
Two methods for random sampling according to a multivariate lognormal distribution - the correlated sampling method and the method of transformation of correlation coefficients - are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
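The moment-matching idea behind such correlated lognormal sampling can be sketched as follows. This is a simplified illustration with assumed target moments; the paper's weighted-sampling acceleration is not reproduced:

```python
# Sample inherently positive, correlated parameters with prescribed first
# two moments: map the target lognormal mean/covariance to the parameters
# of the underlying normal, sample, and exponentiate. Positive-definiteness
# checks are omitted for brevity.
import numpy as np

def sample_correlated_lognormal(mean, cov, n, seed=0):
    mean = np.asarray(mean, float)
    cov = np.asarray(cov, float)
    # Sigma_ij = ln(1 + cov_ij / (m_i m_j)),  mu_i = ln m_i - Sigma_ii / 2
    sigma = np.log1p(cov / np.outer(mean, mean))
    mu = np.log(mean) - 0.5 * np.diag(sigma)
    rng = np.random.default_rng(seed)
    y = rng.multivariate_normal(mu, sigma, size=n)
    return np.exp(y)   # inherently positive, with the requested moments

# illustrative target: means 1.0 and 2.0, 20% relative uncertainty, correlated
x = sample_correlated_lognormal([1.0, 2.0], [[0.04, 0.02], [0.02, 0.16]],
                                200_000)
```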
Arrabal-Polo, Miguel Angel; Arias-Santiago, Salvador; Girón-Prieto, María Sierra; Abad-Menor, Felix; López-Carmona Pintado, Fernando; Zuluaga-Gomez, Armando; Arrabal-Martin, Miguel
2012-10-01
Calcium lithiasis is the most frequently diagnosed renal lithiasis and is associated with a high percentage of patients with metabolic disorders, such as hypercalciuria, hypocitraturia, and hyperoxaluria. The present study included 50 patients with recurrent calcium lithiasis. We conducted a random urine test during nocturnal fasting and a 24-h urine test, and examined calcium, oxalate, and citrate. A study of the linear correlation between the metabolites was performed, and the receiver operator characteristic (ROC) curves were analyzed in the random urine samples to determine the cutoff values for hypercalciuria (excretion greater than 200 mg), hyperoxaluria (excretion greater than 40 mg), and hypocitraturia (excretion less than 320 mg) in the 24-h urine. Linear relationships were observed between the calcium levels in the random and 24-h urine samples (R = 0.717, p = 0.0001), the oxalate levels in the random and 24-h urine samples (R = 0.838, p = 0.0001), and the citrate levels in the random and 24-h urine samples (R = 0.799, p = 0.0001). After obtaining the ROC curves, we observed that more than 10.15 mg/dl of random calcium and more than 16.45 mg/l of random oxalate were indicative of hypercalciuria and hyperoxaluria, respectively, in the 24-h urine. In addition, we found that the presence of less than 183 mg/l of random citrate was indicative of the presence of hypocitraturia in the 24-h urine. Using the proposed values, screening for hypercalciuria, hyperoxaluria, and hypocitraturia can be performed with a random urine sample during fasting with an overall sensitivity greater than 86%.
NASA Astrophysics Data System (ADS)
Walvoort, D. J. J.; Brus, D. J.; de Gruijter, J. J.
2010-10-01
Both for mapping and for estimating spatial means of an environmental variable, the accuracy of the result will usually be increased by dispersing the sample locations so that they cover the study area as uniformly as possible. We developed a new R package for designing spatial coverage samples for mapping, and for random sampling from compact geographical strata for estimating spatial means. The mean squared shortest distance (MSSD) was chosen as objective function, which can be minimized by k-means clustering. Two k-means algorithms are described, one for unequal area and one for equal area partitioning. The R package is illustrated with three examples: (1) subsampling of square and circular sampling plots commonly used in surveys of soil, vegetation, forest, etc.; (2) sampling of agricultural fields for soil testing; and (3) infill sampling of climate stations for mainland Australia and Tasmania. The algorithms give satisfactory results within reasonable computing time.
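The k-means step can be illustrated with a minimal Lloyd's-algorithm sketch (assumed setup, not the authors' R package): cluster a fine grid discretizing the study area and use the cluster centroids as sample locations, which minimizes the mean squared shortest distance (MSSD).

```python
# Spatial coverage sampling by k-means over a discretized study area.
import numpy as np

def coverage_sample(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):                       # Lloyd's algorithm
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)                     # nearest sample location
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(0)
    mssd = ((points - centers[labels]) ** 2).sum(-1).mean()
    return centers, mssd

# unit-square study area discretized to a 40 x 40 grid
g = np.linspace(0, 1, 40)
grid = np.array([(x, y) for x in g for y in g])
centers, mssd = coverage_sample(grid, k=16)
```

For equal-area strata (the second algorithm the authors describe) the assignment step would additionally be constrained to give each cluster the same number of grid cells.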
Stuetz, Wolfgang; Weber, Daniela; Dollé, Martijn E. T.; Jansen, Eugène; Grubeck-Loebenstein, Beatrix; Fiegl, Simone; Toussaint, Olivier; Bernhardt, Juergen; Gonos, Efstathios S.; Franceschi, Claudio; Sikora, Ewa; Moreno-Villanueva, María; Breusing, Nicolle; Grune, Tilman; Bürkle, Alexander
2016-01-01
Blood micronutrient status may change with age. We analyzed plasma carotenoids, α-/γ-tocopherol, and retinol and their associations with age, demographic characteristics, and dietary habits (assessed by a short food frequency questionnaire) in a cross-sectional study of 2118 women and men (age-stratified from 35 to 74 years) of the general population from six European countries. Higher age was associated with lower lycopene and α-/β-carotene and higher β-cryptoxanthin, lutein, zeaxanthin, α-/γ-tocopherol, and retinol levels. Significant correlations with age were observed for lycopene (r = −0.248), α-tocopherol (r = 0.208), α-carotene (r = −0.112), and β-cryptoxanthin (r = 0.125; all p < 0.001). Age was inversely associated with lycopene (−6.5% per five-year age increase) and this association remained in the multiple regression model with the significant predictors (covariables) being country, season, cholesterol, gender, smoking status, body mass index (BMI (kg/m2)), and dietary habits. The positive association of α-tocopherol with age remained when all covariates including cholesterol and use of vitamin supplements were included (1.7% vs. 2.4% per five-year age increase). The association of higher β-cryptoxanthin with higher age was no longer statistically significant after adjustment for fruit consumption, whereas the inverse association of α-carotene with age remained in the fully adjusted multivariable model (−4.8% vs. −3.8% per five-year age increase). We conclude from our study that age is an independent predictor of plasma lycopene, α-tocopherol, and α-carotene. PMID:27706032
Bell, David C; Erbaugh, Elizabeth B; Serrano, Tabitha; Dayton-Shotts, Cheryl A; Montoya, Isaac D
2017-02-01
Both random walk and respondent-driven sampling (RDS) exploit social networks and may reduce biases introduced by earlier methods for sampling from hidden populations. Although RDS has become much more widely used by social researchers than random walk (RW), there has been little discussion of the tradeoffs in choosing RDS over RW. This paper compares experiences of implementing RW and RDS to recruit drug users to a network-based study in Houston, Texas. Both recruitment methods were implemented over comparable periods of time, with the same population, by the same research staff. RDS methods recruited more participants with less strain on staff. However, participants recruited through RW were more forthcoming than RDS participants in helping to recruit members of their social networks. Findings indicate that, dependent upon study goals, researchers' choice of design may influence participant recruitment, participant commitment, and impact on staff, factors that may in turn affect overall study success.
NASA Astrophysics Data System (ADS)
Maziero, Jonas
2015-12-01
The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state to obtain RQS from random DPDs and random unitary matrices. Next, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices, and we note an overly fast concentration of measure in the quantum state space that appears in this parametrization.
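The Ginibre construction mentioned above is short enough to sketch (a generic illustration, not the author's code): draw a complex Gaussian matrix G and normalize G G† to unit trace.

```python
# Random density matrix via the Ginibre ensemble:
# rho = G G^dagger / Tr(G G^dagger) for a complex Gaussian matrix G.
import numpy as np

def random_density_matrix(d, seed=0):
    rng = np.random.default_rng(seed)
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # Ginibre matrix
    m = g @ g.conj().T            # Hermitian, positive semidefinite by construction
    return m / np.trace(m).real   # unit trace -> a valid quantum state

rho = random_density_matrix(4)
```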
Random Sampling Process Leads to Overestimation of β-Diversity of Microbial Communities
Zhou, Jizhong; Jiang, Yi-Huei; Deng, Ye; Shi, Zhou; Zhou, Benjamin Yamin; Xue, Kai; Wu, Liyou; He, Zhili; Yang, Yunfeng
2013-01-01
The site-to-site variability in species composition, known as β-diversity, is crucial to understanding spatiotemporal patterns of species diversity and the mechanisms controlling community composition and structure. However, quantifying β-diversity in microbial ecology using sequencing-based technologies is a great challenge because of a high number of sequencing errors, bias, and poor reproducibility and quantification. Herein, based on general sampling theory, a mathematical framework is first developed for simulating the effects of random sampling processes on quantifying β-diversity when the community size is known or unknown. Also, using an analogous ball example under Poisson sampling with limited sampling efforts, the developed mathematical framework can exactly predict the low reproducibility among technically replicate samples from the same community of a certain species abundance distribution, which provides explicit evidence that random sampling processes are the main factor causing high percentages of technical variation. In addition, the predicted values under Poisson random sampling were highly consistent with the observed low percentages of operational taxonomic unit (OTU) overlap (<30% and <20% for two and three tags, respectively, based on both Jaccard and Bray-Curtis dissimilarity indexes), further supporting the hypothesis that the poor reproducibility among technical replicates is due to the artifacts associated with random sampling processes. Finally, a mathematical framework was developed for predicting the sampling efforts needed to achieve a desired overlap among replicate samples. Our modeling simulations predict that several orders of magnitude more sequencing effort is needed to achieve desired high technical reproducibility. These results suggest that great caution needs to be taken in quantifying and interpreting β-diversity for microbial community analysis using next-generation sequencing technologies. PMID:23760464
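A toy simulation (all parameters assumed) shows the core effect: multinomial random sampling alone depresses the observed OTU overlap between technical replicates drawn from the identical community.

```python
# Two "technical replicates" sequenced from the same community still show
# incomplete OTU overlap, purely because of random sampling of reads.
import numpy as np

rng = np.random.default_rng(1)
n_species, depth = 5000, 10_000
abund = rng.lognormal(0, 2, n_species)       # skewed species abundance distribution
p = abund / abund.sum()
rep1 = rng.multinomial(depth, p) > 0         # OTUs detected in replicate 1
rep2 = rng.multinomial(depth, p) > 0         # OTUs detected in replicate 2
jaccard = (rep1 & rep2).sum() / (rep1 | rep2).sum()  # overlap < 1 despite identity
```

Raising `depth` pushes the Jaccard overlap toward 1, mirroring the paper's conclusion that far greater sequencing effort is needed for high technical reproducibility.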
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Tian, Jiayong
2015-01-01
This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply successive sampling scheme to improve the estimation of the sensitive proportion on current occasion. PMID:26089958
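The classic Warner randomized-response estimator underlying such designs can be sketched as follows (the paper's successive-sampling refinement is not reproduced; all parameters are illustrative):

```python
# Warner (1965) randomized response: each respondent answers either the
# sensitive statement (with probability p) or its complement, so no single
# answer reveals their status, yet the proportion is estimable.
import numpy as np

def warner_estimate(responses, p):
    # responses: 1 = "yes"; p = probability the randomizing device selected
    # the sensitive statement (p != 0.5). Unbiased moment estimator:
    lam = np.mean(responses)
    return (lam - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
pi_true, p, n = 0.3, 0.7, 100_000
sensitive = rng.random(n) < pi_true          # true (hidden) statuses
asked_sensitive = rng.random(n) < p          # outcome of the randomizing device
yes = np.where(asked_sensitive, sensitive, ~sensitive)
est = warner_estimate(yes.astype(int), p)    # recovers roughly pi_true = 0.3
```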
A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan
ERIC Educational Resources Information Center
Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu
2012-01-01
To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2011 CFR
2011-07-01
§ 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as nearly equal as possible) halves. For example, divide the area into top and bottom halves or left and right halves. Choose the top/bottom or left/right division that produces halves having as close to…
Optimal Sampling of Units in Three-Level Cluster Randomized Designs: An Ancova Framework
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2011-01-01
Field experiments with nested structures assign entire groups such as schools to treatment and control conditions. Key aspects of such cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. The units at each level of the…
An Upper Bound for the Expected Range of a Random Sample
ERIC Educational Resources Information Center
Marengo, James; Lopez, Manuel
2010-01-01
We consider the expected range of a random sample of points chosen from the interval [0, 1] according to some probability distribution. We then use the notion of convexity to derive an upper bound for this expected range which is valid for all possible choices of this distribution. Finally we show that there is only one distribution for which this…
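The uniform case gives a concrete reference point: for n i.i.d. Uniform(0, 1) points the expected range is (n - 1)/(n + 1), which any upper bound valid for all distributions on [0, 1] must dominate. A quick Monte Carlo check (ours, not the article's derivation):

```python
# Estimate E[max - min] for n = 5 uniform points and compare with the
# exact order-statistics value (n - 1)/(n + 1).
import numpy as np

rng = np.random.default_rng(7)
n = 5
x = rng.random((200_000, n))                    # 200,000 samples of size 5
expected_range = (x.max(1) - x.min(1)).mean()   # Monte Carlo estimate
exact = (n - 1) / (n + 1)                       # = 2/3 for n = 5
```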
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical analysis adopted in usual randomized controlled trials is an analysis-of-covariance (ANCOVA) type analysis using a pre-defined pair of "pre-post" data, in which the pre-treatment (baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.
A Model for Predicting Behavioural Sleep Problems in a Random Sample of Australian Pre-Schoolers
ERIC Educational Resources Information Center
Hall, Wendy A.; Zubrick, Stephen R.; Silburn, Sven R.; Parsons, Deborah E.; Kurinczuk, Jennifer J.
2007-01-01
Behavioural sleep problems (childhood insomnias) can cause distress for both parents and children. This paper reports a model describing predictors of high sleep problem scores in a representative population-based random sample survey of non-Aboriginal singleton children born in 1995 and 1996 (1085 girls and 1129 boys) in Western Australia.…
Power and sample size calculations for Mendelian randomization studies using one genetic instrument.
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
2013-08-01
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
ERIC Educational Resources Information Center
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
Flexible sampling large-scale social networks by self-adjustable random walk
NASA Astrophysics Data System (ADS)
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to the problems. How to draw samples that can represent the underlying OSNs has remained a formidable task because of a number of conceptual and methodological reasons. Especially, most of the empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods, including uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We try to mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method has been able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a much-needed sampling tool, for the methodological development of large-scale network sampling by comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
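Plain random-walk node sampling and its degree bias, which methods such as MHRW and SARW are designed to correct, can be illustrated on a toy graph (assumed structure, not the authors' OSN data):

```python
# A random walk samples nodes in proportion to their degree, so the hub of
# a hub-and-ring graph is heavily over-represented relative to uniform sampling.
import numpy as np

def random_walk_sample(adj, start, steps, seed=0):
    rng = np.random.default_rng(seed)
    node, visited = start, []
    for _ in range(steps):
        node = adj[node][rng.integers(len(adj[node]))]  # jump to a uniform neighbor
        visited.append(node)
    return visited

# toy graph: hub node 0 touches every node; nodes 1..9 also form a ring
n = 10
adj = {i: [] for i in range(n)}
for i in range(1, n):
    adj[0].append(i); adj[i].append(0)     # hub edges (hub degree 9)
for i in range(1, n):
    j = 1 + i % (n - 1)
    adj[i].append(j); adj[j].append(i)     # ring edges (other degrees 3)

visits = random_walk_sample(adj, start=1, steps=20_000)
hub_share = visits.count(0) / len(visits)  # near deg/2m = 9/36 = 0.25, not 0.10
```

MHRW-style corrections reject some moves toward high-degree nodes so the stationary distribution becomes uniform; SARW, per the abstract, makes this adjustment self-tunable.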
Performance degradation of a Michelson interferometer due to random sampling errors.
Cohen, D L
1999-01-01
The performance of a standard Michelson interferometer is degraded by disturbances that cause the interferogram signal to be sampled at nonconstant time intervals. A formula that shows how the power spectrum of the random disturbances interacts with the signal to contaminate different regions of the measured spectrum is derived for the spectral noise. The sampling noise does not look conventionally noiselike because it is correlated over large regions of the measured spectrum, and adjustment of the unbalanced background interferogram to match the size of the balanced background interferogram minimizes the sampling-noise amplitude.
Monte Carlo non-local means: random sampling for large-scale image filtering.
Chan, Stanley H; Zickler, Todd; Lu, Yue M
2014-08-01
We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
Characterization of electron microscopes with binary pseudo-random multilayer test samples
Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H; Barber, Samuel K; Bouet, Nathalie; McKinney, Wayne R; Takacs, Peter Z; Voronov, Dmitriy L
2010-09-17
Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007); Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
Cardew, P T
2003-07-01
The Mann-Whitney U-test is used to demonstrate the impact of phosphate on lead concentrations measured at customer properties. This test is statistically robust and particularly efficient for the type of distributions encountered in lead random daytime sampling. This non-parametric technique is developed to provide a best estimate of the lead reduction that results from a change in plumbosolvency conditions. The method is illustrated with compliance data collected before and after the introduction of phosphate at customer properties in the north west of England. Limitations due to operational factors are highlighted. Using a Monte Carlo simulation of the variability of lead random daytime samples it is shown that the method might be practical when assessing the impact of incremental changes in phosphate concentration on plumbosolvency.
Knowledge of health information and services in a random sample of the population of Glasgow.
Moynihan, M; Jones, A K; Stewart, G T; Lucas, R W
1980-01-01
A random sample of adults in Glasgow was surveyed by trained interviewers to determine public knowledge on four topics chosen specifically for each of four age groups. The topics were: welfare rights and services; coronary heart disease (CHD) and individual action that can reduce risk; the dangers of smoking in pregnancy; and fluoride, its functions, and the connections between good health and habitual behaviour.
Risk factors for cutaneous malignant melanoma among aircrews and a random sample of the population
Rafnsson, V; Hrafnkelsson, J; Tulinius, H; Sigurgeirsson, B; Hjaltalin, O
2003-01-01
Aims: To evaluate whether a difference in the prevalence of risk factors for malignant melanoma in a random sample of the population and among pilots and cabin attendants could explain the increased incidence of malignant melanoma which had been found in previous studies of aircrews. Methods: A questionnaire was used to collect information on hair colour, eye colour, freckles, number of naevi, family history of skin cancer and naevi, skin type, history of sunburn, sunbed use, sunscreen use, and number of sunny vacations. Results: The 239 pilots were all males and there were 856 female cabin attendants, who were compared with 454 males and 1464 females of the same age drawn randomly from the general population. The difference in constitutional and behavioural risk factors for malignant melanoma between the aircrews and the population sample was not substantial. The aircrews had more often used sunscreen and had taken more sunny vacations than the other men and women. The predictive values for use of sunscreen were 0.88 for pilots and 0.85 for cabin attendants and the predictive values for sunny vacation were 1.36 and 1.34 respectively. Conclusion: There was no substantial difference between the aircrew and the random sample of the population with respect to prevalence of risk factors for malignant melanoma. Thus it is unlikely that the increased incidence of malignant melanoma found in previous studies of pilots and cabin attendants can be solely explained by excessive sun exposure. PMID:14573711
Random sampling causes the low reproducibility of rare eukaryotic OTUs in Illumina COI metabarcoding
Knowlton, Nancy
2017-01-01
DNA metabarcoding, the PCR-based profiling of natural communities, is becoming the method of choice for biodiversity monitoring because it circumvents some of the limitations inherent to traditional ecological surveys. However, potential sources of bias that can affect the reproducibility of this method remain to be quantified. The interpretation of differences in patterns of sequence abundance and the ecological relevance of rare sequences remain particularly uncertain. Here we used one artificial mock community to explore the significance of abundance patterns and disentangle the effects of two potential biases on data reproducibility: indexed PCR primers and random sampling during Illumina MiSeq sequencing. We amplified a short fragment of the mitochondrial Cytochrome c Oxidase Subunit I (COI) for a single mock sample containing equimolar amounts of total genomic DNA from 34 marine invertebrates belonging to six phyla. We used seven indexed broad-range primers and sequenced the resulting library on two consecutive Illumina MiSeq runs. The total number of Operational Taxonomic Units (OTUs) was ∼4 times higher than expected based on the composition of the mock sample. Moreover, the total number of reads for the 34 components of the mock sample differed by up to three orders of magnitude. However, 79 of the 86 unexpected OTUs were represented by <10 sequences that did not appear consistently across replicates. Our data suggest that random sampling of rare OTUs (e.g., small associated fauna such as parasites) accounted for most of the variation in OTU presence–absence, whereas biases associated with indexed PCRs accounted for a larger amount of variation in relative abundance patterns. These results suggest that random sampling during sequencing leads to the low reproducibility of rare OTUs. We suggest that the strategy for handling rare OTUs should depend on the objectives of the study. Systematic removal of rare OTUs may avoid inflating diversity based on
Bjork, J; Brown, C; Friedlander, H; Schiffman, E; Neitzel, D
2016-08-03
Many disease surveillance programs, including those of the Massachusetts Department of Public Health and the Minnesota Department of Health, are challenged by marked increases in Lyme disease (LD) reports. The purpose of this study was to retrospectively analyse LD reports from 2005 through 2012 to determine whether key epidemiologic characteristics were statistically indistinguishable when an estimation procedure based on sampling was utilized. Estimates of the number of LD cases were produced by taking random 20% and 50% samples of laboratory-only reports, multiplying by 5 or 2, respectively, and adding the number of provider-reported confirmed cases. Estimated LD case counts were compared to observed, confirmed cases each year. In addition, the proportions of cases that were male, were ≤12 years of age, had erythema migrans (EM), had any late manifestation of LD, had a specific late manifestation of LD (arthritis, cranial neuritis or carditis) or lived in a specific region were compared to the proportions of cases identified using standard surveillance to determine whether estimated proportions were representative of observed proportions. Results indicate that the estimated counts of confirmed LD cases were consistently similar to observed, confirmed LD cases and accurately conveyed temporal trends. Most of the key demographic and disease manifestation characteristics were not significantly different at the 0.05 level, although estimates for the 20% random sample demonstrated greater deviation than the 50% random sample. Applying this estimation procedure in endemic states could conserve limited resources by reducing follow-up effort while maintaining the ability to track disease trends.
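The scale-up arithmetic described above is simple enough to sketch. The code below uses illustrative toy data and variable names, not the authors' code or real surveillance counts:

```python
import random

def estimate_ld_cases(lab_only_reports, provider_confirmed, fraction, seed=0):
    """Estimate confirmed LD cases by following up only a random fraction of
    laboratory-only reports, scaling the confirmed count by the inverse
    sampling fraction, and adding provider-reported confirmed cases."""
    rng = random.Random(seed)
    k = round(len(lab_only_reports) * fraction)
    sample = rng.sample(lab_only_reports, k)
    confirmed_in_sample = sum(1 for r in sample if r["confirmed"])
    return provider_confirmed + confirmed_in_sample / fraction

# Toy data: 1,000 lab-only reports, 80% of which would confirm on follow-up,
# plus 250 provider-reported confirmed cases (true total: 1,050).
reports = [{"confirmed": i % 5 != 0} for i in range(1000)]
est20 = estimate_ld_cases(reports, provider_confirmed=250, fraction=0.2)
est50 = estimate_ld_cases(reports, provider_confirmed=250, fraction=0.5)
```

As the abstract reports, the 20% sample trades more sampling variability for less follow-up effort than the 50% sample.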
D'Arco, Philippe; Mustapha, Sami; Ferrabone, Matteo; Noël, Yves; De La Pierre, Marco; Dovesi, Roberto
2013-09-04
A symmetry-adapted algorithm producing uniformly at random the set of symmetry independent configurations (SICs) in disordered crystalline systems or solid solutions is presented here. Starting from Pólya's formula, the role of the conjugacy classes of the symmetry group in uniform random sampling is shown. SICs can be obtained for all the possible compositions or for a chosen one, and symmetry constraints can be applied. The approach yields the multiplicity of the SICs and allows us to operate configurational statistics in the reduced space of the SICs. The present low-memory demanding implementation is briefly sketched. The probability of finding a given SIC or a subset of SICs is discussed as a function of the number of draws and their precise estimate is given. The method is illustrated by application to a binary series of carbonates and to the binary spinel solid solution Mg(Al,Fe)2O4.
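The Pólya/Burnside counting that the abstract takes as its starting point can be illustrated on a toy case. The ring-of-sites example below is generic, not drawn from the paper's crystal code:

```python
from math import gcd

def cyclic_colorings(n_sites, n_species):
    """Count symmetry-independent arrangements of n_species on a ring of
    n_sites under cyclic symmetry, via Burnside/Polya:
    (1/n) * sum over rotations r of n_species**gcd(n, r)."""
    return sum(n_species ** gcd(n_sites, r) for r in range(n_sites)) // n_sites

# 4 sites, 2 species: 6 symmetry-independent configurations
# (0000, 0001, 0011, 0101, 0111, 1111, up to rotation).
n_sic = cyclic_colorings(4, 2)
```

Grouping the rotations by gcd value is exactly the conjugacy-class bookkeeping the abstract refers to; crystallographic space groups replace the cyclic group in the paper's setting.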
Bias due to participant overlap in two‐sample Mendelian randomization
Davies, Neil M.; Thompson, Simon G.
2016-01-01
ABSTRACT Mendelian randomization analyses are often performed using summarized data. The causal estimate from a one‐sample analysis (in which data are taken from a single data source) with weak instrumental variables is biased in the direction of the observational association between the risk factor and outcome, whereas the estimate from a two‐sample analysis (in which data on the risk factor and outcome are taken from non‐overlapping datasets) is less biased and any bias is in the direction of the null. When using genetic consortia that have partially overlapping sets of participants, the direction and extent of bias are uncertain. In this paper, we perform simulation studies to investigate the magnitude of bias and Type 1 error rate inflation arising from sample overlap. We consider both a continuous outcome and a case‐control setting with a binary outcome. For a continuous outcome, bias due to sample overlap is a linear function of the proportion of overlap between the samples. So, in the case of a null causal effect, if the relative bias of the one‐sample instrumental variable estimate is 10% (corresponding to an F parameter of 10), then the relative bias with 50% sample overlap is 5%, and with 30% sample overlap is 3%. In a case‐control setting, if risk factor measurements are only included for the control participants, unbiased estimates are obtained even in a one‐sample setting. However, if risk factor data on both control and case participants are used, then bias is similar with a binary outcome as with a continuous outcome. Consortia releasing publicly available data on the associations of genetic variants with continuous risk factors should provide estimates that exclude case participants from case‐control samples. PMID:27625185
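The linear overlap rule quoted above for a continuous outcome reduces to a one-line calculation; the function name is illustrative:

```python
def relative_bias(one_sample_bias, overlap_fraction):
    """Approximate relative bias of a two-sample Mendelian randomization
    estimate when the samples partially overlap: per the abstract, for a
    continuous outcome the bias scales linearly with the overlap fraction."""
    return one_sample_bias * overlap_fraction

# The abstract's worked numbers: a 10% one-sample relative bias
# (F parameter of 10) shrinks to 5% at 50% overlap and 3% at 30% overlap.
bias_half = relative_bias(0.10, 0.5)
bias_third = relative_bias(0.10, 0.3)
```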
Random sampling of skewed distributions implies Taylor’s power law of fluctuation scaling
Cohen, Joel E.; Xu, Meng
2015-01-01
Taylor’s law (TL), a widely verified quantitative pattern in ecology and other sciences, describes the variance in a species’ population density (or other nonnegative quantity) as a power-law function of the mean density (or other nonnegative quantity): Approximately, variance = a(mean)^b, with a > 0. Multiple mechanisms have been proposed to explain and interpret TL. Here, we show analytically that observations randomly sampled in blocks from any skewed frequency distribution with four finite moments give rise to TL. We do not claim this is the only way TL arises. We give approximate formulae for the TL parameters and their uncertainty. In computer simulations and an empirical example using basal area densities of red oak trees from Black Rock Forest, our formulae agree with the estimates obtained by least-squares regression. Our results show that the correlated sampling variation of the mean and variance of skewed distributions is statistically sufficient to explain TL under random sampling, without the intervention of any biological or behavioral mechanisms. This finding connects TL with the underlying distribution of population density (or other nonnegative quantity) and provides a baseline against which more complex mechanisms of TL can be compared. PMID:25852144
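The block-sampling mechanism described above is easy to reproduce numerically. The sketch below picks a lognormal as the skewed distribution (an arbitrary choice, not the paper's red oak data) and fits the TL parameters by log-log least squares:

```python
import math
import random
import statistics

def taylor_fit(num_blocks=200, block_size=50, seed=1):
    """Draw blocks of iid values from a skewed (lognormal) distribution,
    compute each block's mean and variance, and fit log(variance) against
    log(mean) by least squares, returning the TL parameters (a, b)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(num_blocks):
        block = [rng.lognormvariate(0.0, 1.0) for _ in range(block_size)]
        pts.append((math.log(statistics.fmean(block)),
                    math.log(statistics.variance(block))))
    mx = statistics.fmean(x for x, _ in pts)
    my = statistics.fmean(y for _, y in pts)
    b = sum((x - mx) * (y - my) for x, y in pts) / sum((x - mx) ** 2 for x, _ in pts)
    a = math.exp(my - b * mx)
    return a, b

a, b = taylor_fit()  # variance ≈ a * mean**b across blocks
```

The positive slope arises purely from the correlated sampling fluctuation of block means and variances, which is the paper's point.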
Large sample randomization inference of causal effects in the presence of interference
Liu, Lan; Hudgens, Michael G.
2013-01-01
Recently, increasing attention has focused on making causal inference when interference is possible. In the presence of interference, treatment may have several types of effects. In this paper, we consider inference about such effects when the population consists of groups of individuals where interference is possible within groups but not between groups. A two stage randomization design is assumed where in the first stage groups are randomized to different treatment allocation strategies and in the second stage individuals are randomized to treatment or control conditional on the strategy assigned to their group in the first stage. For this design, the asymptotic distributions of estimators of the causal effects are derived when either the number of individuals per group or the number of groups grows large. Under certain homogeneity assumptions, the asymptotic distributions provide justification for Wald-type confidence intervals (CIs) and tests. Empirical results demonstrate the Wald CIs have good coverage in finite samples and are narrower than CIs based on either the Chebyshev or Hoeffding inequalities provided the number of groups is not too small. The methods are illustrated by two examples which consider the effects of cholera vaccination and an intervention to encourage voting. PMID:24659836
Convergence Properties of Crystal Structure Prediction by Quasi-Random Sampling
2015-01-01
Generating sets of trial structures that sample the configurational space of crystal packing possibilities is an essential step in the process of ab initio crystal structure prediction (CSP). One effective methodology for performing such a search relies on low-discrepancy, quasi-random sampling, and our implementation of such a search for molecular crystals is described in this paper. Herein we restrict ourselves to rigid organic molecules and, by considering their geometric properties, build trial crystal packings as starting points for local lattice energy minimization. We also describe a method to match instances of the same structure, which we use to measure the convergence of our packing search toward completeness. The use of these tools is demonstrated for a set of molecules with diverse molecular characteristics, chosen as representative of areas where CSP has been applied. An important finding is that the lowest energy crystal structures are typically located early and frequently during a quasi-random search of phase space. It is usually the complete sampling of higher energy structures that requires extended sampling. We show how the procedure can first be refined, through targeting the volume of the generated crystal structures, and then extended across a range of space groups to make a full CSP search that locates experimentally observed structures and generates lists of hypothetical polymorphs. As the described method has also been created to lie at the base of more involved approaches to CSP, which are being developed within the Global Lattice Energy Explorer (Glee) software, a few of these extensions are briefly discussed. PMID:26716361
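A common way to realize the low-discrepancy sampling the abstract relies on is a Halton/van der Corput sequence; the sketch below is a generic illustration, not the Glee implementation:

```python
def halton(index, base):
    """Element `index` (1-based) of the van der Corput sequence in `base`;
    using a different prime base per dimension yields Halton points, a
    standard low-discrepancy (quasi-random) point set."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Base-2 points fill the unit interval evenly rather than clumping:
points = [halton(i, 2) for i in range(1, 5)]  # 0.5, 0.25, 0.75, 0.125
```

Each new point lands in the largest remaining gap, which is why a quasi-random packing search covers low-energy regions early, as the abstract reports.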
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in tests of fit have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000, the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
Sample size and robust marginal methods for cluster-randomized trials with censored event times.
Zhong, Yujie; Cook, Richard J
2015-03-15
In cluster-randomized trials, intervention effects are often formulated by specifying marginal models, fitting them under a working independence assumption, and using robust variance estimates to address the association in the responses within clusters. We develop sample size criteria within this framework, with analyses based on semiparametric Cox regression models fitted with event times subject to right censoring. At the design stage, copula models are specified to enable derivation of the asymptotic variance of estimators from a marginal Cox regression model and to compute the number of clusters necessary to satisfy power requirements. Simulation studies demonstrate the validity of the sample size formula in finite samples for a range of cluster sizes, censoring rates, and degrees of within-cluster association among event times. The power and relative efficiency implications of copula misspecification are studied, as well as the effect of within-cluster dependence in the censoring times. Sample size criteria and other design issues are also addressed for the setting where the event status is only ascertained at periodic assessments and times are interval censored.
Purkinje Cell Loss in Essential Tremor: Random Sampling Quantification and Nearest Neighbor Analysis
Choe, Matthew; Cortés, Etty; Vonsattel, Jean-Paul G.; Kuo, Sheng-Han; Faust, Phyllis L.; Louis, Elan D.
2015-01-01
Objective Purkinje cell loss has been documented in some although not all postmortem studies of essential tremor. Hence, there is considerable controversy concerning the presence of Purkinje cell loss in this disease. To date, few studies have been performed. Methods Over the past eight years, we have assembled 50 prospectively-studied cases and 25 age-matched controls; none were reported in our prior large series of 33 essential tremor and 21 controls. In addition to methods used in prior studies, the current study used a random sampling approach to quantify Purkinje cells along the Purkinje cell layer with a mean of 217 sites examined in each specimen, allowing for extensive sampling of the Purkinje cell layer within the section. For the first time, we also quantified the distance between Purkinje cell bodies - a nearest neighbor analysis. Results In the Purkinje cell count data collected from fifteen 100x-fields, cases had lower counts than controls in all three counting criteria (cell bodies, nuclei, nucleoli, all p<0.001). Purkinje cell linear density was also lower in cases than controls (all p<0.001). Purkinje cell linear density obtained by random sampling was similarly lower in cases than controls in all three counting criteria (cell bodies, nuclei, nucleoli, all p≤0.005). In agreement with the quantitative Purkinje cell counts, the mean distance from one Purkinje cell body to another Purkinje cell body along the Purkinje cell layer was greater in cases than controls (p=0.002). Conclusions These data provide support for the neurodegeneration of cerebellar Purkinje cells in essential tremor. PMID:26861543
Hartmann, Jessica A.; Wichers, Marieke; Menne-Lothmann, Claudia; Kramer, Ingrid; Viechtbauer, Wolfgang; Peeters, Frenk; Schruers, Koen R. J.; van Bemmel, Alex L.; Myin-Germeys, Inez; Delespaul, Philippe; van Os, Jim; Simons, Claudia J. P.
2015-01-01
Objectives: Positive affect (PA) plays a crucial role in the development, course, and recovery of depression. Recently, we showed that a therapeutic application of the experience sampling method (ESM), consisting of feedback focusing on PA in daily life, was associated with a decrease in depressive symptoms. The present study investigated whether the experience of PA increased during the course of this intervention. Design: Multicentre parallel randomized controlled trial. An electronic random sequence generator was used to allocate treatments. Settings: University, two local mental health care institutions, one local hospital. Participants: 102 pharmacologically treated outpatients with a DSM-IV diagnosis of major depressive disorder, randomized over three treatment arms. Intervention: Six weeks of ESM self-monitoring combined with weekly PA-focused feedback sessions (experimental group); six weeks of ESM self-monitoring combined with six weekly sessions without feedback (pseudo-experimental group); or treatment as usual (control group). Main outcome: The interaction between treatment allocation and time in predicting positive and negative affect (NA) was investigated in multilevel regression models. Results: 102 patients were randomized (mean age 48.0, SD 10.2), of whom 81 finished the entire study protocol. All 102 patients were included in the analyses. The experimental group did not show a significantly larger increase in momentary PA during or shortly after the intervention compared to the pseudo-experimental or control groups (χ2 (2) =0.33, p=.846). The pseudo-experimental group showed a larger decrease in NA compared to the control group (χ2 (1) =6.29, p=.012). Conclusion: PA-focused feedback did not significantly impact daily life PA during or shortly after the intervention. As the previously reported reduction in depressive symptoms associated with the feedback unveiled itself only after weeks, it is conceivable that the effects on daily life PA also evolve
NASA Astrophysics Data System (ADS)
Brunner, N. M.; Mladinich, C. S.; Caldwell, M. K.; Beal, Y. J. G.
2014-12-01
The U.S. Geological Survey is generating a suite of Essential Climate Variables (ECVs) products, as defined by the Global Climate Observing System, from the Landsat data archive. Validation protocols for these products are being established, incorporating the Committee on Earth Observing Satellites Land Product Validation Subgroup's best practice guidelines and validation hierarchy stages. The sampling design and accuracy measures follow the methodology developed by the European Space Agency's Climate Change Initiative Fire Disturbance (fire_cci) project (Padilla and others, 2014). A rigorous validation was performed on the 2008 Burned Area ECV (BAECV) prototype product, using a stratified random sample of 48 Thiessen scene areas overlaying Landsat path/rows distributed across several terrestrial biomes throughout North America. The validation reference data consisted of fourteen sample sites acquired from the fire_cci project, with the remaining sample sites generated from a densification of the stratified sampling for North America. The reference burned area polygons were generated using the ABAMS (Automatic Burned Area Mapping) software (Bastarrika and others, 2011; Izagirre, 2014). Accuracy results will be presented indicating strengths and weaknesses of the BAECV algorithm.
Bastarrika, A., Chuvieco, E., and Martín, M.P., 2011, Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors: Remote Sensing of Environment, v. 115, no. 4, p. 1003-1012.
Izagirre, A.B., 2014, Automatic Burned Area Mapping Software (ABAMS), Preliminary Documentation, Version 10 v4: Vitoria-Gasteiz, Spain, University of Basque Country, p. 27.
Padilla, M., Chuvieco, E., Hantson, S., Theis, R., and Sandow, C., 2014, D2.1 - Product Validation Plan: UAH - University of Alcalá de Henares (Spain), 37 p.
Mendelian Randomization Studies for a Continuous Exposure Under Case-Control Sampling
Dai, James Y.; Zhang, Xinyi Cindy
2015-01-01
In this article, we assess the impact of case-control sampling on mendelian randomization analyses with a dichotomous disease outcome and a continuous exposure. The 2-stage instrumental variables (2SIV) method uses the prediction of the exposure given genotypes in the logistic regression for the outcome and provides a valid test and an approximation of the causal effect. Under case-control sampling, however, the first stage of the 2SIV procedure becomes a secondary trait association, which requires proper adjustment for the biased sampling. Through theoretical development and simulations, we compare the naïve estimator, the inverse probability weighted estimator, and the maximum likelihood estimator for the first-stage association and, more importantly, the resulting 2SIV estimates of the causal effect. We also include in our comparison the causal odds ratio estimate derived from structural mean models by double-logistic regression. Our results suggest that the naïve estimator is substantially biased under the alternative, yet it remains unbiased under the null hypothesis of no causal effect; the maximum likelihood estimator yields smaller variance and mean squared error than other estimators; and the structural mean models estimator delivers the smallest bias, though generally incurring a larger variance and sometimes having issues in algorithm stability and convergence. PMID:25713335
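The two-stage idea can be sketched in its simplest linear form, the Wald ratio estimator for a continuous outcome in a cohort sample. Note this is a simplification for illustration, not the abstract's logistic 2SIV under case-control sampling:

```python
import random

def wald_ratio_iv(g, x, y):
    """Linear instrumental-variable (Wald ratio) estimate of the causal
    effect of exposure x on outcome y, with genotype g as the instrument:
    cov(g, y) / cov(g, x)."""
    n = len(g)
    mg, mx, my = sum(g) / n, sum(x) / n, sum(y) / n
    cov_gy = sum((gi - mg) * (yi - my) for gi, yi in zip(g, y)) / n
    cov_gx = sum((gi - mg) * (xi - mx) for gi, xi in zip(g, x)) / n
    return cov_gy / cov_gx

# Simulated cohort with an unmeasured confounder u and true causal effect 0.3:
rng = random.Random(7)
n = 50_000
u = [rng.gauss(0, 1) for _ in range(n)]            # unmeasured confounder
g = [rng.choice([0, 1, 2]) for _ in range(n)]      # instrument (genotype)
x = [0.5 * gi + ui + rng.gauss(0, 1) for gi, ui in zip(g, u)]
y = [0.3 * xi + ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]
est = wald_ratio_iv(g, x, y)
```

Because g is independent of u, the ratio recovers the causal effect despite the confounding; the abstract's concern is how the first-stage regression must change once sampling is conditional on case status.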
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
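The underlying estimation task, recovering transition probabilities from observed states, can be sketched as a count-and-normalize maximum-likelihood estimate; this deliberately omits the measurement noise that is the abstract's main concern:

```python
import random

def estimate_transition_matrix(states, n_states):
    """Maximum-likelihood estimate of a Markov transition matrix from an
    observed state sequence: count transitions and normalize each row."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / max(sum(row), 1) for c in row] for row in counts]

# Simulate a two-state chain with known transition probabilities:
rng = random.Random(3)
true_p = [[0.9, 0.1], [0.3, 0.7]]
seq, s = [], 0
for _ in range(100_000):
    seq.append(s)
    s = 0 if rng.random() < true_p[s][0] else 1
est_p = estimate_transition_matrix(seq, 2)
```

With noiseless state observations the estimates converge to the true matrix; the abstract's point is that treating noisy FACS counts as if they were exact observations breaks this convergence.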
Sample size calculations for the design of cluster randomized trials: A summary of methodology.
Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David
2015-05-01
Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas and parallel statistical developments concerned with the design and analysis of these trials have been stimulated. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated and there remain inadequacies in, for example, describing how the trial size is determined and the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation are also described. Illustrative examples of calculations of trial sizes for each endpoint type are included.
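For a continuous outcome, the textbook calculation underlying such guidance inflates the individually randomized sample size by the design effect 1 + (m − 1)ρ. The sketch below uses that standard formula, not the paper's own derivations:

```python
import math
from statistics import NormalDist

def clusters_per_arm(delta, sigma, m, icc, alpha=0.05, power=0.8):
    """Clusters per arm for comparing two means in a cluster randomized
    trial: the usual two-sample size per arm, inflated by the design
    effect 1 + (m - 1) * icc for clusters of size m, divided by m."""
    z = NormalDist().inv_cdf
    n_indiv = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    deff = 1 + (m - 1) * icc  # variance inflation from within-cluster correlation
    return math.ceil(n_indiv * deff / m)

# Detecting a 0.25 SD difference with clusters of 20 and ICC = 0.05:
k = clusters_per_arm(delta=0.25, sigma=1.0, m=20, icc=0.05)
```

Even a modest ICC nearly doubles the required size here (design effect 1.95), which is why ignoring clustering at the design stage is such a common failure mode.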
TemperSAT: A new efficient fair-sampling random k-SAT solver
NASA Astrophysics Data System (ADS)
Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.
The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
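For orientation, a minimal WalkSAT, the baseline whose sampling bias motivates TemperSAT, can be written in a few lines; this is a generic sketch, not the authors' implementation:

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10_000, seed=0):
    """Minimal WalkSAT: start from a random assignment; while some clause is
    unsatisfied, pick one at random and flip either a random variable in it
    (probability p) or the variable whose flip leaves the fewest unsatisfied
    clauses. Clauses are lists of nonzero ints; negative means negated."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def sat(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign[1:]
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))
        else:
            def broken_after_flip(lit):
                v = abs(lit)
                assign[v] = not assign[v]
                count = sum(1 for c in clauses if not sat(c))
                assign[v] = not assign[v]
                return count
            var = abs(min(clause, key=broken_after_flip))
        assign[var] = not assign[var]
    return None

# A tiny satisfiable 3-SAT instance over 4 variables:
formula = [[1, 2, -3], [-1, 3, 4], [2, -4, 1], [-2, 3, -4]]
solution = walksat(formula, 4)
```

Restarting a solver like this finds solutions, but not uniformly over the solution set, which is the bias the abstract addresses with parallel tempering.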
Lindquist, Lisa K; Love, Holly C; Elbogen, Eric B
2017-01-25
This study randomly sampled post-9/11 military veterans and reports on causes, predictors, and frequency of traumatic brain injury (TBI) (N=1,388). A total of 17.3% met criteria for TBI during military service, with about one-half reporting multiple head injuries, which were related to higher rates of posttraumatic stress disorder, depression, back pain, and suicidal ideation. The most common mechanisms of TBI included blasts (33.1%), objects hitting the head (31.7%), and falls (13.5%). TBI was associated with enlisted rank, male gender, high combat exposure, and sustaining TBI prior to military service. Clinical and research efforts in veterans should consider TBI mechanism, effects of cumulative TBI, and screening for premilitary TBI.
Random sampling of the Green's Functions for reversible reactions with an intermediate state
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc; Cucinotta, Francis A.
2013-06-01
Exact random variate generators were developed to sample Green's functions used in Brownian Dynamics (BD) algorithms for the simulations of chemical systems. These algorithms, which use less than a kilobyte of memory, provide a useful alternative to the table look-up method that has been used in similar work. The cases that are studied with this approach are (1) diffusion-influenced reactions; (2) reversible diffusion-influenced reactions and (3) reactions with an intermediate state such as enzymatic catalysis. The results are validated by comparison with those obtained by the Independent Reaction Times (IRT) method. This work is part of our effort to develop models to understand the role of radiation chemistry in radiation effects on the human body and may eventually be included in event-based models of space radiation risk.
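The Green's-function generators themselves are involved, but the exactness idea they rely on can be illustrated with inversion sampling for a simple distribution (a generic example, not the paper's algorithms):

```python
import math
import random

def sample_exponential(rate, rng):
    """Exact inversion sampling: if U ~ Uniform(0, 1), then
    -log(1 - U) / rate is exactly exponentially distributed, with no
    discretization error of the kind a table look-up introduces."""
    return -math.log1p(-rng.random()) / rate

rng = random.Random(42)
draws = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)  # expected value is 1 / rate = 0.5
```

Exact generators like this need only the formula, hence the kilobyte-scale memory footprint noted in the abstract, in contrast to precomputed tables.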
Song, Zhuoyi; Zhou, Yu; Juusola, Mikko
2016-01-01
Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A four-module, biophysically realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the first module of the four-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity to light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity to light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and the light intensity. Specifically, its contribution to light adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even in daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
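The dependence of multi-photon-hit likelihood on the number of microvilli can be sketched with a simple Poisson argument. This is a hedged illustration of the abstract's point, not code from the paper; the function name and example numbers are assumptions:

```python
import math

def multi_hit_probability(photons, microvilli):
    """Probability that a given microvillus receives two or more photons
    in one sampling window, assuming photons land uniformly at random
    (Poisson approximation with rate photons/microvilli)."""
    lam = photons / microvilli
    # P(N >= 2) = 1 - P(N=0) - P(N=1) for N ~ Poisson(lam)
    return 1.0 - math.exp(-lam) * (1.0 + lam)

# With many thousands of microvilli the multi-hit probability is tiny,
# but it grows quickly as the number of sampling units shrinks.
p_many = multi_hit_probability(1000, 30000)
p_few = multi_hit_probability(1000, 1000)
```

For small rates the probability scales roughly as lam²/2, which is why photoreceptors with tens of thousands of microvilli see almost no simultaneous hits.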
Sorenson, Susan B; Joshi, Manisha; Sivitz, Elizabeth
2014-02-01
Rape awareness and prevention programs are common on college campuses and a potentially useful way to reach large numbers of young adults. One largely unexamined potential mediator or moderator of program effectiveness is the personal knowledge of student audiences. In this study, we assess the prevalence of knowing a victim and, notably, a perpetrator of sexual assault. A stratified random sample of 2,400 undergraduates was recruited for an online survey about sexual assault. A total of 53.5% participated and yielded a sample representative of the student body. Sixteen questions were modified from the Sexual Experiences Survey to assess whether participants knew a victim of any one of eight types of sexual assault. Findings indicate that students begin college with considerable personal knowledge of sexual assault victimization and perpetration. Nearly two thirds (64.5%) reported that they know one or more women who were a victim of any one of eight types of sexual assault, and over half (52.4%) reported that they know one or more men who perpetrated any of the types of sexual assault. Most students reported knowing victims and perpetrators of multiple types of assault. Knowledge varied substantially by gender and ethnicity. Students' preexisting personal knowledge should be included in assessments of program effectiveness and, ideally, in program design.
Model-wise and point-wise random sample consensus for robust regression and outlier detection.
El-Melegy, Moumen T
2014-11-01
Popular regression techniques often suffer in the presence of data outliers. Most previous efforts to solve this problem have focused on using an estimation algorithm that minimizes a robust M-estimator-based error criterion instead of the usual non-robust mean squared error. However, the robustness gained from M-estimators is still low. This paper addresses robust regression and outlier detection in a random sample consensus (RANSAC) framework. It studies the classical RANSAC framework and highlights its model-wise nature for processing the data. Furthermore, it introduces for the first time a point-wise strategy for RANSAC. New estimation algorithms are developed following both the model-wise and point-wise RANSAC concepts. The proposed algorithms' theoretical robustness and breakdown points are investigated in a novel probabilistic setting. While the proposed concepts and algorithms are generic and general enough to accommodate many regression machineries, the paper focuses on multilayered feed-forward neural networks in solving regression problems. The algorithms are evaluated on synthetic and real data contaminated with high degrees of outliers, and compared to existing neural network training algorithms. Furthermore, to improve the time performance, parallel implementations of the two algorithms are developed and assessed to utilize the multiple CPU cores available on modern computers.
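For readers unfamiliar with the classical (model-wise) RANSAC that the paper builds on, a minimal line-fitting sketch follows. The point-wise variant and the neural-network machinery are not reproduced, and all names and tolerances are illustrative:

```python
import random

def ransac_line(points, n_iters=200, inlier_tol=1.0, seed=0):
    """Classical (model-wise) RANSAC for fitting a line y = a*x + b."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        # Minimal sample: two points define a candidate line.
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Score the model by the size of its consensus (inlier) set.
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Noisy scenario: points on y = 2x + 1 plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -30)]
(a, b), inliers = ransac_line(pts)
```

Each iteration fits a candidate model to a minimal random sample and keeps the model with the largest consensus set, which makes the fit robust to gross outliers that would wreck a least-squares estimate.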
Misperceptions of spoken words: Data from a random sample of American English words
Albert Felty, Robert; Buchwald, Adam; Gruenenfelder, Thomas M.; Pisoni, David B.
2013-01-01
This study reports a detailed analysis of incorrect responses from an open-set spoken word recognition experiment of 1428 words designed to be a random sample of the entire American English lexicon. The stimuli were presented in six-talker babble to 192 young, normal-hearing listeners at three signal-to-noise ratios (0, +5, and +10 dB). The results revealed several patterns: (1) errors tended to have a higher frequency of occurrence than did the corresponding target word, and frequency of occurrence of error responses was significantly correlated with target frequency of occurrence; (2) incorrect responses were close to the target words in terms of number of phonemes and syllables but had a mean edit distance of 3; (3) for syllables, substitutions were much more frequent than either deletions or additions; for phonemes, deletions were slightly more frequent than substitutions; both were more frequent than additions; and (4) for errors involving just a single segment, substitutions were more frequent than either deletions or additions. The raw data are being made available to other researchers as supplementary material to form the beginnings of a database of speech errors collected under controlled laboratory conditions. PMID:23862832
Vrugt, Jasper A; Hyman, James M; Robinson, Bruce A; Higdon, Dave; Ter Braak, Cajo J F; Diks, Cees G H
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
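The core idea, generating proposals from the difference of two randomly chosen other chains, can be sketched with the simpler Differential Evolution Markov Chain scheme that DREAM extends. Randomized-subspace updates and adaptive tuning are omitted, and all parameter values here are illustrative assumptions:

```python
import math
import random

def de_mc(logpdf, n_chains=8, n_steps=4000, eps=1e-4, seed=1):
    """Minimal Differential Evolution Markov Chain sampler (1-D sketch).

    Proposal: x_i' = x_i + gamma * (x_a - x_b) + small noise, accepted
    with the usual Metropolis rule.
    """
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2.0)          # standard choice for dimension 1
    chains = [rng.uniform(-5.0, 5.0) for _ in range(n_chains)]
    samples = []
    for _ in range(n_steps):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = (chains[i] + gamma * (chains[a] - chains[b])
                    + rng.gauss(0.0, eps))
            # Metropolis acceptance; log(1-u) avoids log(0).
            if math.log(1.0 - rng.random()) < logpdf(prop) - logpdf(chains[i]):
                chains[i] = prop
            samples.append(chains[i])
    return samples

# Standard normal target; keep the second half as post-burn-in draws.
draws = de_mc(lambda x: -0.5 * x * x)
tail = draws[len(draws) // 2:]
```

Because jump directions come from the current population, the proposal automatically adapts its scale and orientation to the target, which is the intuition behind DREAM's efficiency gains.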
Image reconstruction in EIT with unreliable electrode data using random sample consensus method
NASA Astrophysics Data System (ADS)
Jeon, Min Ho; Khambampati, Anil Kumar; Kim, Bong Seok; In Kang, Suk; Kim, Kyung Youn
2017-04-01
In electrical impedance tomography (EIT), it is important to acquire reliable measurement data through the EIT system to achieve a good reconstructed image. In order to have reliable data, various methods for checking and optimizing the EIT measurement system have been studied. However, most of these methods involve additional cost for testing, and the measurement setup is often evaluated before the experiment. It is therefore useful to have a method that can detect faulty electrode data during the experiment without any additional cost. This paper presents a method based on random sample consensus (RANSAC) to find incorrect data from faulty electrodes in EIT measurements. RANSAC is a curve-fitting method that removes outliers from measurement data. The RANSAC method is applied with the Gauss-Newton (GN) method for image reconstruction of a human thorax with faulty data. Numerical and phantom experiments are performed, and the reconstruction performance of the proposed RANSAC method with GN is compared with the conventional GN method. The results show that RANSAC with GN has better reconstruction performance than the conventional GN method in the presence of faulty electrode data.
Huang, Rimao; Qiu, Xuesong; Rui, Lanlan
2011-01-01
Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, has several deficiencies. First, assigning the fault detection task only to the manager node puts the whole network out of balance; this quickly overloads the already heavily burdened manager node, which in turn shortens the lifetime of the whole network. Second, probing at a fixed frequency often generates too much useless network traffic, which wastes the limited network energy. Third, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contains most of the faults. We then present a simple random sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. Simulation experiments demonstrate that the proposed algorithm and adjustment rule can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
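The probe-station selection step can be as simple as a without-replacement random draw from the candidate nodes, spreading the probing burden instead of always using the manager node. This is a hedged sketch with illustrative names; it does not reproduce the paper's full protocol or its frequency-adjustment rule:

```python
import random

def choose_probe_stations(candidates, k, seed=None):
    """Pick k probe stations by simple random sampling without replacement.

    `candidates` is any list of node identifiers; `k` is how many probe
    stations to activate for the next probing round.
    """
    rng = random.Random(seed)
    return rng.sample(candidates, k)

# Hypothetical cluster heads; rotate 5 of them in as probe stations.
heads = [f"node-{i}" for i in range(20)]
probes = choose_probe_stations(heads, k=5, seed=42)
```

Re-drawing the sample each round rotates the probing load across the network, which is the load-balancing property the abstract argues for.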
Cognitive deficits and morphological cerebral changes in a random sample of social drinkers.
Bergman, H
1985-01-01
A random sample of 200 men and 200 women taken from the general population as well as subsamples of 31 male and 17 female excessive social drinkers were investigated with neuropsychological tests and computed tomography of the brain. Relatively high alcohol intake per drinking occasion did not give evidence of cognitive deficits or morphological cerebral changes. However, in males, mild cognitive deficits and morphological cerebral changes as a result of high recent alcohol intake, particularly during the 24-hr period prior to the investigation, were observed. When excluding acute effects of recent alcohol intake, mild cognitive deficits but not morphological cerebral changes that are apparently due to long-term excessive social drinking were observed in males. In females there was no association between the drinking variables and cognitive deficits or morphological cerebral changes, probably due to their less advanced drinking habits. It is suggested that future risk evaluations and estimations of safe alcohol intake should take into consideration the potential risk for brain damage due to excessive social drinking. However, it is premature to make any definite statements about safe alcohol intake and the risk for brain damage in social drinkers from the general population.
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model.
Dean, David S; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb
2016-09-01
The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0,L) and then periodically repeated over the whole real line, and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ∼ A/f², i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary order k, negative or positive, and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).
Abuse-related trauma forward medical care in a randomly sampled nationwide population
Ho, Cheng-Maw; Lee, Chih-Hsin; Wang, Jann-Yuan; Lee, Po-Huang; Lai, Hong-Shiee; Hu, Rey-Heng; Chen, Jin-Shing
2016-01-01
Abuse-related trauma remains a global health issue; however, nationwide reports are scarce. We aim to estimate the incidence of abuse-related trauma forward medical care and to identify its characteristics and clinical course in Taiwan. Patients with trauma between 2005 and 2007 that occurred 3 months before or after a diagnosis of abuse were identified from a randomly sampled nationwide longitudinal health insurance database of 1 million beneficiaries. The patients' demographic data, injury patterns, and medical resource utilization were measured, stratified by age and sex, and compared using the chi-square test. Risk factors for a next trauma event were identified using Cox regression analysis. Ninety-three patients (65 females) were identified (mean age, 20.6 ± 16.3 years), including 61.3% under 18 years of age. For the first trauma event, 68 patients (73.1%) visited the emergency room, 63 (67.7%) received an intervention, and 14 (15.1%) needed hospital care. Seven (7.5%), all less than 11 years old, had intracranial hemorrhage and required intensive care. Thirty-three (35.5%) left with complications or sequelae, or required rehabilitation, but all survived. Of the 34 victims of sexual abuse, 32 were aged less than 18 years. Men received more mood stabilizers or antipsychotics (50.0% vs 10.7%, P = 0.030) and reeducative psychotherapy (25.0% vs 0, P = 0.044). Risk factors for a next trauma event were injury involving the extremities (hazard ratio [HR]: 5.27 [2.45–11.33]) and use of antibiotics (HR: 4.21 [1.45–12.24]) at the first trauma event. Abuse-related trauma has heterogeneous presentations among subgroups. Clinicians should be alert in providing timely diagnosis and individualized intervention. PMID:27787382
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g., Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter, and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means.
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
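A minimal, GIS-free sketch of the stratified selection idea described above, where categories stand in for land-use or hydrogeologic settings and different numbers of sites are drawn per category. All names and counts are illustrative assumptions, not the report's software:

```python
import random
from collections import defaultdict

def stratified_site_selection(sites, n_per_category, seed=0):
    """Randomly select sampling sites within each stratum (category).

    `sites` maps site id -> category; `n_per_category` maps
    category -> number of sites to select from that category.
    """
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for site, cat in sites.items():
        by_cat[cat].append(site)
    selected = {}
    for cat, n in n_per_category.items():
        pool = sorted(by_cat[cat])        # sort for reproducible draws
        selected[cat] = rng.sample(pool, min(n, len(pool)))
    return selected

# Hypothetical wells tagged with a land-use category.
sites = {f"well-{i}": ("urban" if i % 2 else "agricultural")
         for i in range(30)}
picks = stratified_site_selection(sites, {"urban": 3, "agricultural": 5})
```

Selecting within strata guarantees coverage of every category, which supports the hypothesis tests about relations between spatial characteristics and water quality mentioned above.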
Jonsson, Pär; Wuolikainen, Anna; Thysell, Elin; Chorell, Elin; Stattin, Pär; Wikström, Pernilla; Antti, Henrik
Analytical drift is a major source of bias in mass spectrometry-based metabolomics, confounding interpretation and biomarker detection. So far, standard protocols for sample and data analysis have not been able to fully resolve this. We present a combined approach for minimizing the influence of analytical drift on multivariate comparisons of matched or dependent samples in mass spectrometry-based metabolomics studies. The approach builds on a randomization procedure for sample run order, constrained to independent randomizations between and within dependent sample pairs (e.g., pre/post intervention). This is followed by a novel multivariate statistical analysis strategy allowing paired or dependent analyses of individual effects, named OPLS-effect projections (OPLS-EP). We show, using simulated data, that OPLS-EP gives improved interpretation over existing methods and that constrained randomization of sample run order in combination with an appropriate dependent statistical test increases the accuracy and sensitivity and decreases the false omission rate in biomarker detection. We verify these findings and demonstrate the strength of the suggested approach in a clinical data set consisting of LC/MS data of blood plasma samples from patients before and after radical prostatectomy. Here, OPLS-EP, compared to traditional (independent) OPLS discriminant analysis (OPLS-DA) on constrained randomized data, gives a less complex model (3 versus 5 components) as well as higher predictive ability (Q2 = 0.80 versus Q2 = 0.55). We explain this by showing that paired statistical analysis detects 37 unique significant metabolites that were masked for the independent test due to bias, including analytical drift and inter-individual variation.
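One reading of the constrained run-order randomization is two nested shuffles: randomize the order of the dependent pairs, and independently randomize the pre/post order within each pair so that paired samples run close together. This is an interpretation of the description above, not the authors' protocol, and all names are illustrative:

```python
import random

def constrained_run_order(pairs, seed=0):
    """Randomize instrument run order under a pairing constraint.

    Each pair stays adjacent in the run; the within-pair order and the
    between-pair order are shuffled independently.
    """
    rng = random.Random(seed)
    pairs = [list(p) for p in pairs]
    for p in pairs:
        rng.shuffle(p)          # independent within-pair randomization
    rng.shuffle(pairs)          # independent between-pair randomization
    return [sample for p in pairs for sample in p]

order = constrained_run_order([("p1_pre", "p1_post"),
                               ("p2_pre", "p2_post"),
                               ("p3_pre", "p3_post")])
```

Keeping paired samples adjacent means slowly varying analytical drift affects both members of a pair almost equally, so it largely cancels in a paired comparison.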
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Estimating Stanine Scores from a Non-Random Sample: A Methodology Discussion.
ERIC Educational Resources Information Center
Rhodes-Kline, Anne K.
A methodology for estimating descriptive statistics, specifically the mean and the variance, from a sample that is not randomly drawn is described. The method involves breaking the sample down into subgroups and weighting the descriptive statistics associated with each subgroup by the proportion of the population that the subgroup represents. This…
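The subgroup-weighting idea corresponds to the law of total expectation and total variance. A minimal sketch under the assumption that each subgroup's population share, mean, and variance are known; the function name and numbers are illustrative, and the report's exact formulas may differ:

```python
def poststratified_estimates(subgroups):
    """Combine subgroup statistics into population-level mean and variance.

    `subgroups` is a list of (population_share, mean, variance) tuples,
    with shares summing to 1.
    """
    mean = sum(w * m for w, m, _ in subgroups)
    # Total variance = weighted within-group variance
    #                + weighted between-group squared deviation.
    var = sum(w * (v + (m - mean) ** 2) for w, m, v in subgroups)
    return mean, var

# Two strata: 60% of the population with mean 4, var 1;
# 40% with mean 6, var 2.
mean, var = poststratified_estimates([(0.6, 4.0, 1.0), (0.4, 6.0, 2.0)])
```

Weighting by population share rather than sample share is what corrects for the non-random draw: over-represented subgroups are down-weighted to their true proportion.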
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
Survey of the histamine content in fish samples randomly selected from the Greek retail market.
Vosikis, V; Papageorgopoulou, A; Economou, V; Frillingos, S; Papadopoulou, C
2008-01-01
The histamine content of fish sold in the Greek retail market was surveyed, and the performance of high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA) methods for the determination of histamine was compared. A total of 125 samples of fresh and canned tuna, fresh and canned sardines, deep-frozen swordfish, smoked and deep-frozen mackerel, anchovies, and salted and smoked herring were analysed by HPLC (55 samples), ELISA (106 samples), or both methods (36 samples). Histamine levels as determined by HPLC ranged from 2.7 mg kg(-1) to 220 mg kg(-1). The highest histamine concentrations obtained by HPLC were found in herring and anchovy samples. Eight of the 55 samples (14.5%) analysed by HPLC exceeded the US Food and Drug Administration (USFDA) limit (50 mg kg(-1)), while 16 of the 106 samples (15%) analysed by ELISA exceeded the limit. The results show that for histamine concentrations below 50 mg kg(-1) there is good agreement between ELISA and HPLC, but above 50 mg kg(-1) substantial differences were found.
Random Transect with Adaptive Clustering Sampling Design - ArcPad Applet Manual
2011-09-01
…developed for ArcPad®, mobile geographical information system (GIS) software for field applications developed by ESRI® of Redlands, CA. ArcPad is designed to…occurrence maps (Rew et al. 2005) to guide future surveying and management efforts. The RTAC combines features of the two NIS sampling designs described
Code to generate random identifiers and select QA/QC samples
Mehnert, Edward
1992-01-01
SAMPLID is a PC-based FORTRAN-77 code that generates unique numbers for the identification of samples, the selection of QA/QC samples, and the generation of labels. These procedures are tedious, but using a computer code such as SAMPLID can increase efficiency and reduce or eliminate errors and bias. The algorithm used in SAMPLID for the generation of pseudorandom numbers is free of statistical flaws present in commonly available algorithms.
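In the spirit of SAMPLID (but not its FORTRAN-77 implementation), unique pseudorandom identifiers and a random QA/QC subset can be drawn as follows; the ID format, the ID space, and the QA/QC fraction are illustrative assumptions:

```python
import random

def assign_sample_ids(n_samples, qaqc_fraction=0.1, seed=0):
    """Generate unique pseudorandom sample IDs and flag a random
    subset as QA/QC duplicates.

    Returns a list of (label, is_qaqc) tuples.
    """
    rng = random.Random(seed)
    # Sampling without replacement guarantees uniqueness of the IDs.
    ids = rng.sample(range(100000, 999999), n_samples)
    n_qaqc = max(1, round(n_samples * qaqc_fraction))
    qaqc = set(rng.sample(ids, n_qaqc))
    return [(f"S-{i}", i in qaqc) for i in ids]

labels = assign_sample_ids(50)
```

Drawing both the IDs and the QA/QC subset from a seeded generator removes the human bias (and tedium) the abstract describes, while keeping the assignment reproducible.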
Water quality in dental chair units. A random sample in the canton of St. Gallen.
Barben, Jürg; Kuehni, Claudia E; Schmid, Jürg
2009-01-01
This study aimed to identify the microbial contamination of water from dental chair units (DCUs), using the prevalence of Pseudomonas aeruginosa, Legionella species, and heterotrophic bacteria as markers of water pollution in the area of St. Gallen, Switzerland. Water (250 ml) from 76 DCUs was collected twice (early in the morning before using any of the instruments, and after using the DCUs for at least two hours) either from the high-speed handpiece tube, the 3-in-1 syringe, or the micromotor for water quality testing. An increased bacterial count (>300 CFU/ml) was found in 46 (61%) samples taken before use of the DCU, but only in 29 (38%) samples taken two hours after use. Pseudomonas aeruginosa was found in both water samples in 6/76 (8%) of the DCUs. Legionella were found in both samples in 15 (20%) of the DCUs tested. Legionella anisa was identified in seven samples and Legionella pneumophila in eight. DCUs that were less than five years old were contaminated less often than older units (25% and 77%, p<0.001). This difference remained significant (p=0.0004) when adjusted for manufacturer and sampling location in a multivariable logistic regression. A large proportion of the DCUs tested complied neither with the Swiss drinking water standards nor with the recommendations of the American Centers for Disease Control and Prevention (CDC).
Network sampling coverage II: The effect of non-random missing data on network measurement.
Smith, Jeffrey A; Moody, James; Morgan, Jonathan
2017-01-01
Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more/less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures, and behavioral homophily is more robust than degree homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available Java application) to gauge the bias in their own data.
Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.
2015-01-01
Abstract. Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples. PMID:26305212
Characterization of electron microscopes with binary pseudo-random multilayer test samples
Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2010-07-09
We discuss the results of SEM and TEM measurements with the BPRML test samples fabricated from a BPRML (WSi2/Si with fundamental layer thickness of 3 nm) with a Dual Beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
NASA Astrophysics Data System (ADS)
Habershon, Scott
2015-09-01
Automatically generating chemical reaction pathways is a significant computational challenge, particularly in the case where a given chemical system can exhibit multiple reactants and products, as well as multiple pathways connecting these. Here, we outline a computational approach to allow automated sampling of chemical reaction pathways, including sampling of different chemical species at the reaction end-points. The key features of this scheme are (i) introduction of a Hamiltonian which describes a reaction "string" connecting reactant and products, (ii) definition of reactant and product species as chemical connectivity graphs, and (iii) development of a scheme for updating the chemical graphs associated with the reaction end-points. By performing molecular dynamics sampling of the Hamiltonian describing the complete reaction pathway, we are able to sample multiple different paths in configuration space between given chemical products; by periodically modifying the connectivity graphs describing the chemical identities of the end-points we are also able to sample the allowed chemical space of the system. Overall, this scheme therefore provides a route to automated generation of a "roadmap" describing chemical reactivity. This approach is first applied to model dissociation pathways in formaldehyde, H2CO, as described by a parameterised potential energy surface (PES). A second application to the HCo(CO)3 catalyzed hydroformylation of ethene (oxo process), using density functional tight-binding to model the PES, demonstrates that our graph-based approach is capable of sampling the intermediate paths in the commonly accepted catalytic mechanism, as well as several secondary reactions. Further algorithmic improvements are suggested which will pave the way for treating complex multi-step reaction processes in a more efficient manner.
Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples
NASA Astrophysics Data System (ADS)
Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.
2016-06-01
In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthesis of the affected areas via sparse reconstruction. For this purpose, a cloud and cloud-shadow mask must be provided. With respect to previous works, the process has an increased degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values below a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision, and yields results with low overall radiometric and spectral distortions.
Haggard, Megan C; Kang, Linda L; Rowatt, Wade C; Shen, Megan Johnson
2015-01-01
The connection between religiousness and volunteering for the community can be explained through two distinct features of religion. First, religious organizations are social groups that encourage members to help others through planned opportunities. Second, helping others is regarded as an important value for members in religious organizations to uphold. We examined the relationship between religiousness and self-reported community volunteering in two independent national random surveys of American adults (i.e., the 2005 and 2007 waves of the Baylor Religion Survey). In both waves, frequency of religious service attendance was associated with an increase in likelihood that individuals would volunteer, whether through their religious organization or not, whereas frequency of reading sacred texts outside of religious services was associated with an increase in likelihood of volunteering only for or through their religious organization. The role of religion in community volunteering is discussed in light of these findings.
Gutierrez-Zotes, Alfonso; Labad, Javier; Martorell, Lourdes; Gaviria, Ana; Bayón, Carmen; Vilella, Elisabet; Cloninger, C. Robert
2015-01-01
Objectives. The psychometric properties regarding sex and age for the revised version of the Temperament and Character Inventory (TCI-R) and its derived short version, the Temperament and Character Inventory (TCI-140), were evaluated with a randomized sample from the community. Methods. A randomized sample of 367 normal adult subjects from a Spanish municipality, who were representative of the general population based on sex and age, participated in the current study. Descriptive statistics and internal consistency according to α coefficient were obtained for all of the dimensions and facets. T-tests and univariate analyses of variance, followed by Bonferroni tests, were conducted to compare the distributions of the TCI-R dimension scores by age and sex. Results. On both the TCI-R and TCI-140, women had higher scores for Harm Avoidance, Reward Dependence and Cooperativeness than men, whereas men had higher scores for Persistence. Age correlated negatively with Novelty Seeking, Reward Dependence and Cooperativeness and positively with Harm Avoidance and Self-transcendence. Young subjects between 18 and 35 years had higher scores than older subjects in NS and RD. Subjects between 51 and 77 years scored higher in both HA and ST. The alphas for the dimensions were between 0.74 and 0.87 for the TCI-R and between 0.63 and 0.83 for the TCI-140. Conclusion. Results, which were obtained with a randomized sample, suggest that there are specific distributions of personality traits by sex and age. Overall, both the TCI-R and the abbreviated TCI-140 were reliable in the 'good-to-excellent' range. A strength of the current study is the representativeness of the sample. PMID:26713237
Multivariate Multi-Objective Allocation in Stratified Random Sampling: A Game Theoretic Approach
Hussain, Ijaz; Shoukry, Alaa Mohamd
2016-01-01
We consider the problem of multivariate multi-objective allocation where no or only limited information about the stratum variances is available. Results show that a game theoretic approach (based on weighted goal programming) can be applied to sample size allocation problems. We use a simulation technique to determine the payoff matrix and to solve a minimax game. PMID:27936039
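The paper's payoff matrix comes from simulating stratified samples, which is not reproduced here; the toy matrix below is invented purely to show the minimax step: rows are candidate allocations, columns are survey variables, entries are relative precision losses, and the chosen allocation minimizes the worst-case loss.

```python
# Hypothetical payoff matrix: allocation -> loss per survey variable.
payoff = {
    "proportional": [0.12, 0.30, 0.25],
    "equal":        [0.20, 0.22, 0.28],
    "neyman_var1":  [0.05, 0.40, 0.35],
}

def minimax_allocation(payoff):
    """Return (allocation, worst-case loss) minimising the maximum loss."""
    return min(((name, max(losses)) for name, losses in payoff.items()),
               key=lambda pair: pair[1])

best, worst_loss = minimax_allocation(payoff)
print(best, worst_loss)   # "equal" has worst-case 0.28, the smallest maximum
```

An allocation tuned to one variable (here "neyman_var1") can be best for that variable yet worst overall, which is why the multi-objective problem is framed as a game.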
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
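Although the abstract is cut off, the weighting contrast it sets up is standard and can be sketched under the usual DerSimonian-Laird random-effects model; the effect sizes, variances, and sample sizes below are made-up illustrative numbers.

```python
# Compare inverse-variance weights (estimated, so affected by sampling error)
# with simple sample-size weights in a random-effects meta-analysis.

effects   = [0.10, 0.60, 0.35, -0.05]    # observed effect sizes d_i (invented)
variances = [0.02, 0.05, 0.01, 0.03]     # within-study sampling variances v_i
n_sizes   = [120, 60, 200, 90]           # per-study sample sizes

def dl_tau2(effects, variances):
    """DerSimonian-Laird estimate of the between-study variance tau^2."""
    w = [1 / v for v in variances]
    mean_fe = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
    q = sum(wi * (d - mean_fe) ** 2 for wi, d in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

def weighted_mean(effects, weights):
    return sum(wi * d for wi, d in zip(weights, effects)) / sum(weights)

tau2 = dl_tau2(effects, variances)
w_invvar = [1 / (v + tau2) for v in variances]   # estimated inverse-variance weights
mean_iv = weighted_mean(effects, w_invvar)
mean_n  = weighted_mean(effects, n_sizes)        # sample-size weighting alternative
print(round(mean_iv, 3), round(mean_n, 3))
```

Sample sizes are known exactly, while the inverse-variance weights depend on estimated variances and tau², which is the trade-off the article examines.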
Vergara, Ismael A; Villouta, Pamela; Herrera, Sandra; Melo, Francisco
2012-05-01
The thirteen autosomal STR loci of the CODIS system were typed from the DNA of 732 unrelated male individuals sampled from different locations in Chile. This is the first report of allele frequencies for the thirteen STR loci of the CODIS system in the Chilean population.
Lusinchi, Dominic
2017-02-13
The scientific pollsters (Archibald Crossley, George H. Gallup, and Elmo Roper) emerged onto the American news media scene in 1935. Much of what they did in the following years (1935-1948) was to promote both the political and scientific legitimacy of their enterprise. They sought to be recognized as the sole legitimate producers of public opinion. In this essay I examine the mostly overlooked rhetorical work deployed by the pollsters to publicize the scientific credentials of their polling activities, and the central role the concept of sampling has had in that pursuit. First, they distanced themselves from the failed straw poll by claiming that their sampling methodology based on quotas was informed by science. Second, although in practice they did not use random sampling, they relied on it rhetorically to derive the symbolic benefits of being associated with the "laws of probability."
Abdulatif, M; Mukhtar, A; Obayah, G
2015-11-01
We have evaluated the pitfalls in reporting sample size calculation in randomized controlled trials (RCTs) published in the 10 highest impact factor anaesthesia journals. Superiority RCTs published in 2013 were identified and checked for the basic components required for sample size calculation and replication. The difference between the reported and replicated sample size was estimated. The sources used for estimating the expected effect size (Δ) were identified, and the difference between the expected and observed effect sizes (Δ gap) was estimated. We enrolled 194 RCTs. Sample size calculation was reported in 91.7% of studies. Replication of sample size calculation was possible in 80.3% of studies. The original and replicated sample sizes were identical in 67.8% of studies. The difference between the replicated and reported sample sizes exceeded 10% in 28.7% of studies. The expected and observed effect sizes were comparable in RCTs with positive outcomes (P=0.1). Studies with a negative outcome tended to overestimate the effect size (Δ gap 42%, 95% confidence interval 32-51%), P<0.001. The post hoc power of negative studies was 20.2% (95% confidence interval 13.4-27.1%). Studies using data derived from pilot studies for sample size calculation were associated with the smallest Δ gaps (P=0.008). Sample size calculation is frequently reported in anaesthesia journals, but the details of the basic elements for calculation are not consistently provided. In almost one-third of RCTs, the reported and replicated sample sizes were not identical, and the assumptions for the expected effect size and variance were not supported by relevant literature or pilot studies.
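The basic elements the audit checks for feed a standard formula: the per-group size for a two-sided comparison of means needs the expected difference Δ, the SD σ, the significance level α, and the power 1−β. The numbers in this sketch are textbook illustrations, not values from any of the audited trials.

```python
from statistics import NormalDist
from math import ceil

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of means."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Classic textbook case: detect a 10-unit difference with SD 20.
print(n_per_group(delta=10, sigma=20))  # -> 63 per group at 80% power
```

Replicating a reported sample size, as the audit does, amounts to re-running this calculation from the stated Δ, σ, α, and power; when any of the four is missing, replication fails.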
Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.
Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K
2016-01-01
The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions the estimates depend on the study population. It is important to be aware of this issue and define a certain population under surveillance in order to obtain consistent results in an application of serological measures for public health purposes.
A Bayesian nonlinear random effects model for identification of defective batteries from lot samples
NASA Astrophysics Data System (ADS)
Cripps, Edward; Pecht, Michael
2017-02-01
Numerous materials and processes go into the manufacture of lithium-ion batteries, resulting in variations across batteries' capacity fade measurements. Accounting for this variability is essential when determining whether batteries are performing satisfactorily. Motivated by a real manufacturing problem, this article presents an approach to assess whether lithium-ion batteries from a production lot are not representative of a healthy population of batteries from earlier production lots, and to determine, based on capacity fade data, the earliest stage (in terms of cycles) that battery anomalies can be identified. The approach involves the use of a double exponential function to describe nonlinear capacity fade data. To capture the variability of repeated measurements on a number of individual batteries, the double exponential function is then embedded as the individual batteries' trajectories in a Bayesian random effects model. The model allows for probabilistic predictions of capacity fading not only at the underlying mean process level but also at the individual battery level. The results show good predictive coverage for individual batteries and demonstrate that, for our data, non-healthy lithium-ion batteries can be identified in as few as 50 cycles.
Global stratigraphy of Venus: Analysis of a random sample of thirty-six test areas
NASA Technical Reports Server (NTRS)
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. Mapping of such units and structures in 36 randomly distributed large regions shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky, 1993) is the earliest event detected. Our stratigraphic analyses suggest that following tessera formation, extensive volcanic flooding resurfaced at least 85% of the planet in the form of the presently-ridged and fractured plains. Several lines of evidence favor a high flux in the post-tessera period but we have no independent evidence for the absolute duration of ridged plains emplacement. During this time, the net state of stress in the lithosphere apparently changed from extensional to compressional, first in the form of extensive ridge belt development, followed by the formation of extensive wrinkle ridges on the flow units. Subsequently, there occurred local emplacement of smooth and lobate plains units which are presently essentially undeformed. The major events in the latest 10% of the presently preserved history of Venus are continued rifting and some associated volcanism, and the redistribution of eolian material largely derived from impact crater deposits. Detailed geologic mapping and stratigraphic synthesis are necessary to test this sequence and to address many of
The Prospective and Retrospective Memory Questionnaire: a population-based random sampling study.
Piauilino, D C; Bueno, O F A; Tufik, S; Bittencourt, L R; Santos-Silva, R; Hachul, H; Gorenstein, C; Pompéia, S
2010-05-01
The Prospective and Retrospective Memory Questionnaire (PRMQ) has been shown to have acceptable reliability and factorial, predictive, and concurrent validity. However, the PRMQ has never been administered to a probability sample survey representative of all ages in adulthood, nor have previous studies controlled for factors that are known to influence metamemory, such as affective status. Here, the PRMQ was applied in a survey adopting a probabilistic three-stage cluster sample representative of the population of Sao Paulo, Brazil, according to gender, age (20-80 years), and economic status (n=1042). After excluding participants who had conditions that impair memory (depression, anxiety, used psychotropics, and/or had neurological/psychiatric disorders), in the remaining 664 individuals we (a) used confirmatory factor analyses to test competing models of the latent structure of the PRMQ, and (b) studied effects of gender, age, schooling, and economic status on prospective and retrospective memory complaints. The model with the best fit confirmed the same tripartite structure (general memory factor and two orthogonal prospective and retrospective memory factors) previously reported. Women complained more of general memory slips, especially those in the first 5 years after menopause, and there were more complaints of prospective than retrospective memory, except in participants with lower family income.
Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas
NASA Technical Reports Server (NTRS)
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10(exp 6) sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera fon
Use of pornography in a random sample of Norwegian heterosexual couples.
Daneback, Kristian; Traeen, Bente; Månsson, Sven-Axel
2009-10-01
This study examined the use of pornography in couple relationships to enhance the sex life. The study contained a representative sample of 398 heterosexual couples aged 22-67 years. Data collection was carried out by self-administered postal questionnaires. The majority (77%) of the couples did not report any kind of pornography use to enhance their sex life. In 15% of the couples, both partners had used pornography; in 3% of the couples, only the female partner had used pornography; and, in 5% of the couples, only the male partner had used pornography for this purpose. Based on the results of a discriminant function analysis, it is suggested that couples in which one or both partners used pornography had a more permissive erotic climate compared to the couples who did not use pornography. In couples where only one partner used pornography, we found more problems related to arousal (male) and negative self-perception (female). These findings could be of importance for clinicians who work with couples.
Structural Effects of Network Sampling Coverage I: Nodes Missing at Random
Smith, Jeffrey A.; Moody, James
2013-01-01
Network measures assume a census of a well-bounded population. This level of coverage is rarely achieved in practice, however, and we have only limited information on the robustness of network measures to incomplete coverage. This paper examines the effect of node-level missingness on 4 classes of network measures: centrality, centralization, topology and homophily across a diverse sample of 12 empirical networks. We use a Monte Carlo simulation process to generate data with known levels of missingness and compare the resulting network scores to their known starting values. As with past studies (Borgatti et al 2006; Kossinets 2006), we find that measurement bias generally increases with more missing data. The exact rate and nature of this increase, however, varies systematically across network measures. For example, betweenness and Bonacich centralization are quite sensitive to missing data while closeness and in-degree are robust. Similarly, while the tau statistic and distance are difficult to capture with missing data, transitivity shows little bias even with very high levels of missingness. The results are also clearly dependent on the features of the network. Larger, more centralized networks are generally more robust to missing data, but this is especially true for centrality and centralization measures. More cohesive networks are robust to missing data when measuring topological features but not when measuring centralization. Overall, the results suggest that missing data may have quite large or quite small effects on network measurement, depending on the type of network and the question being posed. PMID:24311893
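The Monte Carlo design described above (generate data with known missingness, compare recomputed scores to their true values) can be sketched in a few lines; this stdlib toy is not the authors' code, and it uses a random graph and mean degree rather than their 12 empirical networks and measure battery.

```python
import random

def erdos_renyi(n, p, rng):
    """Undirected Erdos-Renyi random graph as an adjacency dict."""
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def mean_degree(edges):
    return sum(len(nbrs) for nbrs in edges.values()) / len(edges)

def observed(edges, missing_frac, rng):
    """Subgraph induced by a random (1 - missing_frac) share of the nodes."""
    keep = set(rng.sample(sorted(edges), round(len(edges) * (1 - missing_frac))))
    return {i: edges[i] & keep for i in keep}

rng = random.Random(42)
g = erdos_renyi(200, 0.05, rng)
true_val = mean_degree(g)
bias = [mean_degree(observed(g, 0.30, rng)) - true_val for _ in range(50)]
print(round(true_val, 2), round(sum(bias) / len(bias), 2))  # average bias is negative
```

Each surviving node loses the edges to its deleted neighbors, so the observed mean degree is biased downward, a simple instance of the bias-grows-with-missingness pattern the paper quantifies measure by measure.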
Loneliness and Ethnic Composition of the School Class: A Nationally Random Sample of Adolescents.
Madsen, Katrine Rich; Damsgaard, Mogens Trab; Rubin, Mark; Jervelund, Signe Smith; Lasgaard, Mathias; Walsh, Sophie; Stevens, Gonneke G W J M; Holstein, Bjørn E
2016-07-01
Loneliness is a public health concern that increases the risk for several health, behavioral and academic problems among adolescents. Some studies have suggested that adolescents with an ethnic minority background have a higher risk for loneliness than adolescents from the majority population. The increasing numbers of migrant youth around the world mean growing numbers of heterogeneous school environments in many countries. Even though adolescents spend a substantial amount of time at school, there is currently very little non-U.S. research that has examined the importance of the ethnic composition of school classes for loneliness in adolescence. The present research aimed to address this gap by exploring the association between loneliness and three dimensions of the ethnic composition in the school class: (1) membership of ethnic majority in the school class, (2) the size of own ethnic group in the school class, and (3) the ethnic diversity of the school class. We used data from the Danish 2014 Health Behaviour in School-aged Children survey: a nationally representative sample of 4383 (51.2 % girls) 11-15-year-olds. Multilevel logistic regression analyses revealed that adolescents who did not belong to the ethnic majority in the school class had increased odds for loneliness compared to adolescents that belonged to the ethnic majority. Furthermore, having more same-ethnic classmates lowered the odds for loneliness. We did not find any statistically significant association between the ethnic diversity of the school classes and loneliness. The study adds novel and important findings to how ethnicity in a school class context, as opposed to ethnicity per se, influences adolescents' loneliness.
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
Objective: This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Design: Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Measurements: Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. Results: The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. Conclusion: For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty. PMID:22707743
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. Results HPV prevalence for high-risk types was 62.3% (95%CI: 53.7–70.2) detected by s-DRY, 56.2% (95%CI: 47.6–64.4) by Dr-WET, and 54.6% (95%CI: 46.1–62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5–79.8) for s-FTA, 84.6% (95%CI: 66.5–93.9) for s-DRY, and 76.9% (95%CI: 58.0–89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Conclusion Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. Trial Registration International Standard Randomized Controlled Trial Number (ISRCTN): 43310942 PMID:26630353
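The kappa statistic used above to measure agreement between collection methods corrects raw agreement for the agreement expected by chance. A minimal implementation, with made-up positive/negative labels rather than the study's data:

```python
# Cohen's kappa for agreement between two paired label sequences.
# The two label lists below are hypothetical, not the trial's results.
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement: (p_observed - p_expected) / (1 - p_expected)."""
    assert len(a) == len(b) and a
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # agreement expected if the two methods labelled independently
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# toy data: 1 = HPV-positive, 0 = HPV-negative
s_dry = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
s_fta = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(s_dry, s_fta), 2))  # → 0.4
```

Note how 70% raw agreement here yields kappa of only 0.4, the same pattern as the 70.8% agreement / kappa = 0.34 reported above: chance agreement inflates the raw percentage.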
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests’ features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
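The variable-importance idea above can be sketched with generic permutation importance (shuffle one variable at a time and measure the drop in accuracy); this is the general principle behind the Random Forests measure, not the study's implementation, and the model and data below are toy stand-ins.

```python
# Permutation-style variable importance: a variable matters to the
# extent that shuffling it degrades the model's accuracy.
# Model and data here are fabricated for illustration only.
import random

def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=20, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drop = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between variable j and y
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drop += base - accuracy(model, Xp, y)
        importances.append(drop / n_repeats)
    return importances

def model(row):  # toy classifier that only ever looks at variable 0
    return 1 if row[0] > 0.5 else 0

X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.7], [0.1, 0.2], [0.8, 0.9], [0.3, 0.4]]
y = [1, 0, 1, 0, 1, 0]
imp = permutation_importance(model, X, y)  # imp[0] > 0, imp[1] == 0
```

Ranking time-series bands by such scores and keeping only the top half is the kind of subset selection the abstract reports as preserving classification accuracy.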
Alban, Lis; Rugbjerg, Helene; Petersen, Jesper Valentin; Nielsen, Liza Rosenbaum
2016-06-01
more residue cases with higher cost-effectiveness than random monitoring. Sampling 7500 HR pigs and 5000 LR pigs resulted in the most cost-effective monitoring among the alternative scenarios. The associated costs would increase by 4%. A scenario involving testing of 5000 HR and 5000 LR animals would result in slightly fewer positives, but 17% savings in costs. The advantages of using HPLC LC-MS/MS compared to the bioassay are a fast response and a high sensitivity for all relevant substances used in pigs. The Danish abattoir companies have implemented risk-based monitoring along these lines as of January 2016.
Rhodes, Scott D.; McCoy, Thomas P.; Wilkin, Aimee M.; Wolfson, Mark
2013-01-01
This internet-based study was designed to compare health risk behaviors of gay and non-gay university students from stratified random cross-sectional samples of undergraduate students. Mean age of the 4,167 male participants was 20.5 (±2.7) years. Of these, 206 (4.9%) self-identified as gay and 3,961 (95.1%) self-identified as heterosexual. After adjusting for selected characteristics and clustering within university, gay men had higher odds of reporting: multiple sexual partners; cigarette smoking; methamphetamine use; gamma-hydroxybutyrate (GHB) use; other illicit drug use within the past 30 days and during lifetime; and intimate partner violence (IPV). Understanding the health risk behaviors of gay and heterosexual men is crucial to identifying associated factors and intervening upon them using appropriate and tailored strategies to reduce behavioral risk disparities and improve health outcomes. PMID:19882428
Canu, Will H.; Trout, Krystal L.; Nieman, David C.
2012-01-01
Background: The purpose of the present study was to examine the effects of quercetin supplementation on neurocognitive functioning. Methods: A large community sample (n = 941) completed a 12-week supplementation protocol, and participants were randomly assigned to receive 500 mg/day or 1000 mg/day quercetin, or placebo. Results: Results failed to indicate significant effects of quercetin on memory, psychomotor speed, reaction time, attention, or cognitive flexibility, despite large increases in plasma quercetin levels among the quercetin treatment groups. Discussion: Consistent with recent research, this study raises concerns regarding the generalizability of positive findings of in vitro and animal quercetin research, and provides evidence that quercetin may not have an ergogenic effect on neurocognitive functioning in humans. PMID:23983966
Dent, O. F.; Sulway, M. R.; Broe, G. A.; Creasey, H.; Kos, S. C.; Jorm, A. F.; Tennant, C.; Fairley, M. J.
1997-01-01
OBJECTIVE: To examine the association between the average daily alcohol intake of older men in 1982 and cognitive performance and brain atrophy nine years later. SUBJECTS: Random sample of 209 Australian men living in the community who were veterans of the second world war. Their mean age in 1982 was 64.3 years. MAIN OUTCOME MEASURES: 18 standard neuropsychological tests measuring a range of intellectual functions. Cortical, sylvian, and vermian atrophy on computed tomography. RESULTS: Compared with Australian men of the same age in previous studies these men had sustained a high rate of alcohol consumption into old age. However, there was no significant correlation, linear or non-linear, between alcohol consumption in 1982 and results in any of the neuropsychological tests in 1991; neither was alcohol consumption associated with brain atrophy on computed tomography. CONCLUSION: No evidence was found that apparently persistent lifelong consumption of alcohol was related to the cognitive functioning of these men in old age. PMID:9180067
Buller, David B.; Andersen, Peter A.; Walkosz, Barbara J.; Scott, Michael D.; Beck, Larry; Cutter, Gary R.
2016-01-01
Introduction Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. This trial evaluated an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Materials and Methods Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012–14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Results Initially, 41 hotels/resorts (11%) participated but 4 dropped out before posttest. Hotels/resorts were diverse (employees=30 to 900; latitude=24° 78′ N to 50° 52′ N; elevation=2 ft. to 9,726 ft. above sea level), and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4,347 vacationers were observed and 3,531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35–60 years old, highly educated (college education = 68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. Conclusions The trial faced challenges recruiting resorts, but results show that the large, multi-state sample of vacationers was at high risk for solar UV exposure. PMID:26593781
ERIC Educational Resources Information Center
Cochran, Wendell
1976-01-01
Presented is a review of papers presented at the 25th International Geological Congress held August 16-25, 1976, Sydney, Australia. Topics include precambrian geology, tectonics, biostratigraphy, geochemistry, quaternary geology, engineering geology, planetology, geological education, and stress environments. (SL)
Teoh, Shao Thing; Kitamura, Miki; Nakayama, Yasumune; Putri, Sastia; Mukai, Yukio; Fukusaki, Eiichiro
2016-08-01
In recent years, the advent of high-throughput omics technology has made possible a new class of strain engineering approaches, based on identification of possible gene targets for phenotype improvement from omic-level comparison of different strains or growth conditions. Metabolomics, with its focus on the omic level closest to the phenotype, lends itself naturally to this semi-rational methodology. When a quantitative phenotype such as growth rate under stress is considered, regression modeling using multivariate techniques such as partial least squares (PLS) is often used to identify metabolites correlated with the target phenotype. However, linear modeling techniques such as PLS require a consistent metabolite-phenotype trend across the samples, which may not be the case when outliers or multiple conflicting trends are present in the data. To address this, we proposed a data-mining strategy that utilizes random sample consensus (RANSAC) to select subsets of samples with consistent trends for construction of better regression models. By applying a combination of RANSAC and PLS (RANSAC-PLS) to a dataset from a previous study (gas chromatography/mass spectrometry metabolomics data and 1-butanol tolerance of 19 yeast mutant strains), new metabolites were indicated to be correlated with tolerance within certain subsets of the samples. The relevance of these metabolites to 1-butanol tolerance was then validated using single-deletion strains of the corresponding metabolic genes. The results showed that RANSAC-PLS is a promising strategy to identify unique metabolites that provide additional hints for phenotype improvement, which could not be detected by traditional PLS modeling using the entire dataset.
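The RANSAC step described above can be sketched in miniature: repeatedly fit a model on small random subsets, keep the largest consensus (inlier) set, then refit on it. Ordinary 1-D least squares stands in for the PLS model here, and the data are fabricated (a clean line plus two outliers), so this illustrates only the subset-selection idea, not the paper's pipeline.

```python
# Toy RANSAC: fit on random 2-point subsets, keep the fit with the
# largest consensus set, then refit on that set. Least squares stands
# in for PLS; the points are fabricated for illustration.
import random

def fit_line(pts):
    """Least-squares (intercept, slope) for a list of (x, y) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

def ransac_line(points, n_iter=200, tol=0.5, seed=1):
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        a, b = fit_line(rng.sample(points, 2))
        inliers = [(x, y) for x, y in points if abs(y - (a + b * x)) < tol]
        if len(inliers) > len(best):
            best = inliers
    return fit_line(best)  # refit on the consensus set only

# y = 2x + 1 for ten points, plus two gross outliers
points = [(x, 2 * x + 1) for x in range(10)] + [(0.5, 12.0), (8.5, -4.0)]
a, b = ransac_line(points)  # recovers intercept ≈ 1, slope ≈ 2
```

A plain least-squares fit on all twelve points would be pulled off the true line by the outliers; the consensus refit ignores them, which is the same robustness RANSAC-PLS seeks against conflicting metabolite-phenotype trends.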
Gregianin, L J; McGill, A C; Pinheiro, C M; Brunetto, A L
1997-01-01
Neuroblastoma is the most common solid tumor in childhood and is the most frequent neural crest tumor (NCT). More than 90% of the patients excrete high levels of vanilmandelic acid (VMA) and homovanillic acid (HVA) in the urine. Original biochemical methods for measuring these two metabolites of catecholamines employed a collection of urine for 24 hours to avoid errors related to circadian cycle variations. More recently, attempts have been made to replace the 24-hour collections by random samples (RSs). This has practical advantages particularly for young children. The objective of this study is to assess whether urinary VMA related to urinary creatinine levels can be determined reliably by the method of Pisano et al. from RSs in patients with NCT. The determination of the consumption of VMA in urine stored for prolonged periods of time was also studied. We found a good correlation between the values of metabolites of catecholamines in RSs compared with 24-hour urine collections. There was consumption of VMA in urine samples after storage. We conclude that determination of VMA in RSs of urine by Pisano's method may identify NCT production of catecholamines and that the consumption of these catecholamines is an important factor to consider in the interpretation of values obtained with stored urine specimens.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
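The Metropolis-Hastings algorithm mentioned above can be illustrated with a generic random-walk sampler. The standard-normal target below is a stand-in for a real misfit-based posterior, and nothing here reflects the paper's CMAES-informed proposals; it only shows the accept/reject mechanics.

```python
# Generic random-walk Metropolis-Hastings. logp is any log-density
# known up to a constant; a standard normal stands in for a real
# inverse-problem posterior.
import math
import random

def metropolis(logp, x0, n, step=1.0, seed=0):
    rng = random.Random(seed)
    x, lp, samples = x0, logp(x0), []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)          # random-walk proposal
        lp_cand = logp(cand)
        # accept with probability min(1, p(cand)/p(x))
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)                 # ≈ 0
var = sum(s * s for s in samples) / len(samples)   # ≈ 1
```

Starting such chains inside the low-misfit regions located by an optimizer, rather than at arbitrary points, is the kind of acceleration the abstract refers to.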
Roos, Stefan; Dicksved, Johan; Tarasco, Valentina; Locatelli, Emanuela; Ricceri, Fulvio; Grandin, Ulf; Savino, Francesco
2013-01-01
Objective To analyze the global microbial composition, using large-scale DNA sequencing of 16S rRNA genes, in faecal samples from colicky infants given L. reuteri DSM 17938 or placebo. Methods Twenty-nine colicky infants (age 10–60 days) were enrolled and randomly assigned to receive either Lactobacillus reuteri (108 cfu) or a placebo once daily for 21 days. Responders were defined as subjects with a decrease of 50% in daily crying time at day 21 compared with the starting point. The microbiota of faecal samples from day 1 and 21 were analyzed using 454 pyrosequencing. The primers Bakt_341F and Bakt_805R, complemented with 454 adapters and sample-specific barcodes, were used for PCR amplification of the 16S rRNA genes. The structure of the data was explored by using permutational multivariate analysis of variance and effects of different variables were visualized with ordination analysis. Results The infants’ faecal microbiota were composed of Proteobacteria, Firmicutes, Actinobacteria and Bacteroidetes as the four main phyla. The composition of the microbiota in infants with colic had very high inter-individual variability with Firmicutes/Bacteroidetes ratios varying from 4000 to 0.025. On an individual basis, the microbiota was, however, relatively stable over time. Treatment with L. reuteri DSM 17938 did not change the global composition of the microbiota, but when comparing responders with non-responders the group responders had an increased relative abundance of the phyla Bacteroidetes and genus Bacteroides at day 21 compared with day 0. Furthermore, the phyla composition of the infants at day 21 could be divided into three enterotype groups, dominated by Firmicutes, Bacteroidetes, and Actinobacteria, respectively. Conclusion L. reuteri DSM 17938 did not affect the global composition of the microbiota. However, the increase of Bacteroidetes in the responder infants indicated that a decrease in colicky symptoms was linked to changes of the microbiota.
ERIC Educational Resources Information Center
Steel, Jennifer L.; Herlitz, Claes A.
2005-01-01
Objective: Several studies with small and ''high risk'' samples have demonstrated that a history of childhood or adolescent sexual abuse (CASA) is associated with sexual risk behaviors (SRBs). However, few studies with large random samples from the general population have specifically examined the relationship between CASA and SRBs with a…
2012-01-01
Background Little research has focused on the relationship between health insurance and mental health in the community. The objective of this study is to determine how the basic health insurance system influences depression in Northwest China. Methods Participants were selected from 32 communities in two northwestern Chinese cities through a three-stage random sampling. Three waves of interviews were completed in April 2006, December 2006, and January 2008. The baseline survey was completed by 4,079 participants. Subsequently, 2,220 participants completed the first follow-up, and 1,888 completed the second follow-up. Depression symptoms were measured by the Center for Epidemiologic Studies Depression Scale (CES-D). Results A total of 40.0% of participants had at least one form of health insurance. The percentages of participants with severe depressive symptoms in the three waves were 21.7%, 22.0%, and 17.6%. Depressive symptoms were found to be more severe among participants without health insurance in the follow-up surveys. After adjusting for confounders, participants without health insurance were found to experience a higher risk of developing severe depressive symptoms than participants with health insurance (7 months: OR, 1.40; 95% CI, 1.09-1.82; p = 0.01; 20 months: OR, 1.89; 95% CI, 1.37-2.61; p < 0.001). Conclusion A lack of basic health insurance can dramatically increase the risk of depression based on northwestern Chinese community samples. PMID:22994864
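Odds ratios with Wald 95% confidence intervals, like those reported above, can be computed directly from a 2×2 exposure-outcome table. The counts below are hypothetical, chosen only to show the arithmetic, and are not the study's data.

```python
# Odds ratio and Wald 95% CI from a 2x2 table:
#   a exposed cases, b exposed non-cases,
#   c unexposed cases, d unexposed non-cases.
# All counts below are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g. 20 of 100 uninsured depressed vs. 10 of 100 insured
or_, lo, hi = odds_ratio_ci(20, 80, 10, 90)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR of 1.89 with CI 1.37-2.61 excluding 1, as reported for the 20-month follow-up, is read as a statistically significant excess risk; a CI that straddles 1 would not be.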
Tran, Kathy V.; Azhar, Gulrez S.; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy
2013-01-01
Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile. PMID:23778061
Turner, A; Ellertson, C; Thomas, S; Garcia, S
2003-01-01
Objectives: People in developing countries often seek medical advice for common ailments from pharmacies. As one example, pharmacists routinely diagnose and treat symptomatic sexually transmitted infections (STIs). We aimed to assess the quality of advice provided in Mexico City pharmacies by presenting hypothetical STI related syndromes and recording pharmacy attendants' suggested diagnoses and treatments. Methods: We interviewed the first available attendant in each of a 5% random sample of Mexico City's pharmacies. We inquired about the training, age, and experience of the attendant and about the typical number of clients coming for treatment of suspected STIs. After considering three hypothetical case studies, attendants recommended diagnoses, treatments, and, sometimes, physician follow up. Results: Most Mexico City "pharmacists" are actually clerks, with trained pharmacists rarely available on the premises. The average pharmacy attendant was 32 years old, with a median of 5 years' experience at that pharmacy, but very limited (if any) training. 62% reported seeing 10 or more clients with genital or vaginal infections per month. Depending on the case study, attendants provided appropriate diagnoses in 0–12% of cases, recommended appropriate treatments in 12–16% of cases, and suggested physician follow up for 26–67% of cases. Conclusions: In general, surveyed pharmacy personnel were unable to diagnose accurately or offer appropriate treatment advice when presented with classic, common STI symptoms. Given the volume of clients seeking advice from this source, training pharmacy attendants could significantly help to reduce the burden of disease associated with STIs in Mexico City. PMID:12794207
2012-01-01
Background Young children who are overweight are at increased risk of becoming obese and developing type 2 diabetes and cardiovascular disease later in life. Therefore, early intervention is critical. This paper describes the rationale, design, methodology, and sample characteristics of a 5-year cluster randomized controlled trial being conducted in eight elementary schools in rural North Carolina, United States. Methods/Design The first aim of the trial is to examine the effects of a two-phased intervention on weight status, adiposity, nutrition and exercise health behaviors, and self-efficacy in overweight or obese 2nd, 3 rd, and 4th grade children and their overweight or obese parents. The primary outcome in children is stabilization of BMI percentile trajectory from baseline to 18 months. The primary outcome in parents is a decrease in BMI from baseline to 18 months. Secondary outcomes for both children and parents include adiposity, nutrition and exercise health behaviors, and self-efficacy from baseline to 18 months. A secondary aim of the trial is to examine in the experimental group, the relationships between parents and children's changes in weight status, adiposity, nutrition and exercise health behaviors, and self-efficacy. An exploratory aim is to determine whether African American, Hispanic, and non-Hispanic white children and parents in the experimental group benefit differently from the intervention in weight status, adiposity, health behaviors, and self-efficacy. A total of 358 African American, non-Hispanic white, and bilingual Hispanic children with a BMI ≥ 85th percentile and 358 parents with a BMI ≥ 25 kg/m2 have been inducted over 3 1/2 years and randomized by cohort to either an experimental or a wait-listed control group. The experimental group receives a 12-week intensive intervention of nutrition and exercise education, coping skills training and exercise (Phase I), 9 months of continued monthly contact (Phase II) and then 6 months
Andrews, Ross M; Skull, Susan A; Byrnes, Graham B; Campbell, Donald A; Turner, Joy L; McIntyre, Peter B; Kelly, Heath A
2005-01-01
This study was undertaken to assess the uptake of influenza and pneumococcal vaccination based on provider records of the hospitalised elderly, a group at high risk of influenza and pneumococcal disease. The study used a random sample of 3,204 admissions at two Victorian teaching hospitals for patients aged 65 years or more who were discharged between 1 April 2000 and 31 March 2002. Information on whether the patient had received an influenza vaccination within the year prior to admission or pneumococcal vaccination within the previous five years was ascertained from the patient's nominated medical practitioner/vaccine provider. Vaccination records were obtained from providers for 82 per cent (2,804/2,934) of eligible subjects. Influenza vaccine coverage was 70.9 per cent (95% CI 68.9-72.9), pneumococcal coverage was 52.6 per cent (95% CI 50.4-54.8) and 46.6 per cent (95% CI 44.4-48.8) had received both vaccines. Coverage for each vaccine increased seven per cent over the two study years. For pneumococcal vaccination, there was a marked increase in 1998 coinciding with the introduction of Victoria's publicly funded program. Influenza and pneumococcal vaccine coverage in eligible hospitalised adults was similar to, but did not exceed, estimates in the general elderly population. Pneumococcal vaccination coverage reflected the availability of vaccine through Victoria's publicly funded program. A nationally funded pneumococcal vaccination program for the elderly, as announced recently, should improve coverage. However, these data highlight the need for greater awareness of pneumococcal vaccine among practitioners and for systematic recording of vaccination status, as many of these subjects will soon become eligible for revaccination.
Cassou, B; Derriennic, F; Monfort, C; Norton, J; Touranchet, A
2002-01-01
Aims: To analyse the effects of age and occupational factors on both the incidence and the disappearance of chronic neck and shoulder pain after a five year follow up period. Methods: A prospective longitudinal investigation (ESTEV) was carried out in 1990 and 1995 in seven regions of France. A random sample of male and female workers born in 1938, 1943, 1948, and 1953 was selected from the occupational physicians' files. In 1990, 21 378 subjects were interviewed (88% of those contacted), and 87% were interviewed again in 1995. Chronic neck and shoulder pain satisfying specific criteria, and psychosocial working conditions were investigated by a structured self administered questionnaire and a clinical examination. Results: Prevalence (men 7.8%, women 14.8% in 1990) and incidence (men 7.3%, women 12.5% for the period 1990–95) of chronic neck and shoulder pain increased with age, and were more frequent among women than men in every birth cohort. The disappearance rate of chronic neck and shoulder pain decreased with age. Some adverse working conditions (repetitive work under time constraints, awkward work for men, repetitive work for women) contributed to the development of these disorders, independently of age. Psychosocial factors seemed to play a role in both the development and disappearance of chronic neck and shoulder pain. Data did not show specific interactions between age and working conditions. Conclusions: The aging of the workforce appears to contribute to the widespread concern about chronic neck and shoulder pain. A better understanding of work activity regulation of older workers can open up new preventive prospects. PMID:12151610
Messiah, Antoine; Acuna, Juan M; Castro, Grettel; Rodríguez de la Vega, Pura; Vaiva, Guillaume; Shultz, James M; Neria, Yuval; De La Rosa, Mario
2014-01-01
This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2–3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of post-traumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as “direct” if the interviewee was in Haiti during the earthquake. Exposure was classified as “indirect” if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as “lower” exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake. PMID:26753105
ERIC Educational Resources Information Center
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Feeding patterns and dietary intake in a random sample of a Swedish population of insured-dogs.
Sallander, Marie; Hedhammar, Ake; Rundgren, Margareta; Lindberg, Jan E
2010-07-01
We used a validated mail and telephone questionnaire to investigate baseline data on feeding patterns and dietary intake in a random sample of 460 Swedish dogs. In 1999, purebred individuals 1-3 years old registered in the largest insurance database in Sweden completed the study. Most dogs were fed restricted amounts twice a day, and the feeding patterns were seldom changed after the age of 6 months. Typically, the main constituent of the meals was dry food [representing 69% of dry matter (DM)]. Four out of five dogs also received foods that (in descending order of the amount of energy provided) consisted of vegetable oil, meat, sour milk, bread, potatoes, pasta, lard/tallow, sausage, cheese, rice and fish. The heavier the dog (kg), the more dry dog food was consumed (g DM/d). The dry-food intake (g DM/d) increased linearly with body weight (BW, in kg): intake = -15.3 + 8.33 BW (P=0.0001; r=0.998), a clear relationship that was not observed for other commercial foods. The non-commercial part of the diet was higher in fat (13 vs. 8 g/megajoule, MJ; P=0.00001) and lower in protein (12 vs. 16 g/MJ; P=0.00001) compared with the commercial part of the diet. Six out of ten dogs were given treats, and one-fourth were given vitamin/mineral supplements (most commonly daily). Most dogs consumed diets that were nutritionally balanced. No dogs in the study consumed diets that supplied lower amounts of protein than recommended by the NRC (2006). Only two individuals (<1%) were given total diets that were lower in fat than the nutrient profiles. Few dogs consumed total diets that were lower than recommended by the NRC (2006) in calcium, phosphorus, and vitamins A, D and E (2, 1, 3, 5, and 3% of the individuals, respectively). A few individuals consumed higher levels of vitamins A and D (<1 and 4%, respectively) than recommended. Diets that deviated from recommended levels were those consisting of only table foods with no supplements (too low in vitamins and minerals) or
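The reported regression of dry-food intake on body weight can be applied directly; a minimal sketch (the function name is ours, not the authors'):

```python
def predicted_dry_food_intake(body_weight_kg):
    """Predicted dry-food intake (g dry matter/day) from body weight (kg),
    using the regression reported in the abstract: intake = -15.3 + 8.33 * BW."""
    return -15.3 + 8.33 * body_weight_kg

# A 30 kg dog would be predicted to eat about 234.6 g DM/day.
print(round(predicted_dry_food_intake(30.0), 1))  # -> 234.6
```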
Evaluation of lymphocyte levels in a random sample of 218 elderly individuals from São Paulo city
Teixeira, Daniela; Longo-Maugeri, Ieda Maria; Santos, Jair Licio Ferreira; Duarte, Yeda Aparecida Oliveira; Lebrão, Maria Lucia; Bueno, Valquiria
2011-01-01
Background Age-associated changes in the immune system cause decreased protection after vaccination and increased rates of infections and tumor development. Methods Lymphocyte percentages were compared by gender and age to establish differences between subtypes. Three mL blood samples were obtained from 218 randomly selected individuals (60-101 years old) who live in São Paulo city. Blood was lysed with Tris phosphate buffer and stained for 30 minutes with monoclonal antibodies (CD3PerCP, CD4FITC, CD8Pe, CD19Pe) for analysis by flow cytometry. Statistical analysis was by ANOVA. Results The percentage of CD4+ T cells (p-value = 0.005) and the CD4/CD8 ratio (p-value = 0.010) were lower in men, whereas the percentage of CD8+ T cells was lower (p-value = 0.002) in women; the percentage of B cells (CD19+) was similar between groups. When individuals were grouped by gender and age range and compared, a drop in CD4+ cells was observed in 75 to 79-year-old men (female: 46.1% ± 8.1% and male: 38.8% ± 10.5%; p-value = 0.023). Also, the 80 to 84-year-old group of men had a higher percentage of CD8+ cells (female: 20.8% ± 8.2%, and male: 27.2% ± 8.2%; p-value = 0.032). Low percentages of B cells were detected in men in the 75 to 79-year-old (p-value = 0.003), 85 to 89-year-old (p-value = 0.020) and older than 90 years (p-value = 0.002) age ranges. Conclusion Elderly men present with more changes in lymphocyte subsets compared to elderly women. These findings could demonstrate impairment in the immune response, since the lower CD4+ percentage in men would provide less help to B cells (also lower in men) in terms of antibody production. In addition, the increase in CD8+ cells in this group could represent the chronic inflammation observed during the aging process. PMID:23049341
ERIC Educational Resources Information Center
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
Gignoux, J; Duby, C; Barot, S
1999-03-01
Diggle's tests of spatial randomness based on empirical distributions of interpoint distances can be performed with and without edge-effect correction. We present here numerical results illustrating that tests without the edge-effect correction proposed by Diggle (1979, Biometrics 35, 87-101) have a higher power for small sample sizes than those with correction. Ignoring the correction enables detection of departure from spatial randomness with smaller samples (down to 10 points vs. 30 points for the tests with correction). These results are confirmed by an example with ecological data consisting of maps of two species of trees in a West African savanna. Tree numbers per species per map were often less than 20. For one of the species, for which maps strongly suggest an aggregated pattern, tests without edge-effect correction enabled rejection of the null hypothesis on three plots out of five vs. on only one for the tests with correction.
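A Monte Carlo test of complete spatial randomness in the spirit described above can be sketched as follows. This is a simplified illustration that uses the mean nearest-neighbour distance as its test statistic rather than Diggle's interpoint-distance statistic, applies no edge-effect correction, and tests one-sidedly for clustering; all names are ours:

```python
import math
import random

def mean_nn_distance(points):
    """Mean nearest-neighbour distance of a 2-D point pattern."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        nearest = min(math.hypot(xi - xj, yi - yj)
                      for j, (xj, yj) in enumerate(points) if j != i)
        total += nearest
    return total / len(points)

def csr_test(points, n_sim=199, seed=0):
    """Monte Carlo p-value for clustering: how often does complete spatial
    randomness (CSR) on the unit square give a mean nearest-neighbour
    distance as small as the observed one?"""
    rng = random.Random(seed)
    observed = mean_nn_distance(points)
    n = len(points)
    sims = [mean_nn_distance([(rng.random(), rng.random()) for _ in range(n)])
            for _ in range(n_sim)]
    # Rank-based p-value: proportion of simulations at least as clustered.
    below = sum(1 for s in sims if s <= observed)
    return (below + 1) / (n_sim + 1)
```

A tightly clustered pattern of 10 points yields a very small p-value, mirroring the paper's point that departures from randomness can be detected with small samples.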
Meldrum, R J; Smith, R M M; Ellis, P; Garside, J
2006-05-01
Since 1995, the publicly funded ready-to-eat food sampling and examination activities in Wales have been coordinated and structured, using a novel approach for the identification of samples and premises. The latest set of data from this surveillance system reports the results from 3391 ready-to-eat foods sampled between November 2003 and March 2005. During this seventeen-month period all samples were examined for aerobic colony count, Escherichia coli, Listeria spp., Bacillus cereus, Salmonella, Staphylococcus aureus and Listeria monocytogenes. The food types with the poorest microbiological quality were cream cakes, custard slices and egg mayonnaise sandwiches. The food type with the best microbiological quality was dried fruit. In conclusion, the results indicate that, in general terms, the ready-to-eat food types sampled and examined in this period posed little bacterial hazard to consumers.
Shabani, Fidan; Nayeri, Nahid Dehghan; Karimi, Roghiyeh; Zarei, Khadijeh; Chehrazi, Mohammad
2016-01-01
Background: Premature infants are subjected to many painful procedures during care and treatment. The aim of this study was to assess the effect of music therapy on physiological and behavioral pain responses of premature infants during and after blood sampling. Materials and Methods: This study was a cross-over clinical trial conducted on 20 infants in a hospital affiliated to Tehran University of Medical Sciences for a 5-month period in 2011. In the experimental group, Transitions music was played from 5 min before until 10 min after blood sampling. The infants' facial expressions and physiological measures were recorded from 10 min before until 10 min after sampling. All steps and measurements, except music therapy, were the same for the control group. Data were analyzed using SAS and SPSS software through analysis of variance (ANOVA) and Chi-square tests. Results: There were significant differences between the experimental and control groups in heart rate during needle extraction (P = 0.022) and during the first 5 min after sampling (P = 0.005). For the infants' sleep–wake state, the difference was significant in the second 5 min before sampling (P = 0.044), during insertion of the needle (P = 0.045), in the first 5 min after sampling (P = 0.002), and in the second 5 min after sampling (P = 0.005). There was also a significant difference in infants' facial expressions of pain in the first 5 min after sampling (P = 0.001). Conclusions: Music therapy reduces the physiological and behavioral responses of pain during and after blood sampling. PMID:27563323
Kashdan, Todd B.; Farmer, Antonina S.
2014-01-01
The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or to comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning. PMID:24512246
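The intraclass correlation used above to quantify emotion differentiation can be sketched generically. The snippet computes a consistency-type ICC (ICC(3,1)) over a moments-by-emotions rating matrix; the abstract does not state which ICC variant the authors used, so treat this as an assumed illustration:

```python
def icc_consistency(ratings):
    """ICC(3,1)-style consistency across the columns (emotion items) of a
    rows-by-columns matrix (moments x emotions). Values near 1 mean the
    emotion ratings move in lockstep, i.e. LOW emotion differentiation."""
    n = len(ratings)      # moments (rows)
    k = len(ratings[0])   # emotion items (columns)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Two emotions rated at three moments, moving in perfect lockstep:
print(icc_consistency([[1, 2], [2, 3], [3, 4]]))  # -> 1.0
```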
Cohen, Michael H; Sandler, Lynne; Hrbek, Andrea; Davis, Roger B; Eisenberg, David M
2005-01-01
This research documents policies in 39 randomly selected academic medical centers integrating complementary and alternative medical (CAM) services into conventional care. Twenty-three offered CAM services, most commonly acupuncture, massage, dietary supplements, mind-body therapies, and music therapy. None had written policies concerning credentialing practices or malpractice liability. Only 10 reported a written policy governing use of dietary supplements, although three sold supplements in inpatient formularies, one in the psychiatry department, and five in outpatient pharmacies. Thus, few academic medical centers have sufficiently integrated CAM services into conventional care by developing consensus-written policies governing credentialing, malpractice liability, and dietary supplement use.
Mukherjee, Shubhabrata; Walter, Stefan; Kauwe, John S.K.; Saykin, Andrew J.; Bennett, David A.; Larson, Eric B.; Crane, Paul K.; Glymour, M. Maria
2015-01-01
Observational research shows that higher body mass index (BMI) increases Alzheimer’s disease (AD) risk, but it is unclear whether this association is causal. We applied genetic variants that predict BMI in Mendelian Randomization analyses, an approach that is not biased by reverse causation or confounding, to evaluate whether higher BMI increases AD risk. We evaluated individual level data from the AD Genetics Consortium (ADGC: 10,079 AD cases and 9,613 controls), the Health and Retirement Study (HRS: 8,403 participants with algorithm-predicted dementia status) and published associations from the Genetic and Environmental Risk for AD consortium (GERAD1: 3,177 AD cases and 7,277 controls). No evidence from individual SNPs or polygenic scores indicated BMI increased AD risk. Mendelian Randomization effect estimates per BMI point (95% confidence intervals) were: ADGC OR=0.95 (0.90, 1.01); HRS OR=1.00 (0.75, 1.32); GERAD1 OR=0.96 (0.87, 1.07). One subscore (cellular processes not otherwise specified) unexpectedly predicted lower AD risk. PMID:26079416
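A single-instrument Mendelian randomization estimate can be sketched with the Wald ratio; the consortium analyses summarized above are more elaborate, and the input numbers below are made up purely for illustration:

```python
import math

def wald_ratio_or(beta_snp_bmi, beta_snp_ad_logodds):
    """Single-instrument Mendelian randomization (Wald ratio):
    causal log-odds of AD per BMI point = beta(SNP -> AD) / beta(SNP -> BMI),
    reported here as an odds ratio."""
    return math.exp(beta_snp_ad_logodds / beta_snp_bmi)

# Illustrative (made-up) numbers: a SNP raising BMI by 0.1 units and
# lowering AD log-odds by 0.005 implies OR ~ 0.95 per BMI point.
print(round(wald_ratio_or(0.1, -0.005), 2))  # -> 0.95
```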
Weinhold, Jan; Hunger, Christina; Bornhäuser, Annette; Link, Leoni; Rochon, Justine; Wild, Beate; Schweitzer, Jochen
2013-10-01
The study examined the efficacy of nonrecurring family constellation seminars on psychological health. We conducted a monocentric, single-blind, stratified, and balanced randomized controlled trial (RCT). After choosing their roles for participating in a family constellation seminar as either active participant (AP) or observing participant (OP), 208 adults (M = 48 years, SD = 10; 79% women) from the general population were randomly allocated to the intervention group (IG; 3-day family constellation seminar; 64 AP, 40 OP) or a wait-list control group (WLG; 64 AP, 40 OP). It was predicted that family constellation seminars would improve psychological functioning (Outcome Questionnaire OQ-45.2) at 2-week and 4-month follow-ups. In addition, we assessed the impact of family constellation seminars on psychological distress and motivational incongruence. The IG showed significantly improved psychological functioning (d = 0.45 at 2-week follow-up, p = .003; d = 0.46 at 4-month follow-up, p = .003). Results were confirmed for psychological distress and motivational incongruence. No adverse events were reported. This RCT provides evidence for the efficacy of family constellation in a nonclinical population. The implications of the findings are discussed.
Jackson, George L; Weinberger, Morris; Kirshner, Miriam A; Stechuchak, Karen M; Melnyk, Stephanie D; Bosworth, Hayden B; Coffman, Cynthia J; Neelon, Brian; Van Houtven, Courtney; Gentry, Pamela W; Morris, Isis J; Rose, Cynthia M; Taylor, Jennifer P; May, Carrie L; Han, Byungjoo; Wainwright, Christi; Alkon, Aviel; Powell, Lesa; Edelman, David
2016-09-01
Despite the availability of efficacious treatments, only half of patients with hypertension achieve adequate blood pressure (BP) control. This paper describes the protocol and baseline subject characteristics of a 2-arm, 18-month randomized clinical trial of titrated disease management (TDM) for patients with pharmaceutically treated hypertension for whom systolic blood pressure (SBP) is not controlled (≥140 mmHg for non-diabetic or ≥130 mmHg for diabetic patients). The trial is being conducted among patients of four clinic locations associated with a Veterans Affairs Medical Center. An intervention arm has a TDM strategy in which patients' hypertension control at baseline, 6, and 12 months determines the resource intensity of disease management. Intensity levels include: a low-intensity strategy utilizing a licensed practical nurse to provide bi-monthly, non-tailored behavioral support calls to patients whose SBP comes under control; a medium-intensity strategy utilizing a registered nurse to provide monthly tailored behavioral support telephone calls plus home BP monitoring; and a high-intensity strategy utilizing a pharmacist to provide monthly tailored behavioral support telephone calls, home BP monitoring, and pharmacist-directed medication management. Control arm patients receive the low-intensity strategy regardless of BP control. The primary outcome is SBP. The 385 randomized veterans (192 intervention; 193 control) are predominantly older (mean age 63.5 years) men (92.5%); 61.8% are African American, and the mean baseline SBP for all subjects is 143.6 mmHg. This trial will determine if a disease management program that is titrated by matching the intensity of resources to patients' BP control leads to superior outcomes compared to a low-intensity management strategy.
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2014 CFR
2014-07-01
... maximum area of 1 square meter and a minimum dimension of 10 centimeters. (b) Measure the length and width... centimeters in the length and the width measurements to the nearest centimeter. (c) For each 1 square meter... centimeter sample. (1) Orient the 1 square meter surface area so that, when you are facing the area,...
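The regulation's full procedure is truncated above; as a hypothetical illustration only (not the regulatory procedure), random selection of 10 cm cells on a two-dimensional square grid might look like:

```python
import random

def random_grid_samples(length_cm, width_cm, n_samples, cell_cm=10, seed=None):
    """Pick n_samples distinct cells, by random number generation, from a
    two-dimensional square grid laid over a length_cm x width_cm surface.
    The 10 cm cell size follows the regulation's minimum dimension; the
    function itself is an illustrative sketch."""
    rng = random.Random(seed)
    # Enumerate the lower-left corner of every grid cell.
    cells = [(x, y)
             for x in range(0, length_cm, cell_cm)
             for y in range(0, width_cm, cell_cm)]
    return rng.sample(cells, n_samples)

# Three random 10 cm cells on a 1 m x 1 m (100 cm x 100 cm) surface:
print(random_grid_samples(100, 100, 3, seed=1))
```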
Abdel Rahman, Mohamed F.; Hashad, Ingy M.; Abdel-Maksoud, Sahar M.; Farag, Nabil M.; Abou-Aisha, Khaled
2012-01-01
Aim: The aim of this study was to detect endothelial nitric oxide synthase (eNOS) Glu298Asp gene variants in a random sample of the Egyptian population, compare it with those from other populations, and attempt to correlate these variants with serum levels of nitric oxide (NO). The association of eNOS genotypes or serum NO levels with the incidence of acute myocardial infarction (AMI) was also examined. Methods: One hundred one unrelated healthy subjects and 104 unrelated AMI patients were recruited randomly from the 57357 Hospital and intensive care units of El Demerdash Hospital and National Heart Institute, Cairo, Egypt. eNOS genotypes were determined by polymerase chain reaction–restriction fragment length polymorphism. Serum NO was determined spectrophotometrically. Results: The genotype distribution of eNOS Glu298Asp polymorphism determined for our sample was 58.42% GG (wild type), 33.66% GT, and 7.92% TT genotypes while allele frequencies were 75.25% and 24.75% for G and T alleles, respectively. No significant association between serum NO and specific eNOS genotype could be detected. No significant correlation between eNOS genotype distribution or allele frequencies and the incidence of AMI was observed. Conclusion: The present study demonstrated the predominance of the homozygous genotype GG over the heterozygous GT and homozygous TT in random samples of Egyptian population. It also showed the lack of association between eNOS genotypes and mean serum levels of NO, as well as the incidence of AMI. PMID:22731641
Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M
2010-12-01
A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
Mortimer, James A; Ding, Ding; Borenstein, Amy R; DeCarli, Charles; Guo, Qihao; Wu, Yougui; Zhao, Qianhua; Chu, Shugang
2012-01-01
Physical exercise has been shown to increase brain volume and improve cognition in randomized trials of non-demented elderly. Although greater social engagement was found to reduce dementia risk in observational studies, randomized trials of social interventions have not been reported. A representative sample of 120 elderly from Shanghai, China was randomized to four groups (Tai Chi, Walking, Social Interaction, No Intervention) for 40 weeks. Two MRIs were obtained, one before the intervention period, the other after. A neuropsychological battery was administered at baseline, 20 weeks, and 40 weeks. Comparison of changes in brain volumes in intervention groups with the No Intervention group were assessed by t-tests. Time-intervention group interactions for neuropsychological measures were evaluated with repeated-measures mixed models. Compared to the No Intervention group, significant increases in brain volume were seen in the Tai Chi and Social Intervention groups (p < 0.05). Improvements also were observed in several neuropsychological measures in the Tai Chi group, including the Mattis Dementia Rating Scale score (p = 0.004), the Trailmaking Test A (p = 0.002) and B (p = 0.0002), the Auditory Verbal Learning Test (p = 0.009), and verbal fluency for animals (p = 0.01). The Social Interaction group showed improvement on some, but fewer neuropsychological indices. No differences were observed between the Walking and No Intervention groups. The findings differ from previous clinical trials in showing increases in brain volume and improvements in cognition with a largely non-aerobic exercise (Tai Chi). In addition, intellectual stimulation through social interaction was associated with increases in brain volume as well as with some cognitive improvements.
Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia
2017-01-01
Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities from the above four models. Commonly used model performance metrics (Area under ROC (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found that Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
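The ensemble forecast described above (averaging the predicted probabilities of the four models) can be sketched as follows; the example probabilities are made up:

```python
def ensemble_average(prob_maps):
    """Average per-cell predicted probabilities from several SDMs
    (e.g. TreeNet, Random Forest, CART, Maxent) into one ensemble forecast."""
    n_models = len(prob_maps)
    n_cells = len(prob_maps[0])
    return [sum(m[i] for m in prob_maps) / n_models for i in range(n_cells)]

# Four models' predicted probabilities for three grid cells (illustrative):
preds = [
    [0.9, 0.2, 0.5],   # TreeNet
    [0.8, 0.1, 0.6],   # Random Forest
    [0.7, 0.3, 0.4],   # CART
    [0.6, 0.2, 0.5],   # Maxent
]
print([round(p, 2) for p in ensemble_average(preds)])  # -> [0.75, 0.2, 0.5]
```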
Thijs, Lutgarde; Staessen, Jan; Amery, Antoon; Bruaux, Pierre; Buchet, Jean-Pierre; Claeys, FranÇoise; De Plaen, Pierre; Ducoffre, Geneviève; Lauwerys, Robert; Lijnen, Paul; Nick, Laurence; Remy, Annie Saint; Roels, Harry; Rondia, Désiré; Sartor, Francis
1992-01-01
This report investigated the distribution of serum zinc and the factors determining serum zinc concentration in a large random population sample. The 1977 participants (959 men and 1018 women), 20–80 years old, constituted a stratified random sample of the population of four Belgian districts, representing two areas with low and two with high environmental exposure to cadmium. For each exposure level, a rural and an urban area were selected. The serum concentration of zinc, frequently used as an index for zinc status in human subjects, was higher in men (13.1 μmole/L, range 6.5–23.0 μmole/L) than in women (12.6 μmole/L, range 6.3–23.2 μmole/L). In men, 20% of the variance of serum zinc was explained by age (linear and squared term, R = 0.29), diurnal variation (r = 0.29), and total cholesterol (r = 0.16). After adjustment for these covariates, a negative relationship was observed between serum zinc and both blood (r = −0.10) and urinary cadmium (r = −0.14). In women, 11% of the variance could be explained by age (linear and squared term, R = 0.15), diurnal variation in serum zinc (r = 0.27), creatinine clearance (r = −0.11), log γ-glutamyltranspeptidase (r = 0.08), cholesterol (r = 0.07), contraceptive pill intake (r = −0.07), and log serum ferritin (r = 0.06). Before and after adjustment for significant covariates, serum zinc was, on average, lowest in the two districts where the body burden of cadmium, as assessed by urinary cadmium excretion, was highest. These results were not altered when subjects exposed to heavy metals at work were excluded from analysis. PMID:1486857
Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J.
2016-01-01
Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses. PMID:27399970
NASA Astrophysics Data System (ADS)
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR-measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e., mature, young, and combined mature and young) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest strata in an RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
Skurnik, Geraldine; Zera, Chloe A.; Reforma, Liberty G.; Levkoff, Sue E.; Seely, Ellen W.
2016-01-01
Objective The postpartum period is a window of opportunity for diabetes prevention in women with recent gestational diabetes (GDM), but recruitment for clinical trials during this period of life is a major challenge. Methods We adapted a social-ecologic model to develop a multi-level recruitment strategy at the macro (high or institutional level), meso (mid or provider level), and micro (individual) levels. Our goal was to recruit 100 women with recent GDM into the Balance after Baby randomized controlled trial over a 17-month period. Participants were asked to attend three in-person study visits at 6 weeks, 6 months, and 12 months postpartum. They were randomized into a control arm or a web-based intervention arm at the end of the baseline visit at 6 weeks postpartum. At the end of the recruitment period, we compared population characteristics of our enrolled subjects to the entire population of women with GDM delivering at Brigham and Women's Hospital (BWH). Results We successfully recruited 107 of 156 (69%) women assessed for eligibility, with the majority (92) recruited during pregnancy at a mean 30 (SD±5) weeks of gestation, and 15 recruited postpartum, at a mean 2 (SD±3) weeks postpartum. 78 subjects attended the initial baseline visit, and 75 subjects were randomized into the trial at a mean 7 (SD±2) weeks postpartum. The recruited subjects were similar in age and race/ethnicity to the total population of 538 GDM deliveries at BWH over the 17-month recruitment period. Conclusions Our multilevel approach allowed us to successfully meet our recruitment goal and recruit a representative sample of women with recent GDM. We believe that our most successful strategies included using a dedicated in-person recruiter, integrating recruitment into clinical flow, allowing for flexibility in recruitment, minimizing barriers to participation, and using an opt-out strategy with providers. Although the majority of women were recruited while pregnant, women recruited
Rosanowski, S M; Cogger, N; Rogers, C W; Benschop, J; Stevenson, M A
2012-12-01
We conducted a cross-sectional survey to determine the demographic characteristics of non-commercial horses in New Zealand. A sampling frame of properties with non-commercial horses was derived from the national farms database, AgriBase™. Horse properties were stratified by property size and a generalised random-tessellated stratified (GRTS) sampling strategy was used to select properties (n=2912) to take part in the survey. The GRTS sampling design allowed for the selection of properties that were spatially balanced relative to the distribution of horse properties throughout the country. The registered decision maker of the property, as identified in AgriBase™, was sent a questionnaire asking them to describe the demographic characteristics of horses on the property, including the number and reason for keeping horses, as well as information about other animals kept on the property and the proximity of boundary neighbours with horses. The response rate to the survey was 38% (1044/2912) and the response rate was not associated with property size or region. A total of 5322 horses were kept for recreation, competition, racing, breeding, stock work, or as pets. The reasons for keeping horses and the number and class of horses varied significantly between regions and by property size. Of the properties sampled, less than half kept horses that could have been registered with Equestrian Sports New Zealand or either of the racing codes. Of the respondents that reported knowing whether their neighbours had horses, 58.6% (455/776) of properties had at least one boundary neighbour that kept horses. The results of this study have important implications for New Zealand, which has an equine population that is naïve to many equine diseases considered endemic worldwide. The ability to identify, and apply accurate knowledge of the population at risk to infectious disease control strategies would lead to more effective strategies to control and prevent disease spread during an
Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang
2016-01-01
Background Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at rural homes. Little is known about the impact of migration-related separation on mental health of these rural migrants in urban China. Methods Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18–45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. Results 16.46% of couples were separated from their spouses (spouse separation only), and 25.81% of parents were separated from their children (child separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Conclusion Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress. PMID:27124768
Garboś, Sławomir; Święcicka, Dorota
2015-11-01
The random daytime (RDT) sampling method was used for the first time in the assessment of average weekly exposure to uranium through drinking water in a large water supply zone. The data set of uranium concentrations determined in 106 RDT samples collected in three runs from the water supply zone in Wroclaw (Poland) could not be simply described by a normal or log-normal distribution. Therefore, a numerical method designed for the detection and calculation of bimodal distributions was applied. The two extracted distributions, containing data from the summer season of 2011 and the winter season of 2012 (nI=72) and from the summer season of 2013 (nII=34), allowed the mean U concentrations in drinking water to be estimated: 0.947 μg/L and 1.23 μg/L, respectively. As the removal efficiency of uranium during the applied treatment process is negligible, the increase in uranium concentration can be explained by a higher U concentration in the surface-infiltration water used for the production of drinking water. During the summer season of 2013, heavy rains were observed in the Lower Silesia region, causing floods over the territory of the entire region. Fluctuations in uranium concentrations in surface-infiltration water can be attributed to releases of uranium from specific sources - migration from phosphate fertilizers and leaching from mineral deposits. Thus, exposure to uranium through drinking water may increase during extreme rainfall events. The average chronic weekly intakes of uranium through drinking water, estimated on the basis of the central values of the extracted normal distributions, accounted for 3.2% and 4.1% of the tolerable weekly intake.
Schiamberg, Lawrence B; Oehmke, James; Zhang, Zhenmei; Barboza, Gia E; Griffore, Robert J; Von Heydrich, Levente; Post, Lori A; Weatherill, Robin P; Mastin, Teresa
2012-01-01
Few empirical studies have focused on elder abuse in nursing home settings. The present study investigated the prevalence and risk factors of staff physical abuse among elderly individuals receiving nursing home care in Michigan. A random sample of 452 adults with elderly relatives, older than 65 years, and in nursing home care completed a telephone survey regarding elder abuse and neglect experienced by this elder family member in the care setting. Some 24.3% of respondents reported at least one incident of physical abuse by nursing home staff. A logistic regression model was used to estimate the importance of various risk factors in nursing home abuse. Limitations in activities of daily living (ADLs), older adult behavioral difficulties, and previous victimization by nonstaff perpetrators were associated with a greater likelihood of physical abuse. Interventions that address these risk factors may be effective in reducing older adult physical abuse in nursing homes. Attention to the contextual or ecological character of nursing home abuse is essential, particularly in light of the findings of this study.
Szklo, André Salem; Almeida, Liz Maria de; Figueiredo, Valeska; Lozana, José de Azevedo; Azevedo e Silva Mendonça, Gulnar; Moura, Lenildo de; Szklo, Moysés
2007-04-01
This article examines region-specific relations between prevalence of protection against sunlight and socio-demographic and behavioral variables in Brazil. Data were derived from a cross-sectional population-based random sample. Information on sunlight exposure was available for a total of 16,999 individuals 15 years and older. Comparing the North and South of Brazil, crude differences between women and men in the use of "sunscreen" and "protective headwear" were +10.9% (95%CI: 7.1; 14.6) and -11.6% (95%CI: -17.0; -6.3) in the North and +21.3% (95%CI: 17.7; 24.9) and -16.0% (95%CI: -20.2; -12.5) in the South. Adjusted differences by selected variables confirmed that women use more sunscreen protection and less headwear protection as compared to men in both the North and South, but the difference was not homogeneous by region (interaction term p value < 0.05).
Pita-Fernández, Salvador; González-Martín, Cristina; Seoane-Pillado, Teresa; López-Calviño, Beatriz; Pértega-Díaz, Sonia; Gil-Guillén, Vicente
2015-01-01
Background Research is needed to determine the prevalence and variables associated with the diagnosis of flatfoot, and to evaluate the validity of three footprint analysis methods for diagnosing flatfoot, using clinical diagnosis as a benchmark. Methods We conducted a cross-sectional study of a population-based random sample ≥40 years old (n = 1002) in A Coruña, Spain. Anthropometric variables, Charlson’s comorbidity score, and podiatric examination (including measurement of Clarke’s angle, the Chippaux-Smirak index, and the Staheli index) were used for comparison with a clinical diagnosis method using a podoscope. Multivariate regression was performed. Informed patient consent and ethical review approval were obtained. Results Prevalence of flatfoot in the left and right footprint, measured using the podoscope, was 19.0% and 18.9%, respectively. Variables independently associated with flatfoot diagnosis were age (OR 1.07), female gender (OR 3.55) and BMI (OR 1.39). The area under the receiver operating characteristic curve (AUC) showed that Clarke’s angle is highly accurate in predicting flatfoot (AUC 0.94), followed by the Chippaux-Smirak (AUC 0.83) and Staheli (AUC 0.80) indices. Sensitivity values were 89.8% for Clarke’s angle, 94.2% for the Chippaux-Smirak index, and 81.8% for the Staheli index, with respective positive likelihood ratios of 9.7, 2.1, and 2.0. Conclusions Age, gender, and BMI were associated with a flatfoot diagnosis. The indices studied are suitable for diagnosing flatfoot in adults, especially Clarke’s angle, which is highly accurate for flatfoot diagnosis in this population. PMID:25382154
Bachem, Rahel; Maercker, Andreas
2016-09-01
Adjustment disorders (AjD) are among the most frequent mental disorders yet often remain untreated. The high prevalence, comparatively mild symptom impairment, and transient nature make AjD a promising target for low-threshold self-help interventions. Bibliotherapy represents a potential treatment for AjD problems. This study investigates the effectiveness of a cognitive behavioral self-help manual specifically directed at alleviating AjD symptoms in a homogenous sample of burglary victims. Participants with clinical or subclinical AjD symptoms following experience of burglary were randomized to an intervention group (n = 30) or waiting-list control group (n = 24). The new explicit stress response syndrome model for diagnosing AjD was applied. Participants received no therapist support and assessments took place at baseline, after the one-month intervention, and at three-month follow-up. Based on completer analyses, group by time interactions indicated that the intervention group showed more improvement in AjD symptoms of preoccupation and in post-traumatic stress symptoms. Post-intervention between-group effect sizes ranged from Cohen's d = .17 to .67 and the proportion of participants showing reliable change was consistently higher in the intervention group than in the control group. Engagement with the self-help manual was high: 87% of participants had worked through at least half the manual. This is the first published RCT of a bibliotherapeutic self-help intervention for AjD problems. The findings provide evidence that a low-threshold self-help intervention without therapist contact is a feasible and effective treatment for symptoms of AjD.
Stergiopoulos, Vicky; Gozdzik, Agnes; Misir, Vachan; Skosireva, Anna; Connelly, Jo; Sarang, Aseefa; Whisler, Adam; Hwang, Stephen W.; O’Campo, Patricia; McKenzie, Kwame
2015-01-01
Housing First (HF) is being widely disseminated in efforts to end homelessness among homeless adults with psychiatric disabilities. This study evaluates the effectiveness of HF with Intensive Case Management (ICM) among ethnically diverse homeless adults in an urban setting. 378 participants were randomized to HF with ICM or treatment-as-usual (TAU) in Toronto (Canada), and followed for 24 months. Measures of effectiveness included housing stability, physical (EQ5D-VAS) and mental (CSI, GAIN-SS) health, social functioning (MCAS), quality of life (QoLI20), and health service use. Two-thirds of the sample (63%) was from racialized groups and half (50%) were born outside Canada. Over the 24 months of follow-up, HF participants spent a significantly greater percentage of time in stable residences compared to TAU participants (75.1% 95% CI 70.5 to 79.7 vs. 39.3% 95% CI 34.3 to 44.2, respectively). Similarly, community functioning (MCAS) improved significantly from baseline in HF compared to TAU participants (change in mean difference = +1.67 95% CI 0.04 to 3.30). There was a significant reduction in the number of days spent experiencing alcohol problems among the HF compared to TAU participants at 24 months (ratio of rate ratios = 0.47 95% CI 0.22 to 0.99) relative to baseline, a reduction of 53%. Although the number of emergency department visits and days in hospital over 24 months did not differ significantly between HF and TAU participants, fewer HF participants compared to TAU participants had 1 or more hospitalizations during this period (70.4% vs. 81.1%, respectively; P=0.044). Compared to non-racialized HF participants, racialized HF participants saw an increase in the amount of money spent on alcohol (change in mean difference = $112.90 95% CI 5.84 to 219.96) and a reduction in physical community integration (ratio of rate ratios = 0.67 95% CI 0.47 to 0.96) from baseline to 24 months. Secondary analyses found a significant reduction in the number of days
Berman, Anne H.; Liu, Bojing; Ullman, Sara; Jadbäck, Isabel; Engström, Karin
2016-01-01
Background The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11–16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. Methods A random population sample consisting of 600 children aged 11–16, 100 per age group and one of their parents (N = 1200), were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK and total, continuous scores were evaluated using Bland-Altman plots. Results Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences
Random Selection for Drug Screening
Center for Human Reliability Studies
2007-05-01
Simple random sampling is generally the starting point for a random sampling process. This sampling technique ensures that each individual within a group (population) has an equal chance of being selected. There are a variety of ways to implement random sampling in a practical situation.
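The equal-chance property described above can be sketched in a few lines of Python; the pool of employee IDs and the function name are illustrative, not part of the original report. `random.sample` draws without replacement, so every member of the population has the same probability (here 5/100) of being selected:

```python
import random

def simple_random_sample(population, k, seed=None):
    """Draw k members without replacement; each member is equally likely to be chosen."""
    rng = random.Random(seed)
    return rng.sample(population, k)

# hypothetical population of 100 employee IDs
employees = [f"EMP-{i:03d}" for i in range(1, 101)]
selected = simple_random_sample(employees, 5, seed=42)
print(selected)  # 5 distinct IDs; with a fixed seed the draw is reproducible
```

Fixing the seed makes a draw auditable after the fact, which matters in settings such as drug-screening selection where the process may need to be defended as fair.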
Lin, Yifei; Yin, Senlin; Lai, Sike; Tang, Ji; Huang, Jin; Du, Liang
2016-10-01
As the relationship between physicians and patients in China has deteriorated in recent years, medical conflicts have become more frequent, and physicians, to a certain extent, share in the responsibility. Awareness of medical professionalism and the factors that influence it can help in taking targeted measures to alleviate this tension. Through a combination of physicians' self-assessment and patients' assessment in ambulatory care clinics in Chengdu, this research aims to evaluate the state of medical professionalism in hospitals and explore its influence factors, hoping to provide decision-making references for improving this grim situation. From February to March 2013, a cross-sectional study was conducted in 2 tier 3 hospitals, 5 tier 2 hospitals, and 10 community hospitals through a stratified-random sampling method on physicians and patients, at a ratio of 1/5. Questionnaires were adapted from a pilot study. A total of 382 physicians and 1910 patients were matched and surveyed. Regarding medical professionalism, the self-assessment scores for physicians were 85.18 ± 7.267 out of 100 and the patient-assessment scores were 57.66 ± 7.043 out of 70. The influence factors of self-assessment were physicians' working years (P = 0.003) and patients' complaints (P = 0.006), whereas the influence factors of patient-assessment were patients' ages (P = 0.001), their physicians' working years (P < 0.01), and satisfaction with the payment mode (P = 0.006). Higher self-assessment of medical professionalism was associated with physicians having more working years and no complaint history; higher patient-assessment was associated with older patients, physicians with more working years, and higher satisfaction with the payment mode. The government should
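The stratified-random design used in studies like the one above (units grouped into strata, then a fixed fraction drawn from each) can be sketched as follows. The physician roster, the tier labels, and the 20% fraction are all made-up illustrations of proportional allocation, not the study's actual data:

```python
import random

def stratified_sample(units, stratum_of, fraction, seed=None):
    """Draw the same fraction from every stratum (proportional allocation)."""
    rng = random.Random(seed)
    strata = {}
    for u in units:                       # group units by their stratum label
        strata.setdefault(stratum_of(u), []).append(u)
    sample = []
    for members in strata.values():       # sample each stratum independently
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# hypothetical physician roster keyed by hospital tier
physicians = ([("tier3", i) for i in range(50)]
              + [("tier2", i) for i in range(125)]
              + [("community", i) for i in range(250)])
chosen = stratified_sample(physicians, lambda u: u[0], 0.20, seed=1)
print(len(chosen))  # 10 + 25 + 50 = 85 physicians across the three strata
```

Stratifying first guarantees every tier is represented in proportion to its size, which a single simple random draw over the pooled roster would only achieve on average.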
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in the theoretical formulations of personalized medicine, because they not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
Randomized SUSAN edge detector
NASA Astrophysics Data System (ADS)
Qu, Zhi-Guo; Wang, Ping; Gao, Ying-Hui; Wang, Peng
2011-11-01
A speed-up technique for the SUSAN edge detector based on random sampling is proposed. Instead of sliding the mask pixel by pixel over an image as the SUSAN edge detector does, the proposed scheme places the mask on randomly chosen pixels to find edges in the image; we hereby name it the randomized SUSAN edge detector (R-SUSAN). Specifically, the R-SUSAN edge detector adopts three approaches in the framework of random sampling to accelerate the SUSAN edge detector: procedure integration of response computation and nonmaxima suppression, reduction of unnecessary processing for obvious non-edge pixels, and early termination. Experimental results demonstrate the effectiveness of the proposed method.
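The core idea — evaluate the USAN response at randomly chosen pixels under a visit budget instead of scanning every pixel — can be illustrated with a toy sketch. This is not Smith and Brady's full detector or the R-SUSAN algorithm itself: there is no Gaussian similarity weighting and no nonmaxima suppression, a 3x3 square mask stands in for the circular one, and the image, thresholds, and function names are all illustrative:

```python
import random

def usan_area(img, x, y, t=10):
    """Count 3x3-mask pixels whose brightness is within t of the nucleus (the USAN area)."""
    h, w = len(img), len(img[0])
    return sum(
        1
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if 0 <= x + dx < w and 0 <= y + dy < h
        and abs(img[y + dy][x + dx] - img[y][x]) <= t
    )

def randomized_edges(img, budget, g=7, seed=None):
    """Visit at most `budget` randomly chosen interior pixels; flag those with USAN area < g."""
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    coords = [(x, y) for y in range(1, h - 1) for x in range(1, w - 1)]
    rng.shuffle(coords)                      # random placement instead of a raster scan
    return [(x, y) for x, y in coords[:budget] if usan_area(img, x, y) < g]

# toy 8x8 image: a vertical step edge between columns 3 and 4
img = [[0] * 4 + [100] * 4 for _ in range(8)]
edges = randomized_edges(img, budget=36, seed=0)  # full budget visits every interior pixel
```

With a smaller budget the detector returns a random subset of the edge pixels at proportionally lower cost, which is the trade-off the randomized scheme exploits.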
NASA Astrophysics Data System (ADS)
Forkert, Nils Daniel; Fiehler, Jens
2015-03-01
The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require a previous training and, until now, it is ambiguous how many subjects are required for this, which is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from using all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and effort required for data acquisition and preparation.
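The Dice coefficient used above to score agreement between predicted and known tissue outcome has a compact definition: twice the overlap divided by the total size of the two masks. A minimal sketch on binary voxel vectors (the example masks are made up, not patient data):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal length."""
    a, b = sum(pred), sum(truth)
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    # convention: two empty masks agree perfectly
    return 1.0 if a + b == 0 else 2.0 * inter / (a + b)

pred  = [1, 1, 0, 0, 1, 0]   # hypothetical predicted infarct mask
truth = [1, 0, 0, 0, 1, 1]   # hypothetical final infarct mask
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```

A value of 1.0 means the predicted and final infarct masks coincide exactly; 0.0 means no overlap, so the reported rise from 0.45 to 0.53 reflects a modest but real gain in spatial agreement.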
Investigating the Randomness of Numbers
ERIC Educational Resources Information Center
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
ERIC Educational Resources Information Center
Wienke, Chris; Hill, Gretchen J.
2009-01-01
Prior research indicates that the married enjoy higher levels of well-being than the unmarried, including unmarried cohabiters. Yet, comparisons of married and unmarried persons routinely exclude partnered gays and lesbians. Using a large probability sample, this study assessed how the well-being of partnered gays and lesbians (282) compares with…
ERIC Educational Resources Information Center
Michaelides, Michalis P.; Haertel, Edward H.
2014-01-01
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Random Selection for Drug Screening
Center for Human Reliability Studies
2007-05-01
Sampling is the process of choosing some members out of a group or population. Probability sampling, or random sampling, is the process of selecting members by chance with a known probability of each individual being chosen.
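One probability-sampling design that makes the "known probability" explicit is Bernoulli sampling: each member is included independently with the same fixed probability. A minimal sketch, with an illustrative pool size and inclusion rate:

```python
import random

def bernoulli_sample(population, p, seed=None):
    """Include each member independently; every individual's inclusion probability is exactly p."""
    rng = random.Random(seed)
    return [m for m in population if rng.random() < p]

pool = list(range(10000))                     # hypothetical roster of 10,000 members
picked = bernoulli_sample(pool, 0.05, seed=7)
# The realized sample size varies randomly around the expected p * N = 500.
```

Unlike a fixed-size simple random sample, the Bernoulli draw has a random sample size, but the per-individual inclusion probability is known exactly, which is what design-based estimators require.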
Sultana, Farhana; English, Dallas R; Simpson, Julie A; Drennan, Kelly T; Mullins, Robyn; Brotherton, Julia M L; Wrede, C David; Heley, Stella; Saville, Marion; Gertig, Dorota M
2016-07-15
We conducted a randomized controlled trial to determine whether HPV self-sampling increases participation in cervical screening by never- and under-screened (not screened in past 5 years) women when compared with a reminder letter for a Pap test. Never- or under-screened Victorian women aged 30-69 years, not pregnant and with no prior hysterectomy were eligible. Within each stratum (never-screened and under-screened), we randomly allocated 7,140 women to self-sampling and 1,020 to Pap test reminders. The self-sampling kit comprised a nylon tipped flocked swab enclosed in a dry plastic tube. The primary outcome was participation, as indicated by returning a swab or undergoing a Pap test; the secondary outcome, for women in the self-sampling arm with a positive HPV test, was undergoing appropriate clinical investigation. The Roche Cobas® 4800 test was used to measure presence of HPV DNA. Participation was higher for the self-sampling arm: 20.3 versus 6.0% for never-screened women (absolute difference 14.4%, 95% CI: 12.6-16.1%, p < 0.001) and 11.5 versus 6.4% for under-screened women (difference 5.1%, 95% CI: 3.4-6.8%, p < 0.001). Of the 1,649 women who returned a swab, 45 (2.7%) were positive for HPV16/18 and 95 (5.8%) were positive for other high-risk HPV types. Within 6 months, 28 (62.2%) women positive for HPV16/18 had colposcopy as recommended and nine (20%) had cytology only. Of women positive for other high-risk HPV types, 78 (82.1%) had a Pap test as recommended. HPV self-sampling improves participation in cervical screening for never- and under-screened women and most women with HPV detected have appropriate clinical investigation.
Moser, Barry Kurt; Halabi, Susan
2013-01-01
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
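For the simplest special case — a two-arm 1:1 comparison with exponential event times — the relationship between effect size, power, and the required number of events can be sketched with Schoenfeld's classical approximation. This is a stand-in for, not a reproduction of, the paper's general matrix formulation for factorial designs; the stdlib-only normal quantile below is a bisection sketch rather than a library call:

```python
from math import erf, log, sqrt

def z_quantile(p):
    """Inverse standard-normal CDF by bisection on Phi (stdlib-only sketch)."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if phi(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld's approximate event count: two-sided test, 1:1 allocation."""
    za, zb = z_quantile(1 - alpha / 2), z_quantile(power)
    return 4.0 * (za + zb) ** 2 / log(hazard_ratio) ** 2

# Detecting a hazard ratio of 0.667 at 80% power, alpha = 0.05:
print(round(required_events(0.667)))  # roughly 190 events
```

The formula makes the leverage of the effect size visible: because the hazard ratio enters through its squared logarithm, halving the log hazard ratio quadruples the number of events required.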
ERIC Educational Resources Information Center
Handley, John C.
1991-01-01
Discussion of sampling methods used in information science research focuses on Fussler's method for sampling catalog cards and on sampling by length. Highlights include simple random sampling, sampling with probability proportional to size without replacement, sampling with replacement, and examples of estimating the number of books on shelves in certain…
ROMERO,VICENTE J.
2000-05-04
In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over spectra of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
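The report's empirical check, asking whether nominal 95% confidence intervals actually cover the true proportion about 95% of the time, can be sketched with a small simulation. This is a simplified stand-in using the classical Wald interval; sample sizes and trial counts are illustrative:

```python
import math
import random

def wald_coverage(p, n, trials=1000, z=1.96, seed=1):
    """Fraction of simulated samples whose Wald 95% CI covers the true p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        k = sum(rng.random() < p for _ in range(n))  # Bernoulli(p) sample
        phat = k / n
        half = z * math.sqrt(phat * (1 - phat) / n)
        hits += (phat - half <= p <= phat + half)
    return hits / trials

for p in (0.001, 0.1, 0.5, 0.9):
    print(p, wald_coverage(p, 500))
```

Near the extremes the interval often collapses to zero width, so empirical coverage falls far below the nominal 95%.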
Sigmundová, Dagmar; Sigmund, Erik; Vokáčová, Jana; Kopčáková, Jaroslava
2014-01-01
This study investigates whether more physically active parents bring up more physically active children and whether parents’ level of physical activity helps children achieve step count recommendations on weekdays and weekends. The participants (388 parents aged 35–45 and their 485 children aged 9–12) were randomly recruited from 21 Czech government-funded primary schools. The participants recorded pedometer step counts for seven days (≥10 h a day) during April–May and September–October of 2013. Logistic regression (Enter method) was used to examine the achievement of the international recommendations of 11,000 steps/day for girls and 13,000 steps/day for boys. The children of fathers and mothers who met the weekend recommendation of 10,000 steps were 5.48 (95% confidence interval: 1.65; 18.19; p < 0.01) and 3.60 times, respectively (95% confidence interval: 1.21; 10.74; p < 0.05) more likely to achieve the international weekend recommendation than the children of less active parents. The children of mothers who reached the weekday pedometer-based step count recommendation were 4.94 times (95% confidence interval: 1.45; 16.82; p < 0.05) more likely to fulfil the step count recommendation on weekdays than the children of less active mothers. PMID:25026084
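Odds ratios of the kind reported here come from logistic regression; for a single binary exposure without covariates they reduce to the familiar 2 × 2 table calculation, sketched below with purely hypothetical counts:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed & outcome, b = exposed & no outcome,
    c = unexposed & outcome, d = unexposed & no outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for illustration only, not the study's data
print(odds_ratio_ci(30, 20, 25, 90))
```

The published estimates additionally adjust for covariates, which requires the full regression model rather than this unadjusted table.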
Chacko, A; Bedard, A. C; Marks, D.J; Feirsen, N; Uderman, J.Z; Chimiklis, A; Rajwan, E; Cornwell, M; Anderson, L; Zwilling, A; Ramon, M
2013-01-01
Background Cogmed Working Memory Training (CWMT) has received considerable attention as a promising intervention for the treatment of Attention-Deficit/Hyperactivity Disorder (ADHD) in children. At the same time, methodological weaknesses in previous clinical trials call into question reported efficacy of CWMT. In particular, lack of equivalence in key aspects of CWMT (i.e., contingent reinforcement, time-on-task with computer training, parent-child interactions, supportive coaching) between CWMT and placebo versions of CWMT used in previous trials may account for the beneficial outcomes favoring CWMT. Methods Eighty-five 7- to 11-year old school-age children with ADHD (66 male; 78%) were randomized to either standard CWMT (CWMT Active) or a well-controlled CWMT placebo condition (CWMT Placebo) and evaluated before and 3 weeks after treatment. Dependent measures included parent and teacher ratings of ADHD symptoms; objective measures of attention, activity level, and impulsivity; and psychometric indices of working memory and academic achievement (Clinical trial title: Combined cognitive remediation and behavioral intervention for the treatment of Attention-Deficit/Hyperactivity Disorder; http://clinicaltrials.gov/ct2/show/NCT01137318). Results CWMT Active participants demonstrated significantly greater improvements in verbal and nonverbal working memory storage, but evidenced no discernible gains in working memory storage plus processing/manipulation. In addition, no treatment group differences were observed for any other outcome measures. Conclusions When a more rigorous comparison condition is utilized, CWMT demonstrates effects on certain aspects of working memory in children with ADHD; however, CWMT does not appear to foster treatment generalization to other domains of functioning. As such, CWMT should not be considered a viable treatment for children with ADHD. PMID:24117656
Jesdapatarakul, Somnuek; Tangjitgamol, Siriwan; Nguansangiam, Sudarat; Manusirivithaya, Sumonmal
2011-01-01
To assess the diagnostic performances of LiquiPrep® (LP) to detect cervical cellular abnormality in comparison to the Papanicolaou (Pap) smear in 194 women with abnormal cervical cytology who were scheduled for colposcopy at the institution between January 2008 and November 2008. The women were randomized to undergo a repeated cervical cytologic evaluation by Pap smear followed by LP, or the two methods in alternating order. The pathologist was blinded to the previous cytologic diagnosis and the pair of slides assigned for each woman. Cytologic results from each method were compared to subsequent histopathology. Mean screening times for the LP and Pap slides were 4.3 ± 1.2 minutes and 5.4 ± 1.1 minutes, respectively (P < 0.001). From 194 cases, ASC or AGC were diagnosed in 72 cases (37.1%) from LP and 68 cases (35.1%) from Pap smear. After excluding the ASC/AGC group, the overall cytologic diagnostic agreement between the two tests was 69 of 87 cases (73.6%), while the agreements with histologic diagnoses were 39/87 cases from LP (44.8%) and 41/87 (47.1%) from Pap smear (P = 0.824). The accuracy of LP was not significantly different from that of the Pap test, 43.4% (95% confidence interval [CI]: 34.8-52.1%) compared to 44.4% (95% CI: 35.7-53.1%). LP did not have superior performance over the Pap test to detect high-grade lesions (≥ cervical intraepithelial neoplasia II) using ASC/AGC as the threshold, with a sensitivity of 70.5% (95% CI: 64.0-76.9%) versus 77.3% (95% CI: 71.4-83.2%), respectively.
Irwin, Peter; Nguyen, Thi Ly-Huong; Chen, Chin-Yi
2008-05-01
For most applications, 3-5 observations, or samplings (n), are utilized to estimate total aerobic plate count in an average population (μ) that is greater than about 50 cells, or colony forming units (CFU), per sampled volume. We have chosen to utilize a 6 × 6 drop plate method for bacterial colony selection because it offers the means to rapidly perform all requisite dilutions in a 96-well format and plate these dilutions on solid media using minimal materials. Besides traditional quantitative purposes, we also need to select colonies which are well-separated from each other for the purpose of bacterial identification. To achieve this goal using the drop plate format requires the utilization of very dilute solutions (μ < 10 CFUs per sampled drop). At such low CFU densities the sampling error becomes problematic. To address this issue we produced both observed and computer-generated colony count data and divided a large sample of individual counts randomly into N subsamples each with n = 2-24 observations (N × n = 360). From these data we calculated the average total mean-normalized (x̄_tot, n = 360) deviation of the total standard deviation (s_tot) from each jth subsample's estimate (s_j), which we call Δ. When either observed or computer-generated Δ values were analyzed as a function of x̄_tot, a set of relationships (∝ 1/√x̄_tot) was generated which appeared to converge at an n of about 18 observations. This finding was verified analytically at even lower CFU concentrations (x̄_tot ≈ 1-10 CFUs per observation). Additional experiments using the drop plate format and n = 18 samplings were performed on food samples along with most probable number (MPN) analyses, and it was found that the two enumeration methods did not differ significantly.
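The convergence of subsample variability estimates as n grows can be imitated with simulated Poisson colony counts. This sketch uses the absolute deviation of subsample SDs from the pooled SD as a stand-in for the authors' Δ statistic; the mean count and subsample sizes are illustrative:

```python
import math
import random
import statistics

def poisson(rng, mu):
    """Knuth's Poisson sampler; adequate for small mu."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def mean_sd_deviation(mu, n, N=200, seed=7):
    """Mean-normalized average deviation of N subsample SDs (n counts each)
    from the SD of the pooled N*n simulated colony counts."""
    rng = random.Random(seed)
    counts = [poisson(rng, mu) for _ in range(N * n)]
    s_tot = statistics.pstdev(counts)
    devs = [abs(statistics.pstdev(counts[i * n:(i + 1) * n]) - s_tot)
            for i in range(N)]
    return statistics.mean(devs) / statistics.mean(counts)

for n in (2, 6, 18):
    print(n, round(mean_sd_deviation(5, n), 3))
```

As in the paper, the deviation shrinks rapidly with n, motivating a practical cut-off around n = 18.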
NASA Astrophysics Data System (ADS)
Frueh, W. Terry; Lancaster, Stephen T.
2014-03-01
Inherited age is defined herein as the difference between times of carbon fixation in a material and deposition of that material within sediments from which it is eventually sampled in order to estimate deposit age via radiocarbon dating. Inheritance generally leads to over-estimation of the age by an unknown amount and therefore represents unquantified bias and uncertainty that could potentially lead to erroneous inferences. Inherited ages in charcoal are likely to be larger, and therefore detectable relative to analytic error, where forests are dominated by longer-lived trees, material is stored for longer periods upslope, and downstream post-fire delivery of that material is dominated by mass movements, such as in the near-coastal mountains of northwestern North America. Inherited age distribution functions were estimated from radiocarbon dating of 126 charcoal pieces from 14 stream-bank exposures of debris-flow deposits, fluvial fines, and fluvial gravels along a headwater stream in the southern Oregon Coast Range, USA. In the region, these 3 facies are representative of the nearly continuous coalescing fan-fill complexes blanketing valley floors of headwater streams where the dominant transport mechanism shifts from debris-flow to fluvial. Within each depositional unit, and for each charcoal piece within that unit, convolution of the calibrated age distribution with that of the youngest piece yielded an inherited age distribution for the unit. Fits to the normalized sums of inherited age distributions for units of like facies provided estimates of facies-specific inherited age distribution functions. Finally, convolution of these distribution functions with calibrated deposit age distributions yielded corrections to published valley-floor deposit ages and residence time distributions from nearby similar sites. Residence time distributions were inferred from the normalized sums of distributions of ˜30 deposit ages at each of 4 sites: 2 adjacent valley reaches
Lai, Jung-Nien; Wu, Chien-Tung; Chen, Pau-Chung; Huang, Chiun-Sheng; Chow, Song-Nan; Wang, Jung-Der
2011-01-01
Background Hormonal therapy (HT), either estrogen alone (E-alone) or estrogen plus progesterone (E+P), appears to increase the risk for breast cancer in Western countries. However, limited information is available on the association between HT and breast cancer in Asian women, characterized mainly by dietary phytoestrogen intake and a low prevalence of contraceptive pill prescription. Methodology A total of 65,723 women (20–79 years of age) without cancer or use of Chinese herbal products were recruited from a nation-wide one-million representative sample of the National Health Insurance of Taiwan and followed from 1997 to 2008. Seven hundred and eighty incident cases of invasive breast cancer were diagnosed. Using a reference group that comprised 40,052 women who had never received a hormone prescription, Cox proportional hazard models were constructed to determine the hazard ratios for receiving different types of HT and the occurrence of breast cancer. Results 5,156 (20%) women ever used E+P, 2,798 (10.8%) ever used E-alone, and 17,717 (69%) ever used other preparation types. The Cox model revealed adjusted hazard ratios (HRs) of 2.05 (95% CI 1.37–3.07) for current users of E-alone and 8.65 (95% CI 5.45–13.70) for current users of E+P. Using women who had ceased to take hormonal medication for 6 years or more as the reference group, the adjusted HRs were significantly elevated for current users and for women who had discontinued hormonal medication for less than 6 years. Conclusions Current users of either E-alone or E+P have an increased risk for invasive breast cancer in Taiwan, and precautions should be taken when such agents are prescribed. PMID:21998640
2011-01-01
Background An optimal level of physical activity (PA) in adolescence influences the level of PA in adulthood. Although PA declines with age have been demonstrated repeatedly, few studies have been carried out on secular trends. The present study assessed levels, types and secular trends of PA and sedentary behaviour of a sample of adolescents in the Czech Republic. Methods The study comprised two cross-sectional cohorts of adolescents ten years apart. The analysis compared data collected through a week-long monitoring of adolescents' PA in 1998-2000 and 2008-2010. Adolescents wore either a Yamax SW-701 or an Omron HJ-105 pedometer continuously for 7 days (at least 10 hours per day) excluding sleeping, hygiene and bathing. They also recorded their number of steps per day, the type and duration of PA and sedentary behaviour (in minutes) on record sheets. In total, 902 adolescents (410 boys; 492 girls) aged 14-18 were eligible for analysis. Results Overweight and obesity in Czech adolescents participating in this study increased from 5.5% (older cohort, 1998-2000) to 10.4% (younger cohort, 2008-2010). There were no significant inter-cohort changes in the total amount of sedentary behaviour in boys. However, in girls on weekdays there was a significant increase in the total duration of sedentary behaviour of the younger cohort (2008-2010) compared with the older one (1998-2000). Studying and screen time (television and computer) were among the main sedentary behaviours in Czech adolescents. The types of sedentary behaviour also changed: watching TV (1998-2000) was replaced by time spent on computers (2008-2010). Achievement of the Czech health-related criterion (11,000 steps per day) decreased only in boys, from 68% (1998-2000) to 55% (2008-2010). Across both genders, 55%-75% of Czech adolescents met the health-related criterion of recommended steps per day; however, fewer participants in the younger cohort (2008-2010) met this criterion than in the older cohort (1998-2000) ten
Chen, Xinguang; Yu, Bin; Zhou, Dunjin; Zhou, Wang; Gong, Jie; Li, Shiyue; Stanton, Bonita
2015-01-01
Background Mobile populations and men who have sex with men (MSM) play an increasing role in the current HIV epidemic in China and across the globe. While considerable research has addressed both of these at-risk populations, more effective HIV control requires accurate data on the number of MSM at the population level, particularly MSM among migrant populations. Methods Survey data from a random sample of male rural-to-urban migrants (aged 18-45, n=572) in Wuhan, China were analyzed and compared with those of randomly selected non-migrant urban (n=566) and rural counterparts (n=580). GIS/GPS technologies were used for sampling and the survey estimation method was used for data analysis. Results HIV-related risk behaviors among rural-to-urban migrants were similar to those among the two comparison groups. The estimated proportion of MSM among migrants [95% CI] was 5.8% [4.7, 6.8], higher than 2.8% [1.2, 4.5] for rural residents and 1.0% [0.0, 2.4] for urban residents, respectively. Among these migrants, the MSM were more likely than non-MSM to be older in age, married, and to have migrated to more cities. They were also more likely to co-habit with others in rental properties located in new towns and in neighborhoods with fewer old acquaintances and more entertainment establishments. In addition, they were more likely to engage in commercial sex and less likely to consistently use condoms. Conclusion Findings of this study indicate that, compared to rural and urban populations, the migrant population in Wuhan consists of a higher proportion of MSM who also exhibit higher levels of HIV-related risk behaviors. More effective interventions should target this population with a focus on neighborhood factors, social capital and collective efficacy for risk reduction. PMID:26241900
Control of Randomly Sampled Robotic Systems
1989-05-01
[Extraction-garbled excerpt from the report: the header of a C listing, PumA26O.c (Artificial Intelligence Laboratory, 1972; timestamped Wed Mar 8 17:51:04 1989), beginning with `#include <math.h>` and constant definitions, followed by a remark that under an equivalent transformation of the state variables the matrix norm depends on the transformation, ‖T‖·‖T⁻¹‖, a fact taken into account in the analysis.]
Sugarman, R.M.
1960-08-30
An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sampled waveform to pass its peak amplitude before the circuit selects it for display.
Kepler, Christopher K
2017-04-01
An understanding of randomization is important both for study design and to assist medical professionals in evaluating the medical literature. Simple randomization can be done through a variety of techniques, but carries a risk of unequal distribution of subjects into treatment groups. Block randomization can be used to overcome this limitation by ensuring that small subgroups are distributed evenly between treatment groups. Finally, techniques can be used to evenly distribute subjects between treatment groups while accounting for confounding variables, so as to not skew results when there is a high index of suspicion that a particular variable will influence outcome.
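The permuted-block technique described above can be sketched in a few lines: shuffle a balanced block of assignments and repeat, so every block (and hence any prefix of the sequence) stays nearly balanced. A generic illustration, not tied to any particular trial:

```python
import random

def block_randomize(n_subjects, block_size=4, seed=42):
    """Permuted-block allocation to two arms; each block is half A, half B."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < n_subjects:
        block = ["A", "B"] * (block_size // 2)
        rng.shuffle(block)  # random order within the balanced block
        seq.extend(block)
    return seq[:n_subjects]

alloc = block_randomize(20)
print(alloc, alloc.count("A"), alloc.count("B"))
```

Simple randomization, by contrast, would assign each subject by an independent coin flip and can drift out of balance in small trials.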
NASA Astrophysics Data System (ADS)
ajansen; kwhitefoot; panteltje1; edprochak; sudhakar, the
2014-07-01
In reply to the physicsworld.com news story “How to make a quantum random-number generator from a mobile phone” (16 May, http://ow.ly/xFiYc, see also p5), which describes a way of delivering random numbers by counting the number of photons that impinge on each of the individual pixels in the camera of a Nokia N9 smartphone.
NASA Technical Reports Server (NTRS)
Messaro, Semma; Harrison, Phillip
2010-01-01
Ares I zonal random vibration environments due to acoustic impingement and combustion processes are developed for liftoff, ascent and reentry. Random vibration test criteria for Ares I Upper Stage pyrotechnic components are developed by enveloping the applicable zonal environments where each component is located. Random vibration tests will be conducted to assure that these components will survive and function appropriately after exposure to the expected vibration environments. Methodology: Random vibration test criteria for Ares I Upper Stage pyrotechnic components were desired that would envelop all the applicable environments where each component was located. Applicable Ares I vehicle drawings and design information needed to be assessed to determine the location(s) for each component on the Ares I Upper Stage. Design and test criteria needed to be developed by plotting and enveloping the applicable environments using Microsoft Excel spreadsheet software and documenting them in a report using Microsoft Word word-processing software. Conclusion: Random vibration liftoff, ascent, and green run design and test criteria for the Upper Stage pyrotechnic components were developed by using Microsoft Excel to envelop zonal environments applicable to each component. Results were transferred from Excel into a report using Microsoft Word. After the report is reviewed and edited by my mentor it will be submitted for publication as an attachment to a memorandum. Pyrotechnic component designers will extract criteria from my report for incorporation into the design and test specifications for components. Eventually the hardware will be tested to the environments I developed to assure that the components will survive and function appropriately after exposure to the expected vibration environments.
GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES
This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...
Pham, Vy P; Luce, Andrea M; Ruppelt, Sara C; Wei, Wenjing; Aitken, Samuel L; Musick, William L; Roux, Ryan K; Garey, Kevin W
2015-10-01
Consensus on the optimal treatment of Clostridium difficile infection (CDI) is rapidly changing. Treatment with metronidazole has been associated with increased clinical failure rates; however, the reasons for this are unclear. The purpose of this study was to assess age-related treatment response rates in hospitalized patients with CDI treated with metronidazole. This was a retrospective, multicenter cohort study of hospitalized patients with CDI. Patients were assessed for refractory CDI, defined as persistent diarrhea after 7 days of metronidazole therapy, and stratified by age and clinical characteristics. A total of 242 individuals, aged 60 ± 18 years (Charlson comorbidity index, 3.8 ± 2.4; Horn's index, 1.7 ± 1.0) were included. One hundred twenty-eight patients (53%) had severe CDI. Seventy patients (29%) had refractory CDI, a percentage that increased from 22% to 28% and to 37% for patients aged less than 50 years, for patients from 50 to 70 years, and for patients aged >70 years, respectively (P = 0.05). In multivariate analysis, Horn's index (odds ratio [OR], 2.04; 95% confidence interval [CI], 1.50 to 2.77; P < 0.001), severe CDI (OR, 2.25; 95% CI, 1.15 to 4.41; P = 0.018), and continued use of antibiotics (OR, 2.65; 95% CI, 1.30 to 5.39; P = 0.0072) were identified as significant predictors of refractory CDI. Age was not identified as an independent risk factor for refractory CDI. Therefore, hospitalized elderly patients with CDI treated with metronidazole had increased refractory CDI rates likely due to increased underlying severity of illness, severity of CDI, and concomitant antibiotic use. These results may help identify patients that may benefit from alternative C. difficile treatments other than metronidazole.
Response of the Elderly to Disaster: An Age-Stratified Analysis.
ERIC Educational Resources Information Center
Bolin, Robert; Klenow, Daniel J.
1982-01-01
Analyzed the effect of age on elderly tornado victims' (N=62) responses to stress effects. Compared to younger victims (N=240), the elderly did not suffer disproportionate material losses, but were more likely to be injured and to have a death in the household. Elderly victims had a lower incidence of emotional and family problems. (Author/JAC)
NASA Astrophysics Data System (ADS)
Tapiero, Charles S.; Vallois, Pierre
2016-11-01
The premise of this paper is that a fractional probability distribution is based on fractional operators and on the fractional (Hurst) index used, which alters the classical setting of random variables. For example, a random variable defined by its density function might not have a fractional density function defined in the conventional sense. Practically, this implies that a distribution's granularity defined by a fractional kernel may have properties that differ due to the fractional index used and the fractional calculus applied to define it. The purpose of this paper is to consider an application of fractional calculus to define the fractional density function of a random variable. In addition, we provide and prove a number of results, defining the functional forms of these distributions as well as their existence. In particular, we define fractional probability distributions for increasing and decreasing functions that are right continuous. Examples are used to motivate the usefulness of a statistical approach to fractional calculus and its application to economic and financial problems. In conclusion, given the breadth and extent of such problems, this paper may be considered a preliminary attempt to construct statistical fractional models.
NASA Astrophysics Data System (ADS)
Malyshev, V. A.
1998-04-01
Contents: § 1. Definitions (1.1. Grammars; 1.2. Random grammars and L-systems; 1.3. Semigroup representations). § 2. Infinite string dynamics (2.1. Cluster expansion; 2.2. Cluster dynamics; 2.3. Local observer). § 3. Large time behaviour: small perturbations (3.1. Invariant measures; 3.2. Classification). § 4. Large time behaviour: context-free case (4.1. Invariant measures for grammars; 4.2. L-systems; 4.3. Fractal correlation functions; 4.4. Measures on languages). Bibliography.
Is random access memory random?
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Most software is constructed on the assumption that the programs and data are stored in random access memory (RAM). Physical limitations on the relative speeds of processor and memory elements lead to a variety of memory organizations that match processor addressing rate with memory service rate. These include interleaved and cached memory. A very high fraction of a processor's address requests can be satisfied from the cache without reference to the main memory. The cache requests information from main memory in blocks that can be transferred at the full memory speed. Programmers who organize algorithms for locality can realize the highest performance from these computers.
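The payoff of organizing algorithms for locality can be illustrated with a toy direct-mapped cache model: a sequential scan hits on most accesses, while a random address trace mostly misses. The cache geometry here is arbitrary:

```python
import random

def cache_hit_rate(addresses, num_lines=64, line_size=16):
    """Hit rate of a direct-mapped cache over an address trace."""
    tags = [None] * num_lines          # one stored block tag per cache line
    hits = 0
    for a in addresses:
        block = a // line_size         # memory block containing the address
        idx = block % num_lines        # line the block maps to
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block          # miss: fetch the block into the line
    return hits / len(addresses)

seq = list(range(4096))                              # sequential scan
rnd = random.Random(0).choices(range(4096), k=4096)  # random accesses
print(cache_hit_rate(seq), cache_hit_rate(rnd))      # locality wins
```

With 16 addresses per line, the sequential scan misses only on the first address of each block, giving a hit rate of 15/16.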
NASA Astrophysics Data System (ADS)
Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang
2016-09-01
Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
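A much simpler classical ancestor of such extractors is von Neumann's debiasing trick, which turns independent coin flips of unknown fixed bias into unbiased bits. It is only an illustrative baseline, far weaker than the big-source extractor the authors describe:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: map bit pairs 01 -> 0, 10 -> 1, drop 00/11.
    Output bits are unbiased if inputs are i.i.d. with any fixed bias."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:        # discordant pairs occur with equal probability
            out.append(b1)  # ...regardless of the input bias
    return out

print(von_neumann_extract([0, 1, 1, 1, 1, 0, 0, 0, 0, 1]))  # -> [0, 1, 0]
```

The cost is throughput: a heavily biased source yields few discordant pairs, which is one reason modern extractors aim for much better rates.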
Randomly Hyperbranched Polymers
NASA Astrophysics Data System (ADS)
Konkolewicz, Dominik; Gilbert, Robert G.; Gray-Weale, Angus
2007-06-01
We describe a model for the structures of randomly hyperbranched polymers in solution, and find a logarithmic growth of radius with polymer mass. We include segmental overcrowding, which puts an upper limit on the density. The model is tested against simulations, against data on amylopectin, a major component of starch, on glycogen, and on polyglycerols. For samples of synthetic polyglycerol and glycogen, our model holds well for all the available data. The model reveals higher-level scaling structure in glycogen, related to the β particles seen in electron microscopy.
ERIC Educational Resources Information Center
Flournoy, Nancy
Designs for sequential sampling procedures that adapt to cumulative information are discussed. A familiar illustration is the play-the-winner rule in which there are two treatments; after a random start, the same treatment is continued as long as each successive subject registers a success. When a failure occurs, the other treatment is used until…
Methodology Series Module 5: Sampling Strategies.
Setia, Maninder Singh
2016-01-01
Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling, based on the researcher's choice of a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as simple random sampling or stratified random sampling) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
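The two probability-sampling methods the module names can be sketched in a few lines. The population, the stratifying variable, and the proportional-allocation rule below are my own illustration, not taken from the module.

```python
import random

random.seed(42)

# A hypothetical sampling frame: 300 people with a binary stratum label.
population = [{"id": i, "sex": "F" if i % 3 else "M"} for i in range(300)]

# Simple random sample: every member has an equal chance of selection.
simple = random.sample(population, k=30)

def stratified_sample(pop, key, k):
    """Stratified random sample with proportional allocation:
    draw from each stratum in proportion to its size."""
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    out = []
    for members in strata.values():
        n = round(k * len(members) / len(pop))
        out.extend(random.sample(members, n))
    return out

strat = stratified_sample(population, "sex", 30)
```

With 200 "F" and 100 "M" members, the stratified draw allocates 20 and 10 slots respectively, guaranteeing both strata are represented, which a simple random sample only achieves in expectation.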
Random broadcast on random geometric graphs
Bradonjic, Milan; Elsasser, Robert; Friedrich, Tobias
2009-01-01
In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs, with high probability, in two regimes: (i) when the RGG is connected, and (ii) when a giant component exists in the RGG. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
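The push model described above is easy to simulate. The sketch below counts rounds until every node is informed; it uses a toy cycle graph rather than an RGG, purely for illustration.

```python
import random

random.seed(0)

def push_broadcast_time(adj, start):
    """Rounds of the push protocol until every node is informed:
    each round, every informed node pings one uniform random neighbour."""
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        newly = {random.choice(adj[u]) for u in informed}
        informed |= newly
        rounds += 1
    return rounds

# Toy graph: a 6-cycle (each node linked to its two neighbours).
n = 6
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(push_broadcast_time(adj, 0))
```

Since the informed set can at most double each round, the answer is always at least ⌈log₂ n⌉; for sparse graphs like this cycle the diameter term dominates, matching the Θ(diam(G)) result.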
How random is a random vector?
Eliazar, Iddo
2015-12-15
Over 80 years ago Samuel Wilks proposed that the “generalized variance” of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined to very specific niches in statistics. In this paper we establish that the “Wilks standard deviation” (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the “uncorrelation index” (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: “randomness measures” and “independence indices” of random vectors. In turn, these general notions give rise to “randomness diagrams”: tangible planar visualizations that answer the question, how random is a random vector? The notion of “independence indices” yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
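The Wilks quantities are directly computable from data: estimate the covariance matrix and take the determinant (generalized variance) and its square root (Wilks standard deviation). The covariance values below are arbitrary, chosen only to exercise the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 draws of a 3-dimensional random vector with a known
# (arbitrarily chosen) covariance matrix.
cov = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.5]])
x = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=10_000)

# Generalized variance = determinant of the estimated covariance matrix.
gen_var = np.linalg.det(np.cov(x, rowvar=False))
wilks_sd = np.sqrt(gen_var)  # the "Wilks standard deviation"
print(wilks_sd)
```

For this covariance matrix the true generalized variance is det(cov) ≈ 2.445, so the estimate from 10,000 draws lands close to √2.445 ≈ 1.56.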
Mansour-Ghanaei, Fariborz; Joukar, Farahnaz; Atshani, Seyed Mehrbod; Chagharvand, Sepideh; Souti, Fatemeh
2013-01-01
Many people with gastro-esophageal reflux symptoms do not consult a physician; therefore studies on gastro-esophageal reflux in general practice or in hospitals may not accurately describe the burden of gastro-esophageal reflux symptoms in the general population. The aim of this study was to assess the prevalence of gastro-esophageal reflux disease and its association with some lifestyle parameters in Rasht, Iran. A telephone survey was performed. Phone numbers were randomly collected from the telecommunication service center of Rasht. 1473 people (mean age: 38.31 ± 13.09) were included in the study. People who did not answer the phone after three attempts or did not consent to enter the study were excluded. Data were collected by an examiner using the GerdQ questionnaire. The validity and reliability of the questionnaire were tested by translation and retranslation, and a pilot study was performed to assess its appropriateness. The prevalence of gastro-esophageal reflux was 2.4% daily, 9.1% weekly, and 11.3% monthly. Among the patients with gastro-esophageal reflux, 69.5% were female. There was a significant positive association between gastro-esophageal reflux prevalence and body mass index, smoking habits, eating salted or smoked foods, lying down immediately after the meal, taking certain drugs such as non-steroidal anti-inflammatory drugs/aminosalicylic acid, and the age group of 30-45 years. Overall, the prevalence of weekly gastro-esophageal reflux in the present survey was 9.1%, which was less than in other similar studies in Iran and some other countries. PMID:24046810
Directed random walk with random restarts: The Sisyphus random walk
NASA Astrophysics Data System (ADS)
Montero, Miquel; Villarroel, Javier
2016-09-01
In this paper we consider a particular version of the random walk with restarts: random reset events which suddenly bring the system to the starting value. We analyze its relevant statistical properties, like the transition probability, and show how an equilibrium state appears. Formulas for the first-passage time, high-water marks, and other extreme statistics are also derived; we consider counting problems naturally associated with the system. Finally we indicate feasible generalizations useful for interpreting different physical effects.
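One concrete reading of the Sisyphus walk (my simplification of the paper's setup: unit upward drift, reset probability p per step) can be simulated directly:

```python
import random

random.seed(3)

def sisyphus_walk(steps, p_reset, x0=0):
    """Directed walk: climb one unit each step, but with probability
    p_reset a reset event sends the walker back to the start x0."""
    x, path = x0, [x0]
    for _ in range(steps):
        x = x0 if random.random() < p_reset else x + 1
        path.append(x)
    return path

path = sisyphus_walk(1000, p_reset=0.1)
# In equilibrium the height is geometrically distributed with mean
# (1 - p)/p = 9 for p_reset = 0.1.
print(sum(path) / len(path))
```

The empirical average settles near the equilibrium mean, illustrating how the reset events create the stationary state the abstract refers to.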
NASA Astrophysics Data System (ADS)
Gorbatenko, A. A.; Revina, E. I.
2015-10-01
The review is devoted to the major advances in laser sampling. The advantages and drawbacks of the technique are considered. Specific features of combinations of laser sampling with various instrumental analytical methods, primarily inductively coupled plasma mass spectrometry, are discussed. Examples of practical implementation of hybrid methods involving laser sampling as well as corresponding analytical characteristics are presented. The bibliography includes 78 references.
Lunar Sample Quarantine & Sample Curation
NASA Technical Reports Server (NTRS)
Allton, Judith H.
2000-01-01
The main goal of this presentation is to discuss some of the responsibilities of the lunar sample quarantine project: flying the mission safely and on schedule, protecting the Earth from biohazards, and preserving the scientific integrity of the samples.
Direct dialling of Haar random unitary matrices
NASA Astrophysics Data System (ADS)
Russell, Nicholas J.; Chakhmakhchyan, Levon; O’Brien, Jeremy L.; Laing, Anthony
2017-03-01
Random unitary matrices find a number of applications in quantum information science, and are central to the recently defined boson sampling algorithm for photons in linear optics. We describe an operationally simple method to directly implement Haar random unitary matrices in optical circuits, with no requirement for prior or explicit matrix calculations. Our physically motivated and compact representation directly maps independent probability density functions for parameters in Haar random unitary matrices, to optical circuit components. We go on to extend the results to the case of random unitaries for qubits.
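For reference, the standard numerical recipe for drawing Haar-random unitaries is QR decomposition of a complex Ginibre matrix with a phase correction; the paper's contribution is a circuit parametrization that avoids such explicit matrix computations. A sketch of the standard recipe:

```python
import numpy as np

def haar_unitary(n, rng=np.random.default_rng(0)):
    """Draw an n x n Haar-random unitary: QR-decompose a complex
    Ginibre matrix, then fix column phases so the law is exactly Haar."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # multiply each column by a unit phase

u = haar_unitary(4)
print(np.allclose(u.conj().T @ u, np.eye(4)))  # unitarity check
```

Without the phase correction, plain QR output is biased by the decomposition's sign conventions; dividing out the phases of R's diagonal restores invariance.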
Spectroscopy with Random and Displaced Random Ensembles
NASA Astrophysics Data System (ADS)
Velázquez, V.; Zuker, A. P.
2002-02-01
Because of the time-reversal invariance of the angular momentum operator J², the average energies and variances at fixed J for random two-body Hamiltonians exhibit odd-even-J staggering that may be especially strong for J = 0. It is shown that upon ensemble averaging over random runs, this behavior is reflected in the yrast states. Displaced (attractive) random ensembles lead to rotational spectra with strongly enhanced B(E2) transitions for a certain class of model spaces. It is explained how to generalize these results to other forms of collectivity.
Adolph, Karen E.; Robinson, Scott R.
2011-01-01
Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of the enterprise. This article discusses how to sample development in order to accurately discern the shape of developmental change. The ideal solution is daunting: to summarize behavior over 24-hour intervals and collect daily samples over the critical periods of change. We discuss the magnitude of errors due to undersampling, and the risks associated with oversampling. When daily sampling is not feasible, we offer suggestions for sampling methods that can provide preliminary reference points and provisional sketches of the general shape of a developmental trajectory. Denser sampling then can be applied strategically during periods of enhanced variability, inflections in the rate of developmental change, or in relation to key events or processes that may affect the course of change. Despite the challenges of dense repeated sampling, researchers must take seriously the problem of sampling on a developmental time scale if we are to know the true shape of developmental change. PMID:22140355
On Gaussian random supergravity
NASA Astrophysics Data System (ADS)
Bachlechner, Thomas C.
2014-04-01
We study the distribution of metastable vacua and the likelihood of slow-roll inflation in high-dimensional random landscapes. We consider two examples of landscapes: a Gaussian random potential and an effective supergravity potential defined via a Gaussian random superpotential and a trivial Kähler potential. To examine these landscapes we introduce a random matrix model that describes the correlations between various derivatives, and we propose an efficient algorithm that allows for a numerical study of high-dimensional random fields. Using these novel tools, we find that the vast majority of metastable critical points in N-dimensional random supergravities are either approximately supersymmetric with |F| ≪ M_susy or supersymmetric. Such approximately supersymmetric points are dynamical attractors in the landscape, and the probability that a randomly chosen critical point is metastable scales as log(P) ∝ −N. We argue that random supergravities lead to potentially interesting inflationary dynamics.
Labuz, Joseph M.; Takayama, Shuichi
2014-01-01
Sampling – the process of collecting, preparing, and introducing an appropriate volume element (voxel) into a system – is often underappreciated and pushed behind the scenes in lab-on-a-chip research. What often stands between proof-of-principle demonstrations of potentially exciting technology and its broader dissemination and actual use, however, is the effectiveness of sample collection and preparation. The power of micro- and nanofluidics to improve reactions, sensing, separation, and cell culture cannot be accessed if sampling is not equally efficient and reliable. This perspective highlights recent successes and assesses current challenges and opportunities in this area. PMID:24781100
Quantum random number generation
Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing
2016-06-28
Quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness of devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness of the device and the random number generation speed.
Hannaford, B.A.; Rosenberg, R.; Segaser, C.L.; Terry, C.L.
1961-01-17
An apparatus is described for the batch sampling of radioactive liquids, such as slurries, from a system by remote control, while providing shielding to protect operating personnel from the harmful effects of radiation.
Experimental scattershot boson sampling
Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J.; Galvão, Ernesto F.; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio
2015-01-01
Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of quantum computational supremacy. PMID:26601164
Gage, Julia C.; Katki, Hormuzd A.; Schiffman, Mark; Fetterman, Barbara; Poitras, Nancy E.; Lorey, Thomas; Cheung, Li C.; Castle, Philip E.; Kinney, Walter K.
2014-01-01
It is unclear whether a woman's age influences her risk of cervical intraepithelial neoplasia grade 3 or worse (CIN3+) upon detection of HPV. A large change in risk as women age would influence vaccination and screening policies. Among 972,029 women age 30-64 undergoing screening with Pap and HPV testing (Hybrid Capture 2, Qiagen, Germantown, MD, USA) at Kaiser Permanente Northern California (KPNC), we calculated age-specific 5-year CIN3+ risks among women with HPV infections detected at enrollment, and among women with “newly detected” HPV infections at their second screening visit. 57,899 women (6.0%) had an enrollment HPV infection. Among the women testing HPV negative at enrollment with a second screening visit, 16,724 (3.3%) had a newly detected HPV infection at their second visit. Both enrollment and newly detected HPV rates declined with age (p<.001). Women with enrollment vs. newly detected HPV infection had higher 5-year CIN3+ risks: 8.5% vs. 3.9%, (p<.0001). Risks did not increase with age but declined slightly from 30-34 years to 60-64 years: 9.4% vs. 7.4% (p=0.017) for enrollment HPV and 5.1% vs. 3.5% (p=0.014) for newly detected HPV. Among women age 30-64 in an established screening program, women with newly detected HPV infections were at lower risk than women with enrollment infections, suggesting reduced benefit vaccinating women at older ages. Although the rates of HPV infection declined dramatically with age, the subsequent CIN3+ risks associated with HPV infection declined only slightly. The CIN3+ risks among older women are sufficiently elevated to warrant continued screening through age 65. PMID:25136967
Universality in numerical computations with random data.
Deift, Percy A; Menon, Govind; Olver, Sheehan; Trogdon, Thomas
2014-10-21
The authors present evidence for universality in numerical computations with random data. Given a (possibly stochastic) numerical algorithm with random input data, the time (or number of iterations) to convergence (within a given tolerance) is a random variable, called the halting time. Two-component universality is observed for the fluctuations of the halting time--i.e., the histogram for the halting times, centered by the sample average and scaled by the sample variance, collapses to a universal curve, independent of the input data distribution, as the dimension increases. Thus, up to two components--the sample average and the sample variance--the statistics for the halting time are universally prescribed. The case studies include six standard numerical algorithms as well as a model of neural computation and decision-making. A link to relevant software is provided for readers who would like to do computations of their own.
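The centering-and-scaling step in the halting-time analysis is straightforward to reproduce. The sketch below uses the power method as the algorithm and random symmetric matrices as input; the choice of algorithm, sizes, and tolerance are illustrative, not the paper's case studies.

```python
import numpy as np

def halting_time(a, tol=1e-6, max_iter=2000):
    """Number of power-method iterations until the top-eigenvalue
    estimate stabilizes to within tol: a simple 'halting time'."""
    v = np.ones(a.shape[0]) / np.sqrt(a.shape[0])
    prev = np.inf
    for k in range(1, max_iter + 1):
        v = a @ v
        v /= np.linalg.norm(v)
        lam = v @ a @ v  # Rayleigh quotient
        if abs(lam - prev) < tol:
            return k
        prev = lam
    return max_iter

rng = np.random.default_rng(1)
times = []
for _ in range(100):
    g = rng.normal(size=(20, 20))
    times.append(halting_time((g + g.T) / 2))  # random symmetric input

t = np.asarray(times, dtype=float)
normalized = (t - t.mean()) / t.std()  # centred and scaled, as in the paper
```

The universality claim is that the histogram of `normalized` collapses to one curve as dimension grows, regardless of the input-data distribution; this sketch only produces the normalized sample.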
[A comparison of convenience sampling and purposive sampling].
Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien
2014-06-01
Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation, not by statistical power analysis.
Gordon, N.R.; King, L.L.; Jackson, P.O.; Zulich, A.W.
1989-07-18
A sampling apparatus is provided for sampling substances from solid surfaces. The apparatus includes first and second elongated tubular bodies which telescopically and sealingly join relative to one another. An absorbent pad is mounted to the end of a rod which is slidably received through a passageway in the end of one of the joined bodies. The rod is preferably slidably and rotatably received through the passageway, yet provides a selective fluid tight seal relative thereto. A recess is formed in the rod. When the recess and passageway are positioned to be coincident, fluid is permitted to flow through the passageway and around the rod. The pad is preferably laterally orientable relative to the rod and foldably retractable to within one of the bodies. A solvent is provided for wetting of the pad and solubilizing or suspending the material being sampled from a particular surface. 15 figs.
Quantum random number generators
NASA Astrophysics Data System (ADS)
Herrero-Collantes, Miguel; Garcia-Escartin, Juan Carlos
2017-01-01
Random numbers are a fundamental resource in science and engineering with important applications in simulation and cryptography. The inherent randomness at the core of quantum mechanics makes quantum systems a perfect source of entropy. Quantum random number generation is one of the most mature quantum technologies, with many alternative generation methods. This review discusses the different technologies in quantum random number generation, from the early devices based on radioactive decay to the multiple ways to use the quantum states of light to gather entropy from a quantum origin. Randomness extraction and amplification, as well as the notable possibility of generating trusted random numbers even with untrusted hardware using device-independent generation protocols, are also discussed.
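One classical post-processing step in this area, von Neumann de-biasing, fits in a few lines. This is the textbook extractor for independent-but-biased bits, not any specific QRNG's pipeline:

```python
import random

def von_neumann_extract(bits):
    """De-bias an independent-but-biased bit stream: read bits in
    pairs; (0,1) -> 0, (1,0) -> 1, and (0,0)/(1,1) are discarded."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

random.seed(7)
# A source biased heavily toward 1 (80% ones).
biased = [1 if random.random() < 0.8 else 0 for _ in range(10_000)]
unbiased = von_neumann_extract(biased)
print(sum(unbiased) / len(unbiased))  # close to 0.5
```

The output rate is low (the probability a pair is kept is 2p(1-p)), which is why modern QRNGs use seeded extractors instead, but the correctness argument is immediate: for independent bits, (0,1) and (1,0) are equally likely.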
NASA Astrophysics Data System (ADS)
Gurau, Razvan
2016-09-01
This article is a preface to the SIGMA special issue ''Tensor Models, Formalism and Applications'', http://www.emis.de/journals/SIGMA/Tensor_Models.html. The issue is a collection of eight excellent, up-to-date reviews on random tensor models. The reviews combine pedagogical introductions meant for a general audience with presentations of the most recent developments in the field. This preface aims to give a condensed panoramic overview of random tensors as the natural generalization of random matrices to higher dimensions.
Random Packing and Random Covering Sequences.
1988-03-24
obtained by appealing to a result due to Marsaglia [39] and de Finetti [8]. Their result states that if (X1, X2, ..., Xn) is a random point on the simplex {X ∈ ... to sequential coverage problems. J. Appl. Prob. 11, 281-293. [8] de Finetti, B. (1964). "Alcune osservazioni in tema di suddivisione casuale." Giornale ...
1972-08-01
Ray Heathcock, Advanced Techniques Branch, Plans and Programs Analysis Division, Directorate for Product Assurance, U.S. Army Missile Command, Redstone Arsenal 35609. The Directorate for Product Assurance has established a rather unique computer program for handling a variety of chain sampling schemes and is available for ...
The Prevalence of Elder Abuse: A Random Sample Survey.
ERIC Educational Resources Information Center
Pillemer, Karl; Finkelhor, David
1988-01-01
Conducted interviews with 2,020 community-dwelling elderly persons in Boston regarding their experience of physical violence, verbal aggression, and neglect. Prevalence rate of overall maltreatment was 32 elderly persons per 1,000. Spouses were found to be most likely abusers, and similar numbers of men and women were victims, although women…
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n (log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
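The abstract does not reproduce the authors' general-kernel algorithm, but the same goal for the constant-kernel (Erdős–Rényi) special case is met by the well-known geometric-skipping technique of Batagelj and Brandes, which runs in expected O(n + m) time rather than O(n²). A minimal sketch:

```python
import math
import random

def sample_gnp_fast(n, p, seed=None):
    """Sample an Erdos-Renyi G(n, p) graph in expected O(n + m) time.

    Rather than flipping a coin for every one of the n*(n-1)/2 pairs,
    skip between successive edges by geometrically distributed gaps,
    so the cost scales with the number of edges actually generated.
    """
    rng = random.Random(seed)
    edges = []
    log_q = math.log(1.0 - p)
    v, w = 1, -1  # current row of the lower triangle, current column
    while v < n:
        r = rng.random()
        # geometric skip: number of pairs to jump over before the next edge
        w = w + 1 + int(math.log(1.0 - r) / log_q)
        while w >= v and v < n:
            w, v = w - v, v + 1  # wrap into the next row of the triangle
        if v < n:
            edges.append((v, w))
    return edges
```

Each returned pair (v, w) satisfies w < v, i.e. the edges enumerate the lower triangle of the adjacency matrix exactly once.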
NASA Technical Reports Server (NTRS)
2008-01-01
Three locations to the right of the test dig area are identified for the first samples to be delivered to the Thermal and Evolved Gas Analyzer (TEGA), the Wet Chemistry Lab (WCL), and the Optical Microscope (OM) on NASA's Phoenix Mars Lander. These sampling areas are informally labeled 'Baby Bear', 'Mama Bear', and 'Papa Bear' respectively. This image was taken on the seventh day of the Mars mission, or Sol 7 (June 1, 2008) by the Surface Stereo Imager aboard NASA's Phoenix Mars Lander.
The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
Randomness: Quantum versus classical
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2016-05-01
Recent tremendous development of quantum information theory has led to a number of quantum technological projects, e.g. quantum random generators. This development has stimulated a new wave of interest in quantum foundations. One of the most intriguing problems of quantum foundations is the elaboration of a consistent and commonly accepted interpretation of a quantum state. A closely related problem is the clarification of the notion of quantum randomness and its interrelation with classical randomness. In this short review, we discuss the basics of the classical theory of randomness (which by itself is very complex and characterized by a diversity of approaches) and compare it with irreducible quantum randomness. We also discuss briefly "digital philosophy", its role in physics (classical and quantum) and its coupling to the information interpretation of quantum mechanics (QM).
Autonomous Byte Stream Randomizer
NASA Technical Reports Server (NTRS)
Paloulian, George K.; Woo, Simon S.; Chow, Edward T.
2013-01-01
Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity unreadable in its own right, but when combined with all N pieces, can be reconstructed into the original. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability possessing the ability to generate the same cryptographically secure sequence on different machines and at different times, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
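The core mechanism, a seed-driven in-place Fisher-Yates shuffle whose swap sequence can be replayed to reconstruct the data, can be sketched as follows. Here `random.Random` stands in for the cryptographically secure generator the software uses; the function names are illustrative:

```python
import random

def shuffle_bytes(data: bytes, seed: int) -> bytes:
    """Shuffle a byte string with Fisher-Yates, driven by a seeded PRNG.

    The same seed regenerates the same swap sequence, so the permutation
    is reversible. (A real system would use a cryptographically secure
    PRNG; random.Random here is a stand-in for illustration.)
    """
    buf = bytearray(data)
    rng = random.Random(seed)
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randrange(i + 1)
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def unshuffle_bytes(data: bytes, seed: int) -> bytes:
    """Invert shuffle_bytes by replaying the swaps in reverse order."""
    rng = random.Random(seed)
    swaps = [(i, rng.randrange(i + 1)) for i in range(len(data) - 1, 0, -1)]
    buf = bytearray(data)
    for i, j in reversed(swaps):  # each swap is its own inverse
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)
```

Without the seed, an attacker sees only a permuted byte stream; with it, the exact swap sequence is regenerated and undone.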
Accelerated rare event sampling
NASA Astrophysics Data System (ADS)
Yevick, David
2015-03-01
We suggest a strategy for biased transition matrix Monte-Carlo calculations that both ensures the most rapid coverage of the entire computational window in the macroscopic variable of interest, E, and yields estimates of transition probabilities between states that are equally accurate in low- and high-probability regions. Further, paths between different low-probability regions are sampled at regular intervals. For the case of a single variable E, a random system realization for which the value of E falls in, e.g., the i-th histogram bin is generated. This state is perturbed and the resulting realization is rejected until a transition is observed to a neighboring bin, taken here as i + 1. All accepted and rejected transitions are simultaneously employed to generate the elements of a transition matrix. Subsequently, only a transition to bin i + 2 is accepted, and this procedure is continued until the last of the N bins comprising the computational window is sampled. The procedure is then repeated in the direction of decreasing bin number. The probability distribution of E can then be obtained by, e.g., repeatedly multiplying a random vector by the transition matrix.
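The closing step, obtaining the distribution by repeatedly multiplying a random vector by the transition matrix, is ordinary power iteration. A minimal sketch (the two-state chain below is an invented example, not from the paper):

```python
import random

def stationary_distribution(T, iters=200, seed=0):
    """Estimate the stationary distribution of a column-stochastic
    transition matrix T (T[i][j] = P(j -> i)) by power iteration:
    repeatedly multiply a random probability vector by T and renormalise.
    """
    rng = random.Random(seed)
    n = len(T)
    v = [rng.random() for _ in range(n)]
    s = sum(v)
    v = [x / s for x in v]
    for _ in range(iters):
        v = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        v = [x / s for x in v]
    return v
```

For a chain that stays in state 0 with probability 0.9 and returns from state 1 with probability 0.5, the iteration converges to the stationary vector (5/6, 1/6).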
Creating Ensembles of Decision Trees Through Sampling
Kamath,C; Cantu-Paz, E
2001-07-26
Recent work in classification indicates that significant improvements in accuracy can be obtained by growing an ensemble of classifiers and having them vote for the most popular class. This paper focuses on ensembles of decision trees that are created with a randomized procedure based on sampling. Randomization can be introduced by using random samples of the training data (as in bagging or boosting) and running a conventional tree-building algorithm, or by randomizing the induction algorithm itself. The objective of this paper is to describe the first experiences with a novel randomized tree induction method that uses a sub-sample of instances at a node to determine the split. The empirical results show that ensembles generated using this approach yield results that are competitive in accuracy and superior in computational cost to boosting and bagging.
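The bagging idea described above, train each tree on a random sample of the training data and let the ensemble vote, can be sketched with a toy one-dimensional stump learner (the stump and the data in the test are illustrative, not the paper's randomized induction method):

```python
import random
from collections import Counter

def train_stump(sample):
    """Fit a 1-D decision stump: pick the threshold minimising training error."""
    xs = sorted(set(x for x, _ in sample))
    best = (None, float("inf"))
    for t in xs:
        err = sum((x >= t) != y for x, y in sample)
        if err < best[1]:
            best = (t, err)
    return best[0]

def bagged_predict(data, x, n_trees=25, seed=0):
    """Bagging: train each stump on a bootstrap sample (drawn with
    replacement from the training data), then take a majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_trees):
        sample = [rng.choice(data) for _ in data]
        t = train_stump(sample)
        votes.append(x >= t)
    return Counter(votes).most_common(1)[0][0]
```

Each bootstrap sample perturbs the learned threshold slightly; the vote averages those perturbations away.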
NASA Astrophysics Data System (ADS)
Shivakiran Bhaktha, B. N.; Bachelard, Nicolas; Noblin, Xavier; Sebbah, Patrick
2012-10-01
Random lasing is reported in a dye-circulated structured polymeric microfluidic channel. The role of disorder, which results from limited accuracy of photolithographic process, is demonstrated by the variation of the emission spectrum with local-pump position and by the extreme sensitivity to a local perturbation of the structure. Thresholds comparable to those of conventional microfluidic lasers are achieved, without the hurdle of state-of-the-art cavity fabrication. Potential applications of optofluidic random lasers for on-chip sensors are discussed. Introduction of random lasers in the field of optofluidics is a promising alternative to on-chip laser integration with light and fluidic functionalities.
NASA Astrophysics Data System (ADS)
Barkhofen, Sonja; Bartley, Tim J.; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S.; Jex, Igor; Silberhorn, Christine
2017-01-01
Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ~e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.
Fenimore, E.E.
1980-08-22
A hexagonally shaped quasi-random no-two-holes-touching grid collimator. The quasi-random array grid collimator eliminates contamination from small-angle off-axis rays by using a no-two-holes-touching pattern which simultaneously provides for a self-supporting array, increasing throughput by elimination of a substrate. The present invention also provides maximum throughput using hexagonally shaped holes in a hexagonal lattice pattern for diffraction-limited applications. Mosaicking is also disclosed for reducing fabrication effort.
Randomized Algorithms for Matrices and Data
NASA Astrophysics Data System (ADS)
Mahoney, Michael W.
2012-03-01
This chapter reviews recent work on randomized matrix algorithms. By “randomized matrix algorithms,” we refer to a class of recently developed random sampling and random projection algorithms for ubiquitous linear algebra problems such as least-squares (LS) regression and low-rank matrix approximation. These developments have been driven by applications in large-scale data analysis—applications which place very different demands on matrices than traditional scientific computing applications. Thus, in this review, we will focus on highlighting the simplicity and generality of several core ideas that underlie the usefulness of these randomized algorithms in scientific applications such as genetics (where these algorithms have already been applied) and astronomy (where, hopefully, in part due to this review they will soon be applied). The work we will review here had its origins within theoretical computer science (TCS). An important feature in the use of randomized algorithms in TCS more generally is that one must identify and then algorithmically deal with relevant “nonuniformity structure” in the data. For the randomized matrix algorithms to be reviewed here and that have proven useful recently in numerical linear algebra (NLA) and large-scale data analysis applications, the relevant nonuniformity structure is defined by the so-called statistical leverage scores. Defined more precisely below, these leverage scores are basically the diagonal elements of the projection matrix onto the dominant part of the spectrum of the input matrix. As such, they have a long history in statistical data analysis, where they have been used for outlier detection in regression diagnostics. More generally, these scores often have a very natural interpretation in terms of the data and processes generating the data. For example, they can be interpreted in terms of the leverage or influence that a given data point has on, say, the best low-rank matrix approximation; and this
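The statistical leverage scores described above are the diagonal entries of the projection (hat) matrix H = A(AᵀA)⁻¹Aᵀ. For a tall matrix with two columns they can be computed with an explicit 2x2 inverse; a dependency-free sketch (the example matrix in the test is invented):

```python
def leverage_scores(A):
    """Statistical leverage scores of an n-by-2 matrix A: the diagonal
    of the hat matrix H = A (A^T A)^{-1} A^T, computed here with an
    explicit 2x2 inverse to keep the sketch dependency-free."""
    s00 = sum(r[0] * r[0] for r in A)
    s01 = sum(r[0] * r[1] for r in A)
    s11 = sum(r[1] * r[1] for r in A)
    det = s00 * s11 - s01 * s01
    inv = [[s11 / det, -s01 / det], [-s01 / det, s00 / det]]
    scores = []
    for r in A:
        t0 = inv[0][0] * r[0] + inv[0][1] * r[1]
        t1 = inv[1][0] * r[0] + inv[1][1] * r[1]
        scores.append(r[0] * t0 + r[1] * t1)  # h_i = a_i^T (A^T A)^{-1} a_i
    return scores
```

The scores sum to the rank of A (here 2), and an outlying row, exactly the kind of point regression diagnostics flag, receives a score close to 1.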
An optical ultrafast random bit generator
NASA Astrophysics Data System (ADS)
Kanter, Ido; Aviad, Yaara; Reidler, Igor; Cohen, Elad; Rosenbluh, Michael
2010-01-01
The generation of random bit sequences based on non-deterministic physical mechanisms is of paramount importance for cryptography and secure communications. High data rates also require extremely fast generation rates and robustness to external perturbations. Physical generators based on stochastic noise sources have been limited in bandwidth to ~100 Mbit s-1 generation rates. We present a physical random bit generator, based on a chaotic semiconductor laser, having time-delayed self-feedback, which operates reliably at rates up to 300 Gbit s-1. The method uses a high derivative of the digitized chaotic laser intensity and generates the random sequence by retaining a number of the least significant bits of the high derivative value. The method is insensitive to laser operational parameters and eliminates the necessity for all external constraints such as incommensurate sampling rates and laser external cavity round trip time. The randomness of long bit strings is verified by standard statistical tests.
Two random repeat recall methods to assess alcohol use.
Midanik, L T
1993-01-01
Two random repeat recall methods were compared with a summary measure to assess alcohol use. Subjects (n = 142) were randomly assigned to one of two groups; they were called either on 14 random days during three 30-day waves and asked about drinking yesterday, or on 2 random days during each wave and asked about drinking in the past week. Follow-up telephone interviews obtained summary measures for each wave. Random repeat methods generally obtained higher estimates. However, the high dropout rate makes questionable the feasibility of using this approach with general population samples. PMID:8498631
Quantum Random Number Generation Using a Quanta Image Sensor.
Amri, Emna; Felk, Yacine; Stucki, Damien; Ma, Jiaju; Fossum, Eric R
2016-06-29
A new quantum random number generation method is proposed. The method is based on the randomness of the photon emission process and the single photon counting capability of the Quanta Image Sensor (QIS). It has the potential to generate high-quality random numbers with remarkable data output rate. In this paper, the principle of photon statistics and theory of entropy are discussed. Sample data were collected with QIS jot device, and its randomness quality was analyzed. The randomness assessment method and results are discussed.
Babin, S. A.; Podivilov, E. V.; El-Taher, A. E.; Harper, P.; Turitsyn, S. K.
2011-08-15
An optical fiber is treated as a natural one-dimensional random system where lasing is possible due to a combination of Rayleigh scattering by refractive index inhomogeneities and distributed amplification through the Raman effect. We present such a random fiber laser that is tunable over a broad wavelength range with uniquely flat output power and high efficiency, which outperforms traditional lasers of the same category. Outstanding characteristics defined by deep underlying physics and the simplicity of the scheme make the demonstrated laser a very attractive light source both for fundamental science and practical applications.
Biological Sampling Variability Study
Amidan, Brett G.; Hutchison, Janine R.
2016-11-08
There are many sources of variability that exist in the sample collection and analysis process. This paper addresses many, but not all, sources of variability. The main focus of this paper was to better understand and estimate variability due to differences between samplers. Variability between days was also studied, as well as random variability within each sampler. Experiments were performed using multiple surface materials (ceramic and stainless steel), multiple contaminant concentrations (10 spores and 100 spores), and with and without the presence of interfering material. All testing was done with sponge sticks using 10-inch by 10-inch coupons. Bacillus atrophaeus was used as the BA surrogate. Spores were deposited using wet deposition. Grime was coated on the coupons that were to include the interfering material (Section 3.3). Samples were prepared and analyzed at PNNL using CDC protocol (Section 3.4) and then cultured and counted. Five samplers were trained so that samples were taken using the same protocol. Each sampler randomly sampled eight coupons each day, four coupons with 10 spores deposited and four coupons with 100 spores deposited. Each day consisted of one material being tested. The clean samples (no interfering materials) were run first, followed by the dirty samples (coated with interfering material). There was a significant difference in recovery efficiency between the coupons with 10 spores deposited (mean of 48.9%) and those with 100 spores deposited (mean of 59.8%). There was no general significant difference between the clean and dirty (containing interfering material) coupons or between the two surface materials; however, there was a significant interaction between concentration amount and presence of interfering material. The recovery efficiency was close to the same for coupons with 10 spores deposited, but for the coupons with 100 spores deposited, the recovery efficiency for the dirty samples was significantly larger (65
Sampling properties of directed networks
NASA Astrophysics Data System (ADS)
Son, S.-W.; Christensen, C.; Bizhani, G.; Foster, D. V.; Grassberger, P.; Paczuski, M.
2012-10-01
For many real-world networks only a small “sampled” version of the original network may be investigated; those results are then used to draw conclusions about the actual system. Variants of breadth-first search (BFS) sampling, which are based on epidemic processes, are widely used. Although it is well established that BFS sampling fails, in most cases, to capture the IN component(s) of directed networks, a description of the effects of BFS sampling on other topological properties is all but absent from the literature. To systematically study the effects of sampling biases on directed networks, we compare BFS sampling to random sampling on complete large-scale directed networks. We present new results and a thorough analysis of the topological properties of seven complete directed networks (prior to sampling), including three versions of Wikipedia, three different sources of sampled World Wide Web data, and an Internet-based social network. We detail the differences that sampling method and coverage can make to the structural properties of sampled versions of these seven networks. Most notably, we find that sampling method and coverage affect both the bow-tie structure and the number and structure of strongly connected components in sampled networks. In addition, at a low sampling coverage (i.e., less than 40%), the values of average degree, variance of out-degree, degree autocorrelation, and link reciprocity are overestimated by 30% or more in BFS-sampled networks and only attain values within 10% of the corresponding values in the complete networks when sampling coverage is in excess of 65%. These results may cause us to rethink what we know about the structure, function, and evolution of real-world directed networks.
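The key structural point, that BFS from a start vertex can only ever reach the OUT direction of a directed graph and so misses the IN component, is easy to demonstrate (the adjacency list in the test is an invented toy graph):

```python
from collections import deque

def bfs_sample(adj, start, max_nodes):
    """Sample a directed graph by breadth-first search from `start`,
    stopping once `max_nodes` vertices have been reached. Only vertices
    reachable from the start are ever visited, which is why BFS
    sampling misses the IN component of directed networks."""
    seen = {start}
    queue = deque([start])
    edges = []
    while queue and len(seen) < max_nodes:
        u = queue.popleft()
        for v in adj.get(u, []):
            edges.append((u, v))  # record every explored arc
            if v not in seen and len(seen) < max_nodes:
                seen.add(v)
                queue.append(v)
    return seen, edges
```

A vertex that only points into the sampled region (here vertex 4, with the arc 4 -> 0) is never discovered, however large the coverage budget.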
Randomness Of Amoeba Movements
NASA Astrophysics Data System (ADS)
Hashiguchi, S.; Khadijah, Siti; Kuwajima, T.; Ohki, M.; Tacano, M.; Sikula, J.
2005-11-01
Movements of amoebas were automatically traced using the difference between two successive frames of a microscopic movie. It was observed that the movements were almost random, in that the directions and magnitudes of two successive steps are uncorrelated, and that the distance from the origin was proportional to the square root of the step number.
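The square-root-of-step-number scaling reported above is the signature of an uncorrelated walk and is easy to reproduce in simulation (the isotropic unit-step walk here is a generic model, not the amoeba data):

```python
import math
import random

def rms_displacement(steps, walkers=2000, seed=0):
    """Simulate 2-D unit-step random walks and return the RMS distance
    from the origin; for uncorrelated steps this grows like sqrt(steps)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x = y = 0.0
        for _ in range(steps):
            a = rng.uniform(0.0, 2.0 * math.pi)  # isotropic direction
            x += math.cos(a)
            y += math.sin(a)
        total += x * x + y * y
    return math.sqrt(total / walkers)
```

Quadrupling the number of steps roughly doubles the RMS distance, exactly the sqrt scaling observed for the amoebas.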
Feng Haidong; Siegel, Warren
2006-08-15
We propose some new simplifying ingredients for Feynman diagrams that seem necessary for random lattice formulations of superstrings. In particular, half the fermionic variables appear only in particle loops (similarly to loop momenta), reducing the supersymmetry of the constituents of the type IIB superstring to N=1, as expected from their interpretation in the 1/N expansion as super Yang-Mills.
ERIC Educational Resources Information Center
Griffiths, Martin
2011-01-01
One of the author's undergraduate students recently asked him whether it was possible to generate a random positive integer. After some thought, the author realised that there were plenty of interesting mathematical ideas inherent in her question. So much so in fact, that the author decided to organise a workshop, open both to undergraduates and…
Randomization Does Not Help Much, Comparability Does
Saint-Mont, Uwe
2015-01-01
According to R.A. Fisher, randomization “relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed.” Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable. PMID:26193621
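The paper's claim that random allocation typically leaves small groups imbalanced can be checked directly by simulation (the 0.5-standard-deviation cutoff below is an arbitrary illustrative threshold, not one from the paper):

```python
import random

def covariate_gap(n_per_group, trials=2000, seed=0):
    """Randomly allocate subjects with a standard normal covariate into
    two groups of size n and return the fraction of trials in which the
    group means differ by more than 0.5 standard deviations."""
    rng = random.Random(seed)
    big_gap = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        if abs(sum(a) / n_per_group - sum(b) / n_per_group) > 0.5:
            big_gap += 1
    return big_gap / trials
```

With 10 subjects per arm a sizeable imbalance occurs in roughly a quarter of randomizations; with 100 per arm it is rare, matching the paper's small-sample argument.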
ERIC Educational Resources Information Center
Ben-Ari, Morechai
2004-01-01
The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…
[Variance estimation considering multistage sampling design in multistage complex sample analysis].
Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng
2016-03-01
Multistage sampling is a frequently-used method in random sampling surveys in public health. Clustering or dependence between observations often exists in samples generated by multistage sampling, often called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, with increased sampling fraction of the primary sampling unit, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may, therefore, lead to invalid variance estimation. This paper summarizes a method of variance estimation that takes the multistage sampling design into consideration. Its performance is compared with UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by the method considering the multistage sampling design.
Bootstrapped models for intrinsic random functions
Campbell, K.
1988-08-01
Use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process. The fact that this function has to be estimated from data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the bootstrap in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as their kriging variance, provide a reasonable picture of variability introduced by imperfect estimation of the generalized covariance function.
Bootstrapped models for intrinsic random functions
Campbell, K.
1987-01-01
The use of intrinsic random function stochastic models as a basis for estimation in geostatistical work requires the identification of the generalized covariance function of the underlying process, and the fact that this function has to be estimated from the data introduces an additional source of error into predictions based on the model. This paper develops the sample reuse procedure called the "bootstrap" in the context of intrinsic random functions to obtain realistic estimates of these errors. Simulation results support the conclusion that bootstrap distributions of functionals of the process, as well as of their "kriging variance," provide a reasonable picture of the variability introduced by imperfect estimation of the generalized covariance function.
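The sample reuse procedure invoked in both abstracts, re-estimating a statistic on samples drawn with replacement from the data, can be sketched generically (this is the plain bootstrap, not the intrinsic-random-function machinery of the papers):

```python
import random

def bootstrap_std_error(data, stat, resamples=2000, seed=0):
    """Bootstrap: estimate the sampling variability of `stat` by
    recomputing it on resamples drawn with replacement from the data."""
    rng = random.Random(seed)
    stats = []
    n = len(data)
    for _ in range(resamples):
        sample = [rng.choice(data) for _ in range(n)]
        stats.append(stat(sample))
    mean = sum(stats) / resamples
    var = sum((s - mean) ** 2 for s in stats) / (resamples - 1)
    return var ** 0.5
```

For the sample mean the bootstrap standard error should agree with the classical formula s/sqrt(n), which makes a convenient sanity check.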
Summer School Effects in a Randomized Field Trial
ERIC Educational Resources Information Center
Zvoch, Keith; Stevens, Joseph J.
2013-01-01
This field-based randomized trial examined the effect of assignment to and participation in summer school for two moderately at-risk samples of struggling readers. Application of multiple regression models to difference scores capturing the change in summer reading fluency revealed that kindergarten students randomly assigned to summer school…
Power Analysis of Cutoff-Based Randomized Clinical Trials.
ERIC Educational Resources Information Center
Cappelleri, Joseph C.; And Others
1994-01-01
A statistical power algorithm based on the Fisher Z method is developed for cutoff-based randomized clinical trials and for the single cutoff-point (regression-discontinuity) design that has no randomization. This article quantifies power and sample size estimates for various levels of power and cutoff-based assignment. (Author/SLD)
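The Fisher Z method underlying the power algorithm rests on the fact that z = atanh(r) is approximately normal with standard error 1/sqrt(n - 3). A sketch of the resulting sample-size formula for detecting a correlation (this is the textbook application, not the article's cutoff-based algorithm):

```python
import math
from statistics import NormalDist

def sample_size_for_correlation(rho, alpha=0.05, power=0.80):
    """Sample size needed to detect a correlation of size rho with a
    two-sided test, via the Fisher Z transform: z = atanh(r) is roughly
    normal with standard error 1/sqrt(n - 3), so
    n = ((z_{1-alpha/2} + z_{power}) / atanh(rho))^2 + 3."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    n = ((z_a + z_b) / math.atanh(rho)) ** 2 + 3.0
    return math.ceil(n)
```

Smaller target correlations require sharply larger samples, since the required n grows like 1/atanh(rho)².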
Creating ensembles of decision trees through sampling
Kamath, Chandrika; Cantu-Paz, Erick
2005-08-30
A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
NASA Astrophysics Data System (ADS)
Edelman, Alan; Rao, N. Raj
Random matrix theory is now a big subject with applications in many disciplines of science, engineering and finance. This article is a survey specifically oriented towards the needs and interests of a numerical analyst. This survey includes some original material not found anywhere else. We include the important mathematics which is a very modern development, as well as the computational software that is transforming the theory into useful practice.
Zhang, Duan Z.; Padrino, Juan C.
2017-06-01
The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the required time to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^{-1/4} rather than xt^{-1/2} as in the traditional theory. We found this early-time similarity can be explained by random walk theory through the network.
Random number generation from spontaneous Raman scattering
NASA Astrophysics Data System (ADS)
Collins, M. J.; Clark, A. S.; Xiong, C.; Mägi, E.; Steel, M. J.; Eggleton, B. J.
2015-10-01
We investigate the generation of random numbers via the quantum process of spontaneous Raman scattering. Spontaneous Raman photons are produced by illuminating a highly nonlinear chalcogenide glass (As2S3) fiber with a CW laser at a power well below the stimulated Raman threshold. Single Raman photons are collected and separated into two discrete wavelength detuning bins of equal scattering probability. The sequence of photon detection clicks is converted into a random bit stream. Postprocessing is applied to remove detector bias, resulting in a final bit rate of ~650 kb/s. The collected random bit-sequences pass the NIST statistical test suite for one hundred 1 Mb samples, with the significance level set to α = 0.01. The fiber is stable, robust and the high nonlinearity (compared to silica) allows for a short fiber length and low pump power favourable for real world application.
On the pertinence to Physics of random walks induced by random dynamical systems: a survey
NASA Astrophysics Data System (ADS)
Petritis, Dimitri
2016-08-01
Let X be an abstract space and A a denumerable (finite or infinite) alphabet. Suppose that (p_a)_{a∈A} is a family of functions p_a : X → [0, 1] such that for all x ∈ X we have Σ_{a∈A} p_a(x) = 1, and that (S_a)_{a∈A} is a family of transformations S_a : X → X. The pair ((S_a)_a, (p_a)_a) is termed an iterated function system with place-dependent probabilities. Such systems can be thought of as generalisations of random dynamical systems. As a matter of fact, suppose we start from a given x ∈ X; we then pick randomly, with probability p_a(x), the transformation S_a and evolve to S_a(x). We are interested in the behaviour of the system when the iteration continues indefinitely. Random walks of the above type are omnipresent in both classical and quantum Physics. To give a small sample of occurrences we mention: random walks on the affine group, random walks on Penrose lattices, random walks on partially directed lattices, evolution of density matrices induced by repeated quantum measurements, quantum channels, quantum random walks, etc. In this article, we review some basic properties of such systems and provide a pathfinder through the extensive bibliography (on both the mathematical and physical sides) where the main results were originally published.
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
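A linear congruential generator is a one-line recurrence, x_{n+1} = (a·x_n + c) mod m. The sketch below uses the classic Numerical Recipes constants, not the parameters selected by the RANDOM program itself:

```python
class LCG:
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m."""
    def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_float(self):
        return self.next_int() / self.m  # uniform in [0, 1)

gen = LCG(seed=12345)
sample = [gen.next_float() for _ in range(10_000)]
print(sum(sample) / len(sample))  # sample mean near 0.5
```

Choosing (a, c, m) well is exactly what tools like RANCYCLE and ARITH assist with: poor parameters give short cycles or strong lattice structure in the output.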
Certified randomness in quantum physics.
Acín, Antonio; Masanes, Lluis
2016-12-07
The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.
Generation of kth-order random toposequences
NASA Astrophysics Data System (ADS)
Odgers, Nathan P.; McBratney, Alex. B.; Minasny, Budiman
2008-05-01
The model presented in this paper derives toposequences from a digital elevation model (DEM). It is written in ArcInfo Macro Language (AML). The toposequences are called kth-order random toposequences because, from a randomly selected seed point, they take a random path uphill to the top of a hill and downhill to a stream or valley bottom, and they are located in a streamshed of order k according to a particular stream-ordering system. We define a kth-order streamshed as the area of land that drains directly to a stream segment of stream order k. The model attempts to optimise the spatial configuration of a set of derived toposequences iteratively, using simulated annealing to maximise the sum of distances between the toposequence hilltops in the set. The user is able to select the order, k, of the derived toposequences. Toposequences are useful for determining soil sampling locations for use in collecting soil data for digital soil mapping applications. Sampling locations can be allocated according to equal-elevation or equal-distance intervals along the length of the toposequence, for example. We demonstrate the use of this model for a study area in the Hunter Valley of New South Wales, Australia. Of the 64 toposequences derived, 32 were first-order random toposequences according to Strahler's stream-ordering system, and 32 were second-order random toposequences. The model that we present in this paper is an efficient method for sampling soil along toposequences. The soils along a toposequence are related to each other by the topography they are found in, so soil data collected by this method are useful for establishing soil-landscape rules for the preparation of digital soil maps.
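The uphill/downhill random path at the core of the model can be sketched on a toy gridded DEM. Stream ordering and the simulated-annealing optimisation are omitted, and the synthetic terrain and seed point below are invented for illustration:

```python
import random

def neighbours(r, c, n):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr or dc) and 0 <= r + dr < n and 0 <= c + dc < n:
                yield r + dr, c + dc

def walk(dem, seed, rng, uphill=True):
    """Random monotone path: step to a randomly chosen strictly higher
    (or strictly lower) neighbour until none exists."""
    path = [seed]
    r, c = seed
    n = len(dem)
    while True:
        if uphill:
            cand = [(i, j) for i, j in neighbours(r, c, n) if dem[i][j] > dem[r][c]]
        else:
            cand = [(i, j) for i, j in neighbours(r, c, n) if dem[i][j] < dem[r][c]]
        if not cand:
            return path  # local maximum (or minimum) reached
        r, c = rng.choice(cand)
        path.append((r, c))

n = 20
rng = random.Random(0)
# synthetic bowl-shaped DEM with noise (hypothetical terrain)
dem = [[(i - n / 2) ** 2 + (j - n / 2) ** 2 + rng.random() for j in range(n)]
       for i in range(n)]
seed = (5, 5)
# full toposequence: hilltop -> seed -> valley bottom
topo = walk(dem, seed, rng, uphill=True)[::-1] + walk(dem, seed, rng, uphill=False)[1:]
elev = [dem[r][c] for r, c in topo]
```

Each leg strictly increases (or decreases) elevation at every step, so the walk always terminates, and elevation is monotone along the assembled toposequence, which is what makes equal-elevation sampling along it well defined.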
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
Random numbers spring from alpha decay
Frigerio, N.A.; Sanathanan, L.P.; Morley, M.; Clark, N.A.; Tyler, S.A.
1980-05-01
Congruential random number generators, which are widely used in Monte Carlo simulations, are deficient in that the numbers they generate are concentrated in a relatively small number of hyperplanes. While this deficiency may not be a limitation in small Monte Carlo studies involving a few variables, it introduces a significant bias in large simulations requiring high resolution. This bias was recognized and assessed during preparations for an accident analysis study of nuclear power plants. This report describes a random number device based on the radioactive decay of alpha particles from a ²³⁵U source in a high-resolution gas proportional counter. The signals were fed to a 4096-channel analyzer, and for each channel the frequency of signals registered in a 20,000-microsecond interval was recorded. The parity bits of these frequency counts (0 for an even count and 1 for an odd count) were then assembled in sequence to form 31-bit binary random numbers and transcribed to a magnetic tape. This cycle was repeated as many times as were necessary to create 3 million random numbers. The frequency distribution of counts from the device conforms to the Brockwell-Moyal distribution, which takes into account the dead time of the counter (both the dead time and the decay constant of the underlying Poisson process were estimated). Analysis of the count data and tests of randomness on a sample set of the 31-bit binary numbers indicate that this random number device is a highly reliable source of truly random numbers. Its use is therefore recommended in Monte Carlo simulations for which congruential pseudorandom number generators are found to be inadequate. 6 figures, 5 tables.
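The parity-bit assembly step is easy to sketch; the counts below are ordinary pseudorandom stand-ins for the alpha-decay channel counts:

```python
import random

def counts_to_numbers(counts, width=31):
    """Keep each count's parity bit (0 even, 1 odd) and pack
    consecutive groups of `width` bits into integers."""
    bits = [c & 1 for c in counts]
    numbers = []
    for i in range(0, len(bits) - width + 1, width):
        word = 0
        for b in bits[i:i + width]:
            word = (word << 1) | b
        numbers.append(word)
    return numbers

rng = random.Random(7)
counts = [rng.randint(0, 1000) for _ in range(31 * 100)]  # simulated channel counts
nums = counts_to_numbers(counts)
print(len(nums))  # one hundred 31-bit numbers
```

Taking only the parity of each count discards almost all of the count's magnitude but keeps a bit that is very nearly unbiased even when the underlying Poisson rate is not.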
Index statistical properties of sparse random graphs
NASA Astrophysics Data System (ADS)
Metz, F. L.; Stariolo, Daniel A.
2015-10-01
Using the replica method, we develop an analytical approach to compute the characteristic function for the probability P_N(K, λ) that a large N × N adjacency matrix of a sparse random graph has K eigenvalues below a threshold λ. The method allows one to determine, in principle, all moments of P_N(K, λ), from which the typical sample-to-sample fluctuations can be fully characterized. For random graph models with localized eigenvectors, we show that the index variance scales linearly with N ≫ 1 for |λ| > 0, with a model-dependent prefactor that can be exactly calculated. Explicit results are discussed for Erdős-Rényi and regular random graphs, both exhibiting a prefactor with a nonmonotonic behavior as a function of λ. These results contrast with rotationally invariant random matrices, where the index variance scales only as ln N, with a universal prefactor that is independent of λ. Numerical diagonalization results confirm the exactness of our approach and, in addition, strongly support the Gaussian nature of the index fluctuations.
NASA Technical Reports Server (NTRS)
Boyce, Lola; Lovelace, Thomas B.
1989-01-01
FORTRAN programs RANDOM3 and RANDOM4 are documented in the form of a user's manual. Both programs are based on fatigue strength reduction, using a probabilistic constitutive model. The programs predict the random lifetime of an engine component to reach a given fatigue strength. The theoretical backgrounds, input data instructions, and sample problems illustrating the use of the programs are included.
On grey levels in random CAPTCHA generation
NASA Astrophysics Data System (ADS)
Newton, Fraser; Kouritzin, Michael A.
2011-06-01
A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, they are designed to be easy for humans but difficult for computer programs to pass in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms, online auction sites, and preventing brute force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: An initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly-readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA will be evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
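The noise-evolution step can be illustrated with a minimal Gibbs sampler for a correlated binary field on a torus. The coupling form and β value are illustrative only; the actual CAPTCHA generator layers character templates and estimated grey-level parameters on top of a step like this:

```python
import math
import random

def gibbs_sweep(field, beta, rng):
    """One Gibbs sweep over a binary field on an n x n torus with
    ferromagnetic nearest-neighbour coupling of strength beta."""
    n = len(field)
    for r in range(n):
        for c in range(n):
            s = sum(field[(r + dr) % n][(c + dc) % n]
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
            # conditional probability that this pixel is 1 given neighbours
            p1 = 1.0 / (1.0 + math.exp(-beta * (2 * s - 4)))
            field[r][c] = 1 if rng.random() < p1 else 0

rng = random.Random(3)
n = 32
field = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]  # white-noise start
for _ in range(20):
    gibbs_sweep(field, beta=0.8, rng=rng)

# with positive coupling, neighbouring pixels agree more often than chance
agree = sum(field[r][c] == field[(r + 1) % n][c] for r in range(n) for c in range(n))
print(agree / n ** 2)
```

Evolving white noise this way produces the spatially correlated texture that makes the resulting image hard to segment automatically yet still readable to humans.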
Universal microbial diagnostics using random DNA probes
Aghazadeh, Amirali; Lin, Adam Y.; Sheikh, Mona A.; Chen, Allen L.; Atkins, Lisa M.; Johnson, Coreen L.; Petrosino, Joseph F.; Drezek, Rebekah A.; Baraniuk, Richard G.
2016-01-01
Early identification of pathogens is essential for limiting development of therapy-resistant pathogens and mitigating infectious disease outbreaks. Most bacterial detection schemes use target-specific probes to differentiate pathogen species, creating time and cost inefficiencies in identifying newly discovered organisms. We present a novel universal microbial diagnostics (UMD) platform to screen for microbial organisms in an infectious sample, using a small number of random DNA probes that are agnostic to the target DNA sequences. Our platform leverages the theory of sparse signal recovery (compressive sensing) to identify the composition of a microbial sample that potentially contains novel or mutant species. We validated the UMD platform in vitro using five random probes to recover 11 pathogenic bacteria. We further demonstrated in silico that UMD can be generalized to screen for common human pathogens in different taxonomy levels. UMD’s unorthodox sensing approach opens the door to more efficient and universal molecular diagnostics. PMID:27704040
Random numbers from vacuum fluctuations
NASA Astrophysics Data System (ADS)
Shi, Yicheng; Chng, Brenda; Kurtsiefer, Christian
2016-07-01
We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
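A linear-feedback-shift-register extraction step can be sketched by XOR-ing the raw stream against a maximal-length LFSR keystream. The 16-bit register and tap polynomial below are standard textbook choices, not the ones used in the paper:

```python
import random

def lfsr_keystream(n, taps=(16, 14, 13, 11), state=0xACE1):
    """Bits from a maximal-length 16-bit Fibonacci LFSR
    (feedback polynomial x^16 + x^14 + x^13 + x^11 + 1)."""
    out = []
    for _ in range(n):
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        out.append(fb)
    return out

# XOR a biased raw stream (60% ones) against the keystream.
rng = random.Random(5)
raw = [1 if rng.random() < 0.6 else 0 for _ in range(50_000)]
white = [b ^ k for b, k in zip(raw, lfsr_keystream(len(raw)))]
print(sum(white) / len(white))  # mean pulled toward 0.5
```

A deterministic keystream whitens the statistics but adds no entropy of its own; the point of the scheme above is that the vacuum fluctuations supply the entropy and the LFSR stage merely extracts uniform bits from it at speed.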
NASA Astrophysics Data System (ADS)
Abramson, Nils H.
2010-12-01
Information is carried by matter or by energy, and thus Einstein stated that "no information can travel faster than light." He was also very critical of the "spooky action at a distance" described in quantum physics. However, many verified experiments have proven that the "spooky actions" not only work at a distance but also travel at a velocity faster than light, probably at infinite velocity. Examples are Young's fringes at low light levels, or entanglements. My explanation is that this information is without energy. In the following I will refer to this spooky information as exformation, where "ex-" refers to existence; the information is not transported in any way, it simply exists. Thus Einstein might have been wrong when he stated that no information can travel faster than light. But he was right in that no detectable information can travel faster than light. Phenomena connected to entanglement appear at first to be exceptions, but in those cases the information cannot be reconstructed until energy is later sent in the form of correlation using ordinary information at the velocity of light. In entanglement we see that even if the exformation cannot be detected directly because of its lack of energy, it can still influence what happens at random, because in quantum physics there is by definition no energy difference between two states that happen randomly.
NASA Astrophysics Data System (ADS)
Kalay, Z.; Ben-Naim, E.
2015-01-01
We study fragmentation of a random recursive tree into a forest by repeated removal of nodes. The initial tree consists of N nodes and is generated by sequential addition of nodes, with each new node attaching to a randomly selected existing node. As nodes are removed from the tree, one at a time, the tree dissolves into an ensemble of separate trees, namely, a forest. We study statistical properties of trees and nodes in this heterogeneous forest, and find that the fraction of remaining nodes m characterizes the system in the limit N → ∞. We obtain analytically the size density φ_s of trees of size s. The size density has a power-law tail φ_s ~ s^{-α} with exponent α = 1 + 1/m. Therefore, the tail becomes steeper as further nodes are removed, and the fragmentation process is unusual in that the exponent α increases continuously with time. We also extend our analysis to the case where nodes are added as well as removed, and obtain the asymptotic size density for growing trees.
NASA Astrophysics Data System (ADS)
Kalay, Ziya; Ben-Naim, Eli
2015-03-01
We investigate the fragmentation of a random recursive tree by repeated removal of nodes, resulting in a forest of disjoint trees. The initial tree is generated by sequentially attaching new nodes to randomly chosen existing nodes until the tree contains N nodes. As nodes are removed, one at a time, the tree dissolves into an ensemble of separate trees, namely a forest. We study the statistical properties of trees and nodes in this heterogeneous forest. In the limit N → ∞, we find that the system is characterized by a single parameter: the fraction of remaining nodes m. We obtain analytically the size density φ_s of trees of size s, which has a power-law tail φ_s ~ s^{-α}, with exponent α = 1 + 1/m. Therefore, the tail becomes steeper as further nodes are removed, producing an unusual scaling exponent that increases continuously with time. Furthermore, we investigate the fragment size distribution in a growing tree, where nodes are added as well as removed, and find that the distribution for this case is much narrower.
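The growth and dissolution process simulates directly; a union-find pass over the surviving parent-child edges recovers the forest (a sketch, not the authors' code):

```python
import random

def recursive_tree(N, rng):
    """Random recursive tree: node v attaches to a uniform node in {0..v-1}."""
    parent = {0: None}
    for v in range(1, N):
        parent[v] = rng.randrange(v)
    return parent

def forest_sizes(parent, removed):
    """Sizes of the trees left after deleting the nodes in `removed`."""
    alive = [v for v in parent if v not in removed]
    root = {v: v for v in alive}
    def find(v):
        while root[v] != v:
            root[v] = root[root[v]]  # path compression
            v = root[v]
        return v
    for v in alive:
        p = parent[v]
        if p is not None and p not in removed:
            root[find(v)] = find(p)  # union surviving edge
    sizes = {}
    for v in alive:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)

rng = random.Random(2)
N = 10_000
parent = recursive_tree(N, rng)
removed = set(rng.sample(range(N), N // 2))  # fraction of remaining nodes m = 0.5
sizes = forest_sizes(parent, removed)
print(len(sizes), sizes[0])  # many small fragments, a few large ones
```

With m = 0.5 the predicted tail exponent is α = 1 + 1/m = 3; a histogram of `sizes` on log-log axes is the natural check of the analytical size density.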
Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
NASA Astrophysics Data System (ADS)
Senno, Gabriel; Bendersky, Ariel; Figueira, Santiago
2016-07-01
The concepts of randomness and non-locality are intimately intertwined: outcomes of randomly chosen measurements on entangled systems exhibiting non-local correlations are, if we preclude instantaneous influence between distant measurement choices and outcomes, random. In this paper, we survey some recent advances in the knowledge of the interplay between these two important notions from a quantum information science perspective.
Random Numbers and Quantum Computers
ERIC Educational Resources Information Center
McCartney, Mark; Glass, David
2002-01-01
The topic of random numbers is investigated in such a way as to illustrate links between mathematics, physics and computer science. First, the generation of random numbers by a classical computer using the linear congruential generator and logistic map is considered. It is noted that these procedures yield only pseudo-random numbers since…
Wireless Network Security Using Randomness
2012-06-19
The present invention provides systems and methods for securing communications in a wireless network by utilizing the inherent randomness of propagation errors to enable legitimate users to dynamically… Subject terms: patent, security, wireless networks, randomness. Inventors: Sheng Xiao, Weibo Gong.
Reidys, C.M.
1996-06-01
A mapping in random-structures is defined on the vertices of a generalized hypercube Q_α^n. A random-structure consists of (1) a random contact graph and (2) a family of relations imposed on adjacent vertices. The vertex set of a random contact graph is the set of all coordinates of a vertex P ∈ Q_α^n. Its edge set is the union of the edge sets of two random graphs: the first is a random 1-regular graph on 2m vertices (coordinates), and the second is a random graph G_p with p = c_2/n on all n vertices (coordinates). The structure of the random contact graphs is investigated, and it is shown that for certain values of m and c_2 the mapping in random-structures allows searching through the set of random-structures. This is applied to mappings in RNA secondary structures. The results on random-structures might also be helpful for designing 3D-folding algorithms for RNA.
NASA Astrophysics Data System (ADS)
Kraynik, Andrew M.; Reinelt, Douglas A.; van Swol, Frank
2004-11-01
The Surface Evolver was used to compute the equilibrium microstructure of dry soap foams with random structure and a wide range of cell-size distributions. Topological and geometric properties of foams and individual cells were evaluated. The theory for isotropic Plateau polyhedra describes the dependence of cell geometric properties on their volume and number of faces. The surface area of all cells is about 10% greater than a sphere of equal volume; this leads to a simple but accurate theory for the surface free energy density of foam. A novel parameter based on the surface-volume mean bubble radius R32 is used to characterize foam polydispersity. The foam energy, total cell edge length, and average number of faces per cell all decrease with increasing polydispersity. Pentagonal faces are the most common in monodisperse foam but quadrilaterals take over in highly polydisperse structures.
Investments in random environments
NASA Astrophysics Data System (ADS)
Navarro-Barrientos, Jesús Emeterio; Cantero-Álvarez, Rubén; Matias Rodrigues, João F.; Schweitzer, Frank
2008-03-01
We present analytical investigations of a multiplicative stochastic process that models a simple investor dynamics in a random environment. The dynamics of the investor's budget, x(t), depends on the stochasticity of the return on investment, r(t), for which different model assumptions are discussed. The fat-tail distribution of the budget is investigated and compared with theoretical predictions. We are mainly interested in the most probable value x_mp of the budget, which reaches a constant value over time. Based on an analytical investigation of the dynamics, we are able to predict the stationary value x_mp^stat. We find a scaling law that relates the most probable value to the characteristic parameters describing the stochastic process. Our analytical results are confirmed by stochastic computer simulations that show very good agreement with the predictions.
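A minimal simulation of such a process, x(t+1) = r(t)·x(t) + a, with a lognormal return model chosen purely for illustration (the paper discusses several return models):

```python
import math
import random

rng = random.Random(11)
a = 1.0          # constant income term; keeps the budget away from zero
x = 1.0
samples = []
for t in range(100_000):
    r = math.exp(rng.gauss(-0.05, 0.3))  # random return with E[log r] < 0
    x = r * x + a                        # multiplicative dynamics plus income
    if t > 1_000:                        # discard the transient
        samples.append(x)

samples.sort()
print(samples[len(samples) // 2])  # the typical budget settles to a constant level
```

With E[log r] < 0 the multiplicative part alone would collapse to zero; the additive term a repels the process from zero and is what generates the fat-tailed stationary budget distribution.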
Generalized random sequential adsorption
NASA Astrophysics Data System (ADS)
Tarjus, G.; Schaaf, P.; Talbot, J.
1990-12-01
Adsorption of hard spherical particles onto a flat uniform surface is analyzed by using generalized random sequential adsorption (RSA) models. These models are defined by relaxing the condition of immobility present in the usual RSA rules to allow for desorption or surface diffusion. Contrary to the simple RSA case, generalized RSA processes are no longer irreversible, and the system formed by the adsorbed particles on the surface may reach an equilibrium state. We show by using a distribution function approach that the kinetics of such processes can be described by means of an exact infinite hierarchy of equations reminiscent of the Kirkwood-Salsburg hierarchy for systems at equilibrium. We illustrate the way in which the systems produced by adsorption/desorption and by adsorption/diffusion evolve between the two limits represented by "simple RSA" and "equilibrium" by considering approximate solutions in terms of truncated density expansions.
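The irreversible "simple RSA" limit is easy to simulate in one dimension, where it becomes Rényi's car-parking problem with jamming coverage ≈ 0.7476:

```python
import random

def rsa_1d(L, attempts, rng):
    """Adsorb unit rods at uniform positions on [0, L], rejecting any
    attempt that overlaps a previously adsorbed rod (irreversible RSA)."""
    placed = []  # left ends of adsorbed unit rods
    for _ in range(attempts):
        x = rng.uniform(0.0, L - 1.0)
        if all(abs(x - y) >= 1.0 for y in placed):
            placed.append(x)
    return placed

rng = random.Random(9)
L = 100.0
rods = rsa_1d(L, attempts=50_000, rng=rng)
print(len(rods) / L)  # coverage approaching the jamming value ~0.7476 from below
```

Adding desorption or diffusion moves after placement turns this into the generalized process described above, which relaxes toward the equilibrium hard-rod state instead of freezing at the jammed configuration.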
Cluster randomized trials for pharmacy practice research.
Gums, Tyler; Carter, Barry; Foster, Eric
2016-06-01
Introduction: Cluster randomized trials (CRTs) are now the gold standard in health services research, including pharmacy-based interventions. Studies of behaviour, epidemiology, lifestyle modifications, educational programs, and health care models are utilizing the strengths of cluster randomized analyses. Methodology: The key property of CRTs is the unit of randomization (clusters), which may be different from the unit of analysis (individual). Subject sample size and, ideally, the number of clusters are determined by the relationship of between-cluster and within-cluster variability. The correlation among participants recruited from the same cluster is known as the intraclass correlation coefficient (ICC). Generally, having more clusters with smaller ICC values will lead to smaller sample sizes. When selecting clusters, stratification before randomization may be useful in decreasing imbalances between study arms. Participant recruitment methods can differ from other types of randomized trials, as blinding a behavioural intervention cannot always be done. When to use: CRTs can yield results that are relevant for making "real world" decisions. CRTs are often used in non-therapeutic intervention studies (e.g. change in practice guidelines). The advantages of the CRT design in pharmacy research have been avoiding contamination and the generalizability of the results. A large CRT that studied physician-pharmacist collaborative management of hypertension is used in this manuscript as a CRT example. The trial, entitled Collaboration Among Pharmacists and physicians To Improve Outcomes Now (CAPTION), was implemented in primary care offices in the United States for hypertensive patients. Limitations: CRT design limitations include the need for a large number of clusters, high costs, increased training, increased monitoring, and statistical complexity.
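The ICC's effect on sample size is captured by the standard design effect for clusters of average size m, DE = 1 + (m − 1)·ICC; the numbers below are invented for illustration:

```python
def design_effect(m, icc):
    """Inflation factor for cluster randomization: DE = 1 + (m - 1) * ICC."""
    return 1.0 + (m - 1) * icc

def crt_sample_size(n_individual, m, icc):
    """Inflate an individually-randomized sample size for clustering."""
    return n_individual * design_effect(m, icc)

# e.g. 400 subjects under individual randomization, clusters of 20,
# ICC = 0.05  ->  400 * (1 + 19 * 0.05), about 780 subjects
print(crt_sample_size(400, 20, 0.05))
```

The formula makes the trade-off in the abstract concrete: for a fixed total sample, many small clusters (small m) inflate the required size far less than a few large ones, especially when the ICC is non-negligible.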
Associative Hierarchical Random Fields.
Ladický, L'ubor; Russell, Chris; Kohli, Pushmeet; Torr, Philip H S
2014-06-01
This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), and a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as super-pixels. It is well known that the generation of super-pixel segmentations is not unique. This has motivated many researchers to use multiple super-pixel segmentations for problems such as semantic segmentation or single view reconstruction. These super-pixels have not yet been combined in a principled manner; this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph-cut-based move-making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or, at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results.
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-03-13
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.
Differential Cost Avoidance and Successful Criminal Careers: Random or Rational?
ERIC Educational Resources Information Center
Kazemian, Lila; Le Blanc, Marc
2007-01-01
Using a sample of adjudicated French Canadian males from the Montreal Two Samples Longitudinal Study, this article investigates individual and social characteristics associated with differential cost avoidance. The main objective of this study is to determine whether such traits are randomly distributed across differential degrees of cost…
Can the Randomized Controlled Trial Literature Generalize to Nonrandomized Patients?
ERIC Educational Resources Information Center
Stirman, Shannon Wiltsey; DeRubeis, Robert J.; Crits-Christoph, Paul; Rothman, Allison
2005-01-01
To determine the extent to which published randomized controlled trials (RCTs) of psychotherapy can be generalized to a sample of outpatients, the authors matched information obtained from charts of patients who had been screened out of RCTs to inclusion and exclusion criteria from published RCT studies. Most of the patients in the sample who had…
Subrandom methods for multidimensional nonuniform sampling
NASA Astrophysics Data System (ADS)
Worley, Bradley
2016-08-01
Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics.
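As a minimal illustration of the seed-independent idea in the abstract above, the sketch below selects grid points with a golden-ratio additive recurrence, one of the simplest subrandom (low-discrepancy) sequences. The function name and the mapping to grid indices are illustrative assumptions, not the paper's exact schedules.

```python
# Sketch: seed-independent subrandom selection of sample points from an
# N-point grid using the golden-ratio additive recurrence x_k = frac(k * phi).
import math

def subrandom_schedule(grid_size, n_samples):
    """Select n_samples distinct indices from a grid of grid_size points."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0  # fractional part of the golden ratio
    chosen, seen, k = [], set(), 1
    while len(chosen) < n_samples:
        x = (k * phi) % 1.0        # low-discrepancy point in [0, 1)
        idx = int(x * grid_size)   # map to a grid index
        if idx not in seen:        # keep indices distinct
            seen.add(idx)
            chosen.append(idx)
        k += 1
    return sorted(chosen)

schedule = subrandom_schedule(grid_size=128, n_samples=32)
```

Because no pseudorandom generator is involved, the same call always yields the same schedule, which is exactly the seed-independence property discussed above.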
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
NASA Astrophysics Data System (ADS)
Miszczak, Jarosław Adam
2013-01-01
numbers generated by quantum real number generator. Reasons for new version: Added support for the high-speed on-line quantum random number generator and improved methods for retrieving lists of random numbers. Summary of revisions: The presented version provides two significant improvements. The first one is the ability to use the on-line Quantum Random Number Generation service developed by PicoQuant GmbH and the Nano-Optics groups at the Department of Physics of Humboldt University. The on-line service supported in version 2.0 of the TRQS package provides faster access to true randomness sources constructed using the laws of quantum physics. The service is freely available at https://qrng.physik.hu-berlin.de/. The use of this service allows using the presented package without the need for a physical quantum random number generator. The second improvement introduced in this version is the ability to retrieve arrays of random data directly from the used source. This increases the speed of the random number generation, especially in the case of an on-line service, where it reduces the time necessary to establish the connection. Thanks to the speed improvement of the presented version, the package can now be used in simulations requiring larger amounts of random data. Moreover, the functions for generating random numbers provided by the current version of the package more closely follow the pattern of the functions for generating pseudorandom numbers provided in Mathematica. Additional comments: Speed comparison: The implementation of the support for the QRNG on-line service provides a noticeable improvement in the speed of random number generation. For samples of real numbers of size 10^1, 10^2, …, 10^7, the times required to generate these samples using the Quantis USB device and the QRNG service are compared in Fig. 1. The presented results show that the use of the on-line service provides faster access to random numbers.
One should note, however, that the speed gain can increase or
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
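Three of the probability designs named above can be sketched in a few lines; the population and strata here are invented purely for illustration.

```python
# Sketch of three probability sampling designs: simple random, systematic,
# and stratified sampling, over a made-up population of 100 units.
import random

def simple_random_sample(population, n, rng):
    return rng.sample(population, n)

def systematic_sample(population, n):
    # (a random starting offset is common in practice; fixed at 0 for brevity)
    step = len(population) // n
    return [population[i * step] for i in range(n)]

def stratified_sample(strata, n_per_stratum, rng):
    # strata: dict mapping stratum name -> list of elements
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

rng = random.Random(42)
pop = list(range(100))
srs = simple_random_sample(pop, 10, rng)
sys_s = systematic_sample(pop, 10)
strat = stratified_sample({"ward_a": pop[:50], "ward_b": pop[50:]}, 5, rng)
```

Each design guarantees a known, nonzero inclusion probability for every unit, which is what separates probability from non-probability sampling in the column above.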
Csesztregi, T; Bovens, M; Dujourdy, L; Franc, A; Nagy, J
2014-08-01
The findings in this paper are based on the results of our drug homogeneity studies and particle size investigations. Using that information, a general sampling plan (depicted in the form of a flow-chart) was devised that could be applied to the quantitative instrumental analysis of the most common illicit drugs: namely heroin, cocaine, amphetamine, cannabis resin, MDMA tablets and herbal cannabis in 'bud' form (type I). Other more heterogeneous forms of cannabis (type II) were found to require alternative, more traditional sampling methods. A table was constructed which shows the sampling uncertainty expected when a particular number of random increments are taken and combined to form a single primary sample. It also includes a recommended increment size; which is 1 g for powdered drugs and cannabis resin, 1 tablet for MDMA and 1 bud for herbal cannabis in bud form (type I). By referring to that table, individual laboratories can ensure that the sampling uncertainty for a particular drug seizure can be minimised, such that it lies in the same region as their analytical uncertainty for that drug. The table shows that assuming a laboratory wishes to quantitatively analyse a seizure of powdered drug or cannabis resin with a 'typical' heterogeneity, a primary sample of 15×1 g increments is generally appropriate. The appropriate primary sample for MDMA tablets is 20 tablets, while for herbal cannabis (in bud form) 50 buds were found to be appropriate. Our study also showed that, for a suitably homogenised primary sample of the most common powdered drugs, an analytical sample size of between 20 and 35 mg was appropriate and for herbal cannabis the appropriate amount was 200 mg. The need to ensure that the results from duplicate or multiple incremental sampling were compared, to demonstrate whether or not a particular seized material has a 'typical' heterogeneity and that the sampling procedure applied has resulted in a 'correct sample', was highlighted and the setting
Posterior sampling with improved efficiency
Hanson, K.M.; Cunningham, G.S.
1998-12-01
The Markov Chain Monte Carlo (MCMC) technique provides a means to generate a random sequence of model realizations that sample the posterior probability distribution of a Bayesian analysis. That sequence may be used to make inferences about the model uncertainties that derive from measurement uncertainties. This paper presents an approach to improving the efficiency of the Metropolis approach to MCMC by incorporating an approximation to the covariance matrix of the posterior distribution. The covariance matrix is approximated using the update formula from the BFGS quasi-Newton optimization algorithm. Examples are given for uncorrelated and correlated multidimensional Gaussian posterior distributions.
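A minimal sketch of the underlying Metropolis step, with the proposal shaped by an approximate posterior covariance. Here the covariance is supplied directly (as its Cholesky factor) rather than built up from BFGS updates as in the paper, so treat this as an illustration of the idea only.

```python
# Sketch: random-walk Metropolis targeting a correlated 2-D Gaussian posterior,
# with the proposal distribution shaped by an approximate posterior covariance.
import math
import random

def metropolis(log_post, x0, prop_chol, n_steps, rng):
    """prop_chol: lower-triangular Cholesky factor of the proposal covariance."""
    x, lp, chain = list(x0), log_post(x0), []
    for _ in range(n_steps):
        z = [rng.gauss(0, 1) for _ in x]
        # proposal = x + L z, so the proposal covariance is L L^T
        prop = [x[i] + sum(prop_chol[i][j] * z[j] for j in range(i + 1))
                for i in range(len(x))]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis accept test
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Target: zero-mean Gaussian, covariance [[1, .8], [.8, 1]] (up to a constant).
def log_post(x):
    a, b = x
    det = 1.0 - 0.8 ** 2
    return -0.5 * (a * a - 2 * 0.8 * a * b + b * b) / det

rng = random.Random(1)
chol = [[1.0, 0.0], [0.8, 0.6]]  # Cholesky factor of the target covariance
chain = metropolis(log_post, [0.0, 0.0], chol, 2000, rng)
```

Matching the proposal covariance to the posterior covariance, as the paper's BFGS approximation aims to do, is what lets correlated posteriors be explored without the tiny steps an isotropic proposal would force.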
Random-phase metasurfaces at optical wavelengths
Pors, Anders; Ding, Fei; Chen, Yiting; Radko, Ilya P.; Bozhevolnyi, Sergey I.
2016-01-01
Random-phase metasurfaces, in which the constituents scatter light with random phases, have the property that an incident plane wave will diffusely scatter, hereby leading to a complex far-field response that is most suitably described by statistical means. In this work, we present and exemplify the statistical description of the far-field response, particularly highlighting how the response for polarised and unpolarised light might be alike or different depending on the correlation of scattering phases for two orthogonal polarisations. By utilizing gap plasmon-based metasurfaces, consisting of an optically thick gold film overlaid by a subwavelength thin glass spacer and an array of gold nanobricks, we design and realize random-phase metasurfaces at a wavelength of 800 nm. Optical characterisation of the fabricated samples convincingly demonstrates the diffuse scattering of reflected light, with statistics obeying the theoretical predictions. We foresee the use of random-phase metasurfaces for camouflage applications and as high-quality reference structures in dark-field microscopy, while the control of the statistics for polarised and unpolarised light might find usage in security applications. Finally, by incorporating a certain correlation between scattering by neighbouring metasurface constituents new types of functionalities can be realised, such as a Lambertian reflector. PMID:27328635
Coring Sample Acquisition Tool
NASA Technical Reports Server (NTRS)
Haddad, Nicolas E.; Murray, Saben D.; Walkemeyer, Phillip E.; Badescu, Mircea; Sherrit, Stewart; Bao, Xiaoqi; Kriechbaum, Kristopher L.; Richardson, Megan; Klein, Kerry J.
2012-01-01
A sample acquisition tool (SAT) has been developed that can be used autonomously to drill and capture rock core samples. The tool is designed to accommodate core transfer using a sample tube to the IMSAH (integrated Mars sample acquisition and handling) SHEC (sample handling, encapsulation, and containerization) without ever touching the pristine core sample in the transfer process.
Nonvolatile random access memory
NASA Technical Reports Server (NTRS)
Wu, Jiin-Chuan (Inventor); Stadler, Henry L. (Inventor); Katti, Romney R. (Inventor)
1994-01-01
A nonvolatile magnetic random access memory can be achieved by an array of magnet-Hall effect (M-H) elements. The storage function is realized with a rectangular thin-film ferromagnetic material having an in-plane, uniaxial anisotropy and in-plane bipolar remanent magnetization states. The thin-film magnetic element is magnetized by a local applied field, whose direction is used to form either a 0 or 1 state. The element remains in the 0 or 1 state until a switching field is applied to change its state. The stored information is detected by a Hall-effect sensor which senses the fringing field from the magnetic storage element. The circuit design for addressing each cell includes transistor switches for providing a current of selected polarity to store a binary digit through a separate conductor overlying the magnetic element of the cell. To read out a stored binary digit, transistor switches are employed to provide a current through a row of Hall-effect sensors connected in series and enabling a differential voltage amplifier connected to all Hall-effect sensors of a column in series. To avoid read-out voltage errors due to shunt currents through resistive loads of the Hall-effect sensors of other cells in the same column, at least one transistor switch is provided between every pair of adjacent cells in every row; these switches are not turned on except in the row of the selected cell.
Average subentropy, coherence and entanglement of random mixed quantum states
NASA Astrophysics Data System (ADS)
Zhang, Lin; Singh, Uttam; Pati, Arun K.
2017-02-01
Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of the random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful compared to pure quantum states in higher dimension when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computation of these entanglement measures for this specific class of mixed states.
Fractional random walk lattice dynamics
NASA Astrophysics Data System (ADS)
Michelitsch, T. M.; Collet, B. A.; Riascos, A. P.; Nowakowski, A. F.; Nicolleau, F. C. G. A.
2017-02-01
We analyze time-discrete and time-continuous 'fractional' random walks on undirected regular networks with special focus on cubic periodic lattices in n = 1, 2, 3, … dimensions. The fractional random walk dynamics is governed by a master equation involving fractional powers of Laplacian matrices L^{α/2}, where α = 2 recovers the normal walk. First we demonstrate that the interval 0 < α ≤ 2 is admissible for the fractional random walk. We derive analytical expressions for the transition matrix of the fractional random walk and the closely related average return probabilities. We further obtain the fundamental matrix Z(α), and the mean relaxation time (Kemeny constant), for the fractional random walk. The representation for the fundamental matrix Z(α) relates fractional random walks with normal random walks. We show that the matrix elements of the transition matrix of the fractional random walk exhibit, for large cubic n-dimensional lattices, a power-law decay of an n-dimensional infinite-space Riesz fractional derivative type, indicating the emergence of Lévy flights. As a further footprint of Lévy flights in the n-dimensional space, the transition matrix and return probabilities of the fractional random walk are dominated for large times t by slowly relaxing long-wave modes, leading to a characteristic t^{-n/α} decay. It can be concluded that, due to the long-range moves of the fractional random walk, a small-world property emerges, increasing the efficiency of exploring the lattice when a fractional rather than a normal random walk is chosen.
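A small numerical sketch of the fractional walk construction, assuming the one-step transition probabilities are obtained from the off-diagonal entries of L^{α/2}; the ring size and the value of α are arbitrary choices for illustration.

```python
# Sketch: fractional random walk on a ring of N nodes. The fractional Laplacian
# L^(alpha/2) is built by eigendecomposition of the ordinary ring Laplacian;
# its off-diagonal entries yield long-range ("Levy-flight") jump probabilities.
import numpy as np

N, alpha = 16, 1.0  # 0 < alpha <= 2; alpha = 2 recovers the normal walk

# Ordinary Laplacian of the cycle graph C_N.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Fractional power via the (symmetric) eigendecomposition.
w, V = np.linalg.eigh(L)
L_frac = V @ np.diag(np.clip(w, 0.0, None) ** (alpha / 2)) @ V.T

# One-step transition matrix: p_ij = -(L^{a/2})_ij / (L^{a/2})_ii for i != j.
d = np.diag(L_frac).copy()
P = -L_frac / d[:, None]
np.fill_diagonal(P, 0.0)
```

For α < 2 the off-diagonal entries of P are nonzero at all distances, so the walker makes the long-range jumps responsible for the small-world behaviour described in the abstract.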
Does Random Dispersion Help Survival?
NASA Astrophysics Data System (ADS)
Schinazi, Rinaldo B.
2015-04-01
Many species live in colonies that prosper for a while and then collapse. After the collapse the colony survivors disperse randomly and found new colonies that may or may not make it depending on the new environment they find. We use birth and death chains in random environments to model such a population and to argue that random dispersion is a superior strategy for survival.
Bürger, Kai; Krüger, Jens; Westermann, Rüdiger
2011-01-01
In this paper, we present a sample-based approach for surface coloring, which is independent of the original surface resolution and representation. To achieve this, we introduce the Orthogonal Fragment Buffer (OFB)—an extension of the Layered Depth Cube—as a high-resolution view-independent surface representation. The OFB is a data structure that stores surface samples at a nearly uniform distribution over the surface, and it is specifically designed to support efficient random read/write access to these samples. The data access operations have a complexity that is logarithmic in the depth complexity of the surface. Thus, compared to data access operations in tree data structures like octrees, data-dependent memory access patterns are greatly reduced. Due to the particular sampling strategy that is employed to generate an OFB, it also maintains sample coherence, and thus, exhibits very good spatial access locality. Therefore, OFB-based surface coloring performs significantly faster than sample-based approaches using tree structures. In addition, since in an OFB, the surface samples are internally stored in uniform 2D grids, OFB-based surface coloring can efficiently be realized on the GPU to enable interactive coloring of high-resolution surfaces. On the OFB, we introduce novel algorithms for color painting using volumetric and surface-aligned brushes, and we present new approaches for particle-based color advection along surfaces in real time. Due to the intermediate surface representation we choose, our method can be used to color polygonal surfaces as well as any other type of surface that can be sampled. PMID:20616392
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
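RANVAR itself is not reproduced here; purely as an illustration of how such generators typically work, the sketch below applies the inverse-transform method to two of the seven listed distributions (exponential and triangular). Function names are invented.

```python
# Sketch of the inverse-transform idea behind variate generators like RANVAR:
# feed a uniform variate u in [0, 1) through the inverse CDF of the target.
import math
import random

def exponential_variate(rate, u):
    """Inverse CDF of Exp(rate): F^{-1}(u) = -ln(1 - u) / rate."""
    return -math.log(1.0 - u) / rate

def triangular_variate(low, mode, high, u):
    """Inverse CDF of the triangular distribution on [low, high], peak at mode."""
    cut = (mode - low) / (high - low)
    if u < cut:
        return low + math.sqrt(u * (high - low) * (mode - low))
    return high - math.sqrt((1.0 - u) * (high - low) * (high - mode))

rng = random.Random(7)
draws = [exponential_variate(2.0, rng.random()) for _ in range(1000)]
```

The same one-uniform-in, one-variate-out pattern covers the uniform and Pascal cases too; normal, binomial, and Poisson variates are usually produced by other standard transforms.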
A Mars Sample Return Sample Handling System
NASA Technical Reports Server (NTRS)
Wilson, David; Stroker, Carol
2013-01-01
We present a sample handling system, a subsystem of the proposed Dragon landed Mars Sample Return (MSR) mission [1], that can return to Earth orbit a significant mass of frozen Mars samples potentially consisting of: rock cores, subsurface drilled rock and ice cuttings, pebble sized rocks, and soil scoops. The sample collection, storage, retrieval and packaging assumptions and concepts in this study are applicable for the NASA's MPPG MSR mission architecture options [2]. Our study assumes a predecessor rover mission collects samples for return to Earth to address questions on: past life, climate change, water history, age dating, understanding Mars interior evolution [3], and human safety and in-situ resource utilization. Hence the rover will have "integrated priorities for rock sampling" [3] that cover collection of subaqueous or hydrothermal sediments, low-temperature fluid-altered rocks, unaltered igneous rocks, regolith and atmosphere samples. Samples could include: drilled rock cores, alluvial and fluvial deposits, subsurface ice and soils, clays, sulfates, salts including perchlorates, aeolian deposits, and concretions. Thus samples will have a broad range of bulk densities, and require for Earth based analysis where practical: in-situ characterization, management of degradation such as perchlorate deliquescence and volatile release, and contamination management. We propose to adopt a sample container with a set of cups each with a sample from a specific location. We considered two sample cup sizes: (1) a small cup sized for samples matching those submitted to in-situ characterization instruments, and (2) a larger cup for 100 mm rock cores [4] and pebble sized rocks, thus providing diverse samples and optimizing the MSR sample mass payload fraction for a given payload volume. We minimize sample degradation by keeping them frozen in the MSR payload sample canister using Peltier chip cooling. The cups are sealed by interference fitted heat activated memory
Replica trick for rare samples
NASA Astrophysics Data System (ADS)
Rizzo, Tommaso
2014-05-01
In the context of disordered systems with quenched Hamiltonians I address the problem of characterizing rare samples where the thermal average of a specific observable has a value different from the typical one. These rare samples can be selected through a variation of the replica trick which amounts to replicating the system and dividing the replicas into two groups containing, respectively, M and -M replicas. Replicas in the first (second) group experience a positive (negative) small field O(1/M) conjugate to the observable considered, and the M →∞ limit is to be taken in the end. Applications to the random-field Ising model and to the Sherrington-Kirkpatrick model are discussed.
Phase transitions on random lattices: how random is topological disorder?
Barghathi, Hatem; Vojta, Thomas
2014-09-19
We study the effects of topological (connectivity) disorder on phase transitions. We identify a broad class of random lattices whose disorder fluctuations decay much faster with increasing length scale than those of generic random systems, yielding a wandering exponent of ω=(d-1)/(2d) in d dimensions. The stability of clean critical points is thus governed by the criterion (d+1)ν>2 rather than the usual Harris criterion dν>2, making topological disorder less relevant than generic randomness. The Imry-Ma criterion is also modified, allowing first-order transitions to survive in all dimensions d>1. These results explain a host of puzzling violations of the original criteria for equilibrium and nonequilibrium phase transitions on random lattices. We discuss applications, and we illustrate our theory by computer simulations of random Voronoi and other lattices.
Simulation of pedigree genotypes by random walks.
Lange, K; Matthysse, S
1989-01-01
A random walk method, based on the Metropolis algorithm, is developed for simulating the distribution of trait and linkage marker genotypes in pedigrees where trait phenotypes are already known. The method complements techniques suggested by Ploughman and Boehnke and by Ott that are based on sequential sampling of genotypes within a pedigree. These methods are useful for estimating the power of linkage analysis before complete study of a pedigree is undertaken. We apply the random walk technique to a partially penetrant disease, schizophrenia, and to a recessive disease, ataxia-telangiectasia. In the first case we show that accessory phenotypes with higher penetrance than that of schizophrenia itself may be crucial for effective linkage analysis, and in the second case we show that impressionistic selection of informative pedigrees may be misleading. PMID:2589323
Random distributed feedback fibre lasers
NASA Astrophysics Data System (ADS)
Turitsyn, Sergei K.; Babin, Sergey A.; Churkin, Dmitry V.; Vatnik, Ilya D.; Nikulin, Maxim; Podivilov, Evgenii V.
2014-09-01
The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" of the interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by the Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to the Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation
5.4 Gbps real time quantum random number generator with simple implementation
NASA Astrophysics Data System (ADS)
Yang, Jie; Liu, Jinlu; Su, Qi; Li, Zhengyu; Fan, Fan; Xu, Bingjie; Guo, Hong
2016-11-01
We present a random number generation scheme based on measuring the phase fluctuations of a laser with a simple and compact experimental setup. A simple model is established to analyze the randomness, and the simulation result based on this model fits well with the experimental data. After the analog-to-digital sampling and suitable randomness extraction integrated in the field programmable gate array, the final random bits are delivered to a PC, realizing a 5.4 Gbps real time quantum random number generation. The final random bit sequences have passed all the NIST and DIEHARD tests.
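The abstract does not specify which extractor runs in the FPGA; purely as an illustration of randomness extraction from raw sampled bits, the sketch below shows the classic von Neumann debiasing step.

```python
# Sketch: von Neumann extraction. Raw bits are read in non-overlapping pairs;
# unequal pairs emit their first bit, equal pairs are discarded. The output is
# unbiased whenever the raw bits are independent with a fixed (unknown) bias.
def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:          # keep the first bit of each unequal pair
            out.append(a)
    return out

raw = [1, 1, 0, 1, 1, 0, 0, 0, 0, 1]
extracted = von_neumann_extract(raw)  # pairs (1,1),(0,1),(1,0),(0,0),(0,1) -> [0, 1, 0]
```

Production QRNGs use stronger extractors (e.g. Toeplitz hashing) for throughput reasons, but the principle of trading raw bits for debiased ones is the same.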
Network Sampling with Memory: A proposal for more efficient sampling from social networks
Mouw, Ted; Verdery, Ashton M.
2013-01-01
Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
Suicidality in a Sample of Arctic Households
ERIC Educational Resources Information Center
Haggarty, John M.; Cernovsky, Zack; Bedard, Michel; Merskey, Harold
2008-01-01
We investigated the association of suicidal ideation and behavior with depression, anxiety, and alcohol abuse in a Canadian Arctic Inuit community. Inuit (N = 111) from a random sample of households completed assessments of anxiety and depression, alcohol abuse, and suicidality. High rates of suicidal ideation within the past week (43.6%), and…
Ranked set sampling with unequal samples.
Bhoj, D S
2001-09-01
A ranked set sampling procedure with unequal samples (RSSU) is proposed and used to estimate the population mean. This estimator is then compared with the estimators based on the ranked set sampling (RSS) and median ranked set sampling (MRSS) procedures. It is shown that the relative precisions of the estimator based on RSSU are higher than those of the estimators based on RSS and MRSS. An example of estimating the mean diameter at breast height of longleaf-pine trees on the Wade Tract in Thomas County, Georgia, is presented.
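The precision gain of ranked set sampling over SRS can be checked with a small simulation. The sketch below (a generic RSS mean estimator under perfect ranking on a standard normal population, not the RSSU procedure or the longleaf-pine data from the abstract) compares the variance of the RSS and SRS sample means:

```python
import random

random.seed(0)

def srs_mean(m):
    # Simple random sample of m measured units.
    return sum(random.gauss(0, 1) for _ in range(m)) / m

def rss_mean(m):
    # One RSS cycle: for each rank r, draw a fresh set of m units,
    # rank them (here by their actual values, i.e. perfect ranking),
    # and measure only the r-th smallest. The estimator is the mean
    # of the m measured units.
    kept = []
    for r in range(m):
        s = sorted(random.gauss(0, 1) for _ in range(m))
        kept.append(s[r])
    return sum(kept) / m

def var(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

m, reps = 5, 20000
v_srs = var([srs_mean(m) for _ in range(reps)])
v_rss = var([rss_mean(m) for _ in range(reps)])
print(f"relative precision var(SRS)/var(RSS): {v_srs / v_rss:.2f}")
```

Both estimators measure the same number of units (m), but RSS spends extra ranking effort; under perfect ranking its variance is markedly smaller, which is the baseline that RSSU and MRSS variants then try to improve further.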
NASA Astrophysics Data System (ADS)
Jung, P.; Talkner, P.
2010-09-01
A simple way to convert a purely random sequence of events into a signal with a strong periodic component is proposed. The signal consists of those instants of time at which the length of the random sequence exceeds an integer multiple of a given number. The larger this number the more pronounced the periodic behavior becomes.
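The construction described above can be sketched in a few lines. Following the abstract, an output event is emitted each time the running length of the random sequence crosses a new integer multiple of a given number T; the exponential inter-arrival times below are an illustrative assumption for the "purely random" input:

```python
import random

random.seed(2)

T = 10.0  # threshold spacing; larger T -> more pronounced periodicity
total, out = 0.0, []
# Purely random input sequence: exponential inter-arrival times, rate 1.
for _ in range(100000):
    total += random.expovariate(1.0)
    # Emit an output event whenever the running total crosses a new
    # integer multiple of T.
    while total >= (len(out) + 1) * T:
        out.append(total)

gaps = [b - a for a, b in zip(out, out[1:])]
mean_gap = sum(gaps) / len(gaps)
cv = (sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)) ** 0.5 / mean_gap
print(f"mean output interval: {mean_gap:.2f} (target {T})")
print(f"coefficient of variation: {cv:.2f} (1.0 for a Poisson stream)")
```

The output intervals average T with a coefficient of variation far below 1, i.e. the output stream has a strong periodic component even though the input is memoryless.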
The random continued fraction transformation
NASA Astrophysics Data System (ADS)
Kalle, Charlene; Kempton, Tom; Verbitskiy, Evgeny
2017-03-01
We introduce a random dynamical system related to continued fraction expansions. It uses random combinations of the Gauss map and the Rényi (or backwards) continued fraction map. We explore the continued fraction expansions that this system produces, as well as the dynamical properties of the system.
A brief note regarding randomization.
Senn, Stephen
2013-01-01
This note argues, contrary to claims in this journal, that the possible existence of indefinitely many causal factors does not invalidate randomization. The effect of such factors has to be bounded by outcome, and since inference is based on a ratio of between-treatment-group to within-treatment-group variation, randomization remains valid.
Quantum to classical randomness extractors
NASA Astrophysics Data System (ADS)
Wehner, Stephanie; Berta, Mario; Fawzi, Omar
2013-03-01
The goal of randomness extraction is to distill (almost) perfect randomness from a weak source of randomness. When the source yields a classical string X, many extractor constructions are known. Yet, when considering a physical randomness source, X is itself ultimately the result of a measurement on an underlying quantum system. When characterizing the power of a source to supply randomness, it is hence natural to ask how much classical randomness we can extract from a quantum system. To tackle this question we here introduce the notion of quantum-to-classical randomness extractors (QC-extractors). We identify an entropic quantity that determines exactly how much randomness can be obtained. Furthermore, we provide constructions of QC-extractors based on measurements in a full set of mutually unbiased bases (MUBs), and certain single qubit measurements. As the first application, we show that any QC-extractor gives rise to entropic uncertainty relations with respect to quantum side information. Such relations were previously only known for two measurements. As the second application, we resolve the central open question in the noisy-storage model [Wehner et al., PRL 100, 220502 (2008)] by linking security to the quantum capacity of the adversary's storage device.
Aging transition by random errors
Sun, Zhongkui; Ma, Ning; Xu, Wei
2017-01-01
In this paper, the effects of random errors on oscillating behavior are studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, changing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative means of controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active and inactive oscillators. PMID:28198430
Aging transition by random errors
NASA Astrophysics Data System (ADS)
Sun, Zhongkui; Ma, Ning; Xu, Wei
2017-02-01
In this paper, the effects of random errors on oscillating behavior are studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise are employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It is demonstrated that when the random errors are uniform random noise, changing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided that the probability that an aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is below the threshold. These findings provide an alternative means of controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active and inactive oscillators.
Sampling for Telephone Surveys: Do the Results Depend on Technique?
ERIC Educational Resources Information Center
Franz, Jennifer D.
Two basic methods exist for drawing probability samples to be used in telephone surveys: directory sampling (from alphabetical or street directories) and random digit dialing (RDD). RDD includes unlisted numbers, whereas directory sampling includes only listed numbers. The goal of this paper is to estimate the effect of failure to include…
Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem
ERIC Educational Resources Information Center
Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari
2007-01-01
Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…
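The simulation idea described above is straightforward to reproduce. The sketch below (a generic illustration, not the article's tool; the exponential parent population is an illustrative choice of a skewed distribution) draws repeated samples, treats each sample mean as one realization of a random variable, and shows the standard error shrinking as the sample size grows:

```python
import random

random.seed(3)

def sample_mean(pop_draw, n):
    return sum(pop_draw() for _ in range(n)) / n

# Heavily skewed parent population: exponential with mean 1.
draw = lambda: random.expovariate(1.0)

se = {}
for n in (1, 5, 30):
    means = [sample_mean(draw, n) for _ in range(5000)]
    mu = sum(means) / len(means)
    se[n] = (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5
    print(f"n={n:2d}: mean of sample means {mu:.2f}, std error {se[n]:.3f}")
# The standard error shrinks roughly like 1/sqrt(n), and a histogram of
# the sample means grows increasingly symmetric (Normal) as n increases,
# despite the skewed parent population.
```

Plotting histograms of `means` for each n makes the central limit theorem visually concrete for students: the n=1 histogram mirrors the skewed parent, while the n=30 histogram is nearly bell-shaped.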
How Sample Size Affects a Sampling Distribution
ERIC Educational Resources Information Center
Mulekar, Madhuri S.; Siegel, Murray H.
2009-01-01
If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
On the stability of robotic systems with random communication rates
NASA Technical Reports Server (NTRS)
Kobayashi, H.; Yun, X.; Paul, R. P.
1989-01-01
Control problems of sampled-data systems subject to random sample-rate variations and delays are studied. Due to the rapid growth in the use of computers, more and more systems are controlled digitally. Complex systems such as space telerobotic systems require the integration of a number of subsystems at different hierarchical levels. While many subsystems may run on a single processor, some subsystems require their own processor or processors. The subsystems are integrated into functioning systems through communications. Communications between processes sharing a single processor are subject to random delays due to memory management and interrupt latency. Communications between processors involve random delays due to network access and to data collisions. Furthermore, all control processes involve delays due to causal factors in measuring devices and to signal processing. Traditionally, sampling rates are chosen to meet the worst-case communication delay. Such a strategy is wasteful: the processors are idle a great proportion of the time; sample rates are not as high as possible, resulting in poor performance or in the over-specification of control processors; and there remains the possibility of missing data no matter how low the sample rate is set. Asymptotic stability with probability one for randomly sampled multi-dimensional linear systems is studied, and a sufficient condition for stability is obtained. This condition is simple enough to be applied to practical systems. A design procedure is also shown.
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Non-Hermitian random matrix models: Free random variable approach
Janik, R.A.; Nowak, M.A.; Papp, G.; Wambach, J.; Zahed, I.
1997-04-01
Using the standard concepts of free random variables, we show that for a large class of non-Hermitian random matrix models, the support of the eigenvalue distribution follows from their Hermitian analogs using a conformal transformation. We also extend the concepts of free random variables to the class of non-Hermitian matrices, and apply them to the models discussed by Ginibre-Girko (elliptic ensemble) [J. Ginibre, J. Math. Phys. 6, 1440 (1965); V. L. Girko, Spectral Theory of Random Matrices (in Russian) (Nauka, Moscow, 1988)] and Mahaux-Weidenmüller (chaotic resonance scattering) [C. Mahaux and H. A. Weidenmüller, Shell-Model Approach to Nuclear Reactions (North-Holland, Amsterdam, 1969)]. © 1997 The American Physical Society
23 CFR Appendix A to Part 1340 - Sample Design
Code of Federal Regulations, 2010 CFR
2010-04-01
... sites can be done. Sample sites should be grouped into geographic clusters, with each cluster containing major and local roads. Assignment of sites and times within clusters should be random. F. Two...
Nahorniak, Matthew; Larsen, David P; Volk, Carol; Jordan, Chris E
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model-based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method-specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and its consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools--linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
Nahorniak, Matthew
2015-01-01
In ecology, as in other research fields, efficient sampling for population estimation often drives sample designs toward unequal probability sampling, such as in stratified sampling. Design-based statistical analysis tools are appropriate for seamless integration of sample design into the statistical analysis. However, it is also common and necessary, after a sampling design has been implemented, to use datasets to address questions that, in many cases, were not considered during the sampling design phase. Questions may arise requiring the use of model-based statistical tools such as multiple regression, quantile regression, or regression tree analysis. However, such model-based tools may require, for ensuring unbiased estimation, data from simple random samples, which can be problematic when analyzing data from unequal probability designs. Despite numerous method-specific tools available to properly account for sampling design, too often in the analysis of ecological data, sample design is ignored and its consequences are not properly considered. We demonstrate here that violation of this assumption can lead to biased parameter estimates in ecological research. In addition to the set of tools available for researchers to properly account for sampling design in model-based analysis, we introduce inverse probability bootstrapping (IPB). Inverse probability bootstrapping is an easily implemented method for obtaining equal probability re-samples from a probability sample, from which unbiased model-based estimates can be made. We demonstrate the potential for bias in model-based analyses that ignore sample inclusion probabilities, and the effectiveness of IPB sampling in eliminating this bias, using both simulated and actual ecological data. For illustration, we considered three model-based analysis tools—linear regression, quantile regression, and boosted regression tree analysis. In all models, using both simulated and actual ecological data, we found inferences to be
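The core IPB idea above — resample the observed sample with weights proportional to the inverse of each unit's inclusion probability — can be sketched in a few lines. Everything below is a hypothetical toy setup (the population, the `1 + x` selection weights, and the sample size are illustrative assumptions, not the paper's data or code):

```python
import random

random.seed(4)

# Hypothetical population and unequal-probability sample: units with a
# larger value of x are more likely to be selected (weight 1 + x).
population = [random.uniform(0, 10) for _ in range(5000)]
sample = random.choices(population, weights=[1 + x for x in population], k=800)

def ipb_resample(sample, k):
    # Inverse probability bootstrap: resample the sample with weights
    # proportional to 1 / pi_i, approximately undoing the unequal
    # selection probabilities and yielding an equal-probability resample.
    w = [1.0 / (1 + x) for x in sample]
    return random.choices(sample, weights=w, k=k)

true_mean = sum(population) / len(population)
naive_mean = sum(sample) / len(sample)
ipb_mean = sum(ipb_resample(sample, 800)) / 800
print(f"true {true_mean:.2f}  naive {naive_mean:.2f}  IPB {ipb_mean:.2f}")
# The naive sample mean is biased upward by the oversampling of large x;
# the IPB resample mean recovers the population mean.
```

The same reweighted resamples can then be fed to any model-based tool (regression, quantile regression, regression trees) that assumes equal-probability data.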
Texture synthesis and transfer from multiple samples
NASA Astrophysics Data System (ADS)
Qi, Yue; Zhao, Qinping
2003-09-01
Texture Mapping plays a very important role in Computer Graphics. Texture Synthesis is one of the main methods to obtain textures: it makes use of sample textures to generate new textures. Texture Transfer is based on Texture Synthesis; it renders objects with textures taken from different objects. Currently, most Texture Synthesis and Transfer methods use a single sample texture. A method for Texture Synthesis and Transfer from multiple samples is presented. For texture synthesis, the L-shaped neighborhood searching approach is used. Users specify the proportion of each sample and the number of seed points, and these seed points are scattered randomly according to their samples in the horizontal and vertical directions synchronously to synthesize textures. The synthesized textures are of good quality. For texture transfer, the luminance of the target image and the sample textures are analyzed. This procedure proceeds from coarse to fine, and can produce a visually pleasing result.
Sampling Motif-Constrained Ensembles of Networks
NASA Astrophysics Data System (ADS)
Fischer, Rico; Leitão, Jorge C.; Peixoto, Tiago P.; Altmann, Eduardo G.
2015-10-01
The statistical significance of network properties is conditioned on null models which satisfy specified properties but are otherwise random. Exponential random graph models are a principled theoretical framework for generating such constrained ensembles, but they often fail in practice, either due to model inconsistency or due to the impossibility of sampling networks from them. These problems affect the important case of networks with a prescribed clustering coefficient or number of small connected subgraphs (motifs). In this Letter we use the Wang-Landau method to obtain a multicanonical sampling that overcomes both these problems. We sample, in polynomial time, networks with arbitrary degree sequences from ensembles with imposed motif counts. Applying this method to social networks, we investigate the relation between transitivity and homophily, and we quantify the correlation between different types of motifs, finding that single motifs can explain up to 60% of the variation of motif profiles.
Random sequential adsorption on fractals.
Ciesla, Michal; Barbasz, Jakub
2012-07-28
Irreversible adsorption of spheres on flat collectors having dimension d < 2 is studied. Molecules are adsorbed on Sierpinski triangle and carpet-like fractals (1 < d < 2), and on a general Cantor set (d < 1). The adsorption process is modeled numerically using the random sequential adsorption (RSA) algorithm. The paper concentrates on measuring fundamental properties of the coverages, i.e., the maximal random coverage ratio and the density autocorrelation function, as well as RSA kinetics. The obtained results allow us to improve the phenomenological relation between the maximal random coverage ratio and the collector dimension. Moreover, the simulations show that, in general, most of the known dimensional properties of adsorbed monolayers remain valid for non-integer dimensions.
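The RSA algorithm itself is simple to state: propose uniformly random positions one at a time, accept a particle only if it does not overlap any previously placed particle, and never remove an accepted particle. The sketch below runs RSA for equal disks on a plain unit square (d = 2) rather than on a fractal collector; the disk radius and attempt count are illustrative choices:

```python
import math
import random

random.seed(5)

def rsa_disks(radius, attempts, size=1.0):
    """Random sequential adsorption of equal disks on a square collector:
    propose uniformly random centres and irreversibly accept only those
    that do not overlap an already-placed disk."""
    placed = []
    for _ in range(attempts):
        x, y = random.uniform(0, size), random.uniform(0, size)
        if all((x - a) ** 2 + (y - b) ** 2 >= (2 * radius) ** 2
               for a, b in placed):
            placed.append((x, y))
    return placed

r = 0.02
disks = rsa_disks(r, attempts=20000)
coverage = len(disks) * math.pi * r * r
print(f"{len(disks)} disks, coverage ratio {coverage:.3f}")
# The coverage approaches, but with a finite number of attempts stays
# below, the known 2-D jamming limit of roughly 0.547.
```

On a fractal collector the same acceptance-rejection loop applies; only the proposal region and the overlap test change, which is what makes the maximal coverage ratio a function of the collector dimension.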
Social network sampling using spanning trees
NASA Astrophysics Data System (ADS)
Jalali, Zeinab S.; Rezvanian, Alireza; Meybodi, Mohammad Reza
2016-12-01
Due to their large scale and limitations on access, it is hard or infeasible to study and analyze most online social networks directly in a reasonable amount of time. Hence, network sampling has emerged as a suitable technique for studying and analyzing real networks. The main goal of sampling online social networks is to construct a small-scale sampled network which preserves the most important properties of the original network. In this paper, we propose two sampling algorithms for sampling online social networks using spanning trees. The first proposed sampling algorithm finds several spanning trees from randomly chosen starting nodes; the edges in these spanning trees are then ranked according to the number of times each edge has appeared in the set of spanning trees found in the given network. The sampled network is then constructed as a sub-graph of the original network which contains a fraction of the nodes that are incident on highly ranked edges. In order to avoid traversing the entire network, the second sampling algorithm is proposed using partial spanning trees; it is otherwise similar to the first algorithm. Several experiments are conducted to examine the performance of the proposed sampling algorithms on well-known real networks. The obtained results, in comparison with other popular sampling methods, demonstrate the efficiency of the proposed sampling algorithms in terms of Kolmogorov-Smirnov distance (KSD), skew divergence distance (SDD) and normalized distance (ND).
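The first algorithm described above — grow several spanning trees from random starts, rank edges by how often they appear, keep nodes incident on the top-ranked edges — can be sketched as follows. This is a loose reading of the abstract, not the authors' code; the two-cluster toy graph and all parameter values are illustrative assumptions:

```python
import random
from collections import defaultdict

random.seed(6)

def random_spanning_tree(adj, start):
    # Randomized traversal spanning tree from `start`.
    seen, tree, frontier = {start}, [], [start]
    while frontier:
        v = frontier.pop(random.randrange(len(frontier)))
        for u in random.sample(sorted(adj[v]), len(adj[v])):
            if u not in seen:
                seen.add(u)
                tree.append(frozenset((v, u)))
                frontier.append(u)
    return tree

def spanning_tree_sample(adj, n_trees, frac):
    # Rank edges by appearance count across spanning trees grown from
    # random start nodes, then collect nodes incident on the top-ranked
    # edges until `frac` of the nodes are sampled.
    counts = defaultdict(int)
    nodes = list(adj)
    for _ in range(n_trees):
        for e in random_spanning_tree(adj, random.choice(nodes)):
            counts[e] += 1
    target = int(frac * len(nodes))
    sampled = set()
    for e in sorted(counts, key=counts.get, reverse=True):
        sampled.update(e)
        if len(sampled) >= target:
            break
    return sampled

# Toy graph: two dense clusters joined by a single bridge edge.
adj = defaultdict(set)
for group in (range(0, 20), range(20, 40)):
    g = list(group)
    for _ in range(60):
        a, b = random.sample(g, 2)
        adj[a].add(b); adj[b].add(a)
adj[0].add(20); adj[20].add(0)  # the bridge

sample = spanning_tree_sample(adj, n_trees=30, frac=0.3)
print(f"sampled {len(sample)} of {len(adj)} nodes")
# The bridge edge appears in every spanning tree, so its endpoints are
# ranked at the top and enter the sample.
```

The intuition is that structurally important edges (bridges, tree backbones) recur across spanning trees, so ranking by appearance count biases the sample toward the network's connective skeleton.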
Effect of noise correlations on randomized benchmarking
NASA Astrophysics Data System (ADS)
Ball, Harrison; Stace, Thomas M.; Flammia, Steven T.; Biercuk, Michael J.
2016-02-01
Among the most popular and well-studied quantum characterization, verification, and validation techniques is randomized benchmarking (RB), an important statistical tool used to characterize the performance of physical logic operations useful in quantum information processing. In this work we provide a detailed mathematical treatment of the effect of temporal noise correlations on the outcomes of RB protocols. We provide a fully analytic framework capturing the accumulation of error in RB expressed in terms of a three-dimensional random walk in "Pauli space." Using this framework we derive the probability density function describing RB outcomes (averaged over noise) for both Markovian and correlated errors, which we show is generally described by a Γ distribution with shape and scale parameters depending on the correlation structure. Long temporal correlations impart large nonvanishing variance and skew in the distribution towards high-fidelity outcomes—consistent with existing experimental data—highlighting potential finite-sampling pitfalls and the divergence of the mean RB outcome from worst-case errors in the presence of noise correlations. We use the filter-transfer function formalism to reveal the underlying reason for these differences in terms of effective coherent averaging of correlated errors in certain random sequences. We conclude by commenting on the impact of these calculations on the utility of single-metric approaches to quantum characterization, verification, and validation.
Fast phase randomization via two-folds.
Simpson, D J W; Jeffrey, M R
2016-02-01
A two-fold is a singular point on the discontinuity surface of a piecewise-smooth vector field, at which the vector field is tangent to the discontinuity surface on both sides. If an orbit passes through an invisible two-fold (also known as a Teixeira singularity) before settling to regular periodic motion, then the phase of that motion cannot be determined from initial conditions, and, in the presence of small noise, the asymptotic phase of a large number of sample solutions is highly random. In this paper, we show how the probability distribution of the asymptotic phase depends on the global nonlinear dynamics. We also show how the phase of a smooth oscillator can be randomized by applying a simple discontinuous control law that generates an invisible two-fold. We propose that such a control law can be used to desynchronize a collection of oscillators, and that this manner of phase randomization is fast compared with existing methods (which use fixed points as phase singularities), because there is no slowing of the dynamics near a two-fold.
Garcia, Anthony R.; Johnston, Roger G.; Martinez, Ronald K.
2000-01-01
A fluid-sampling tool for obtaining a fluid sample from a container. When used in combination with a rotatable drill, the tool bores a hole into a container wall, withdraws a fluid sample from the container, and seals the borehole. The tool collects the fluid sample without exposing the operator or the environment to the fluid or to wall shavings from the container.
Quantifying randomness in real networks
Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-01-01
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks—the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain—and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs. PMID:26482121
Quantifying randomness in real networks
NASA Astrophysics Data System (ADS)
Orsini, Chiara; Dankulov, Marija M.; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E.; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-10-01
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
Quantum-noise randomized ciphers
Nair, Ranjith; Yuen, Horace P.; Kumar, Prem; Corndorf, Eric; Eguchi, Takami
2006-11-15
We review the notion of a classical random cipher and its advantages. We sharpen the usual description of random ciphers to a particular mathematical characterization suggested by the salient feature responsible for their increased security. We describe a concrete system known as αη and show that it is equivalent to a random cipher in which the required randomization is effected by coherent-state quantum noise. We describe the currently known security features of αη and similar systems, including lower bounds on the unicity distances against ciphertext-only and known-plaintext attacks. We show how αη used in conjunction with any standard stream cipher such as the Advanced Encryption Standard provides an additional, qualitatively different layer of security from physical encryption against known-plaintext attacks on the key. We refute some claims in the literature that αη is equivalent to a nonrandom stream cipher.
Cluster randomization and political philosophy.
Chwang, Eric
2012-11-01
In this paper, I will argue that, while the ethical issues raised by cluster randomization can be challenging, they are not new. My thesis divides neatly into two parts. In the first, easier part I argue that many of the ethical challenges posed by cluster randomized human subjects research are clearly present in other types of human subjects research, and so are not novel. In the second, more difficult part I discuss the thorniest ethical challenge for cluster randomized research--cases where consent is genuinely impractical to obtain. I argue that once again these cases require no new analytic insight; instead, we should look to political philosophy for guidance. In other words, the most serious ethical problem that arises in cluster randomized research also arises in political philosophy.
Quantifying randomness in real networks.
Orsini, Chiara; Dankulov, Marija M; Colomer-de-Simón, Pol; Jamakovic, Almerima; Mahadevan, Priya; Vahdat, Amin; Bassler, Kevin E; Toroczkai, Zoltán; Boguñá, Marián; Caldarelli, Guido; Fortunato, Santo; Krioukov, Dmitri
2015-10-20
Represented as graphs, real networks are intricate combinations of order and disorder. Fixing some of the structural properties of network models to their values observed in real networks, many other properties appear as statistical consequences of these fixed observables, plus randomness in other respects. Here we employ the dk-series, a complete set of basic characteristics of the network structure, to study the statistical dependencies between different network properties. We consider six real networks--the Internet, US airport network, human protein interactions, technosocial web of trust, English word network, and an fMRI map of the human brain--and find that many important local and global structural properties of these networks are closely reproduced by dk-random graphs whose degree distributions, degree correlations and clustering are as in the corresponding real network. We discuss important conceptual, methodological, and practical implications of this evaluation of network randomness, and release software to generate dk-random graphs.
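The simplest member of the dk-series hierarchy discussed above is the 1k-random graph: random apart from a fixed degree sequence. A standard way to generate one (a generic double-edge-swap sketch, not the authors' released dk-series software) is to repeatedly rewire pairs of edges while rejecting self-loops and multi-edges:

```python
import random

random.seed(7)

def rewire_preserving_degrees(edges, n_swaps):
    # Double-edge swaps: (a,b),(c,d) -> (a,d),(c,b). Every node keeps
    # its degree, so the result is "1k-random": randomized in all
    # respects other than the degree sequence.
    edges = [tuple(e) for e in edges]
    edge_set = set(map(frozenset, edges))
    for _ in range(n_swaps):
        i, j = random.sample(range(len(edges)), 2)
        a, b = edges[i]; c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edges):
    d = {}
    for a, b in edges:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

# Toy simple graph: a 50-node ring plus random chords, deduplicated.
orig = [(i, (i + 1) % 50) for i in range(50)] + \
       [tuple(random.sample(range(50), 2)) for _ in range(30)]
orig = [tuple(sorted(e)) for e in {frozenset(e) for e in orig}]

rewired = rewire_preserving_degrees(orig, n_swaps=500)
print("degree sequence preserved:", degrees(orig) == degrees(rewired))
```

Higher members of the series (2k, clustering-constrained, etc.) add further constraints on top of this null model, which is exactly how the paper isolates which real-network properties follow "for free" from the constrained ones.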
Staggered chiral random matrix theory
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
Linear equations with random variables.
Tango, Toshiro
2005-10-30
A system of linear equations is presented where the unknowns are unobserved values of random variables. A maximum likelihood estimator assuming a multivariate normal distribution and a non-parametric proportional allotment estimator are proposed for the unobserved values of the random variables and for their means. Both estimators can be computed by simple iterative procedures and are shown to perform similarly. The methods are illustrated with data from a national nutrition survey in Japan.
Large Deviations for Random Trees
Heitsch, Christine
2010-01-01
We consider large random trees under Gibbs distributions and prove a Large Deviation Principle (LDP) for the distribution of degrees of vertices of the tree. The LDP rate function is given explicitly. An immediate consequence is a Law of Large Numbers for the distribution of vertex degrees in a large random tree. Our motivation for this study comes from the analysis of RNA secondary structures. PMID:20216937
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
The MIXMAX random number generator
NASA Astrophysics Data System (ADS)
Savvidy, Konstantin G.
2015-11-01
In this paper, we study the randomness properties of unimodular matrix random number generators. Under well-known conditions, these discrete-time dynamical systems have the highly desirable K-mixing properties which guarantee high-quality random numbers. It is found that some widely used random number generators have poor Kolmogorov entropy and consequently fail empirical tests of randomness; these tests show that the lowest acceptable value of the Kolmogorov entropy is around 50. Next, we provide a solution to the problem of determining the maximal period of unimodular matrix generators of pseudo-random numbers. We formulate the necessary and sufficient condition for attaining the maximum period and present a family of specific generators in the MIXMAX family with superior performance and excellent statistical properties. Finally, we construct three efficient algorithms for operations with the MIXMAX matrix, which is a multi-dimensional generalization of the famous cat map: the first computes multiplication by the MIXMAX matrix in O(N) operations; the second recursively computes its characteristic polynomial in O(N^2) operations; and the third applies skips of a large number of steps S to the sequence in O(N^2 log(S)) operations.
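As a toy illustration of the unimodular-matrix idea, the sketch below iterates Arnold's cat map, the N=2 ancestor of the MIXMAX family mentioned above, with a modulus and seeds chosen for illustration. This is not the actual MIXMAX matrix, its O(N) multiplication algorithm, or a generator of vetted statistical quality:

```python
def cat_map_stream(x, y, n, m=2**31 - 1):
    # Iterate (x, y) -> (x + y, x + 2y) mod m. The matrix
    # [[1, 1], [1, 2]] has determinant 1, i.e. it is unimodular,
    # so the map is volume-preserving on the torus.
    out = []
    for _ in range(n):
        x, y = (x + y) % m, (x + 2 * y) % m
        out.append(x / m)  # normalize to [0, 1)
    return out

u = cat_map_stream(12345, 67890, 10000)
mean = sum(u) / len(u)
print(f"mean of stream: {mean:.3f}")  # near 0.5 for a uniform-looking stream
```

The MIXMAX construction replaces this 2x2 matrix with a large N x N unimodular matrix whose Kolmogorov entropy can be made large, which is what the paper's empirical randomness criterion demands.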
Enhanced conformational sampling using enveloping distribution sampling.
Lin, Zhixiong; van Gunsteren, Wilfred F
2013-10-14
Alleviating the problem of insufficient conformational sampling in biomolecular simulations remains a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to neglect of relevant helical configurations. In the simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, and sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations, with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smooth the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
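The EDS reference Hamiltonian that envelops the end states has the standard log-sum-exp form H_R = -(βs)⁻¹ ln Σᵢ exp(-βs(Hᵢ - ΔEᵢ)), with smoothness parameter s and energy offsets ΔEᵢ. The function below is an illustrative sketch of that formula (not the GROMOS implementation):

```python
import math

def eds_reference_energy(energies, offsets, beta=1.0, s=1.0):
    """Reference-state energy enveloping several end states:
       H_R = -1/(beta*s) * ln( sum_i exp(-beta*s*(H_i - dE_i)) ).
    Computed with a log-sum-exp shift for numerical stability."""
    exps = [-beta * s * (e - de) for e, de in zip(energies, offsets)]
    m = max(exps)
    return -(m + math.log(sum(math.exp(x - m) for x in exps))) / (beta * s)
```

By construction H_R lies at or below the lowest offset-shifted end-state energy, which is what lets a single reference simulation visit the low-energy regions of both folds.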
Probing cell activity in random access modality
NASA Astrophysics Data System (ADS)
Sacconi, L.; Crocini, C.; Lotti, J.; Coppini, R.; Ferrantini, C.; Tesi, C.; Yan, P.; Loew, L. M.; Cerbai, E.; Poggesi, C.; Pavone, F. S.
2013-06-01
We combined the advantages of an ultrafast random access microscope with novel labelling technologies to study intra- and inter-cellular action potential propagation in neurons and cardiac myocytes with sub-millisecond time resolution. Random access microscopy was used in combination with a new fluorinated voltage-sensitive dye with improved photostability to record membrane potential from multiple Purkinje cells with near-simultaneous sampling. The RAMP system rapidly scanned between lines drawn in the membranes of neurons to perform multiplex measurements of the TPF signal. This recording was achieved by rapidly positioning the laser excitation with the AOD to sample a patch of membrane from each cell in <100 μs; when recording from five cells, multiplexing permits a temporal resolution of 400 μs, sufficient to capture every spike. The system is capable of recording spontaneous activity over 800 ms from five neighbouring cells simultaneously, showing that spiking is not temporally correlated. The system was also used to investigate the electrical properties of the transverse-axial tubular system (TATS) in isolated rat ventricular myocytes.
Sample design effects in landscape genetics
Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.
2012-01-01
An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), alleles per locus (5 and 10), individuals sampled (10-300), and generations elapsed after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration of sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
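The sampling regimes are easy to picture as rules for placing sample coordinates on a landscape. The sketch below (hypothetical function names and parameters, square landscape of side `size`) generates four of them; the single-study-site regime is the degenerate case of one cluster:

```python
import random

def random_sites(n, size, rng):
    """Random regime: n uniform points anywhere on the landscape."""
    return [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]

def linear_sites(n, size):
    """Linear regime: n evenly spaced points along a central transect."""
    return [(size * i / (n - 1), size / 2) for i in range(n)]

def systematic_sites(per_side, size):
    """Systematic regime: a regular grid of per_side x per_side points."""
    step = size / per_side
    return [(step * (i + 0.5), step * (j + 0.5))
            for i in range(per_side) for j in range(per_side)]

def cluster_sites(n_clusters, per_cluster, size, spread, rng):
    """Cluster regime: Gaussian clusters around random centres."""
    sites = []
    for _ in range(n_clusters):
        cx, cy = rng.uniform(0, size), rng.uniform(0, size)
        sites += [(rng.gauss(cx, spread), rng.gauss(cy, spread))
                  for _ in range(per_cluster)]
    return sites
```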
Remanent magnetization of lunar samples.
NASA Technical Reports Server (NTRS)
Strangway, D. W.; Pearce, G. W.; Gose, W. A.; Timme, R. W.
1971-01-01
The remanent magnetization of samples returned from the moon by the Apollo 11 and 12 missions consists, in most cases, of two distinct components. An unstable component is readily removed upon alternating field (AF) demagnetization in fields less than 100 Oe and is considered to be an isothermal remanence acquired during or after return to earth. The second component is unaltered by demagnetization in fields up to 400 Oe. It is probably a thermoremanent magnetization due to cooling from above 800 C in the presence of a field of a few thousand gammas. Chips from individual rocks have the same direction of magnetization after demagnetization, while the directions of different samples are random. This again demonstrates the high stability. Our data imply that the moon experienced a magnetic field that lasted at least from about 3.0 to 3.8 b.y., which is the age of Apollo 11 and 12 samples. One explanation of the origin of this field is that the moon had a liquid core and a self-exciting dynamo early in its history.
NASA Technical Reports Server (NTRS)
Carlson, I. C.
1978-01-01
Petrographic descriptions of all Apollo 14 samples larger than 1 cm in any dimension are presented. The sample description format consists of: (1) an introductory section which includes information on lunar sample location, orientation, and return containers; (2) a section on physical characteristics, which contains the sample mass, dimensions, and a brief description; (3) surface features, including zap pits, cavities, and fractures as seen in binocular view; (4) a petrographic description, consisting of a binocular description and, if possible, a thin section description; and (5) a discussion of literature relevant to sample petrology, included for samples which have previously been examined by the scientific community.
Nelson, Danny A.; Tomich, Stanley D.; Glover, Donald W.; Allen, Errol V.; Hales, Jeremy M.; Dana, Marshall T.
1991-01-01
The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of said precipitation from said chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device.
Nelson, D.A.; Tomich, S.D.; Glover, D.W.; Allen, E.V.; Hales, J.M.; Dana, M.T.
1991-05-14
The present invention constitutes a rain sampling device adapted for independent operation at locations remote from the user which allows rainfall to be sampled in accordance with any schedule desired by the user. The rain sampling device includes a mechanism for directing wet precipitation into a chamber, a chamber for temporarily holding the precipitation during the process of collection, a valve mechanism for controllably releasing samples of the precipitation from the chamber, a means for distributing the samples released from the holding chamber into vessels adapted for permanently retaining these samples, and an electrical mechanism for regulating the operation of the device. 11 figures.
Stardust Sample: Investigator's Guidebook
NASA Technical Reports Server (NTRS)
Allen, Carl
2006-01-01
In January 2006, the Stardust spacecraft returned the first in situ collection of samples from a comet, and the first samples of contemporary interstellar dust. Stardust is the first US sample return mission from a planetary body since Apollo, and the first ever from beyond the moon. This handbook is a basic reference source for allocation procedures and policies for Stardust samples. These samples consist of particles and particle residues in aerogel collectors, in aluminum foil, and in spacecraft components. Contamination control samples and unflown collection media are also available for allocation.
Heuristic-biased stochastic sampling
Bresina, J.L.
1996-12-31
This paper presents a search technique for scheduling problems, called Heuristic-Biased Stochastic Sampling (HBSS). The underlying assumption behind the HBSS approach is that strictly adhering to a search heuristic often does not yield the best solution and, therefore, exploration off the heuristic path can prove fruitful. Within the HBSS approach, the balance between heuristic adherence and exploration can be controlled according to the confidence one has in the heuristic. By varying this balance, encoded as a bias function, the HBSS approach encompasses a family of search algorithms of which greedy search and completely random search are extreme members. We present empirical results from an application of HBSS to the real-world problem of observation scheduling. These results show that with the proper bias function, it can be easy to outperform greedy search.
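The HBSS selection step can be sketched in a few lines (hypothetical function names): candidates are ranked by the heuristic, then one is drawn with probability proportional to a bias applied to rank, so a bias concentrated on rank 1 recovers greedy search and a flat bias recovers uniform random search.

```python
import random

def hbss_choose(candidates, heuristic, bias, rng):
    """Pick one candidate: rank by heuristic (lower is better), then
    sample with probability proportional to bias(rank)."""
    ranked = sorted(candidates, key=heuristic)
    weights = [bias(r + 1) for r in range(len(ranked))]
    threshold = rng.uniform(0, sum(weights))
    acc = 0.0
    for cand, w in zip(ranked, weights):
        acc += w
        if threshold <= acc:
            return cand
    return ranked[-1]

rng = random.Random(42)
# bias(r) = 1/r interpolates between the greedy and uniform extremes
choice = hbss_choose([5, 1, 3], heuristic=lambda x: x,
                     bias=lambda r: 1.0 / r, rng=rng)
```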
Is the Non-Dipole Magnetic Field Random?
NASA Technical Reports Server (NTRS)
Walker, Andrew D.; Backus, George E.
1996-01-01
Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
R. A. Fisher and his advocacy of randomization.
Hall, Nancy S
2007-01-01
The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, as good as FE-MRA in all cases, and better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
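The unrestricted WLS estimator at the heart of this argument is ordinary weighted least squares with inverse-variance weights, with the multiplicative dispersion estimated from the residuals rather than restricted to 1 (fixed effects) or converted into an additive variance component (random effects). A minimal numpy sketch of that estimator (not the authors' code):

```python
import numpy as np

def wls_mra(effects, ses, X):
    """Unrestricted WLS meta-regression.
    effects: (n,) estimated effect sizes; ses: (n,) their standard errors;
    X: (n, k) moderator matrix including an intercept column."""
    w = 1.0 / ses**2                      # inverse-variance weights
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
    resid = effects - X @ beta
    n, k = X.shape
    # "Unrestricted": the multiplicative dispersion is estimated, not fixed at 1.
    phi = (resid @ W @ resid) / (n - k)
    cov = phi * np.linalg.inv(X.T @ W @ X)
    return beta, np.sqrt(np.diag(cov))
```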
Maximum of the Characteristic Polynomial of Random Unitary Matrices
NASA Astrophysics Data System (ADS)
Arguin, Louis-Pierre; Belius, David; Bourgade, Paul
2017-01-01
It was recently conjectured by Fyodorov, Hiary and Keating that the maximum of the characteristic polynomial on the unit circle of an N×N random unitary matrix sampled from the Haar measure grows like CN/(log N)^(3/4) for some random variable C. In this paper, we verify the leading order of this conjecture, that is, we prove that with high probability the maximum lies in the range [N^(1-ε), N^(1+ε)] for arbitrarily small ε. The method is based on identifying an approximate branching random walk in the Fourier decomposition of the characteristic polynomial, and uses techniques developed to describe the extremes of branching random walks and of other log-correlated random fields. A key technical input is the asymptotic analysis of Toeplitz determinants with dimension-dependent symbols. The original argument for these asymptotics followed the general idea that the statistical mechanics of 1/f-noise random energy models is governed by a freezing transition. We also prove the conjectured freezing of the free energy for random unitary matrices.
Initial data sampling in design optimization
NASA Astrophysics Data System (ADS)
Southall, Hugh L.; O'Donnell, Terry H.
2011-06-01
Evolutionary computation (EC) techniques in design optimization such as genetic algorithms (GA) or efficient global optimization (EGO) require an initial set of data samples (design points) to start the algorithm. They are obtained by evaluating the cost function at selected sites in the input space. A two-dimensional input space can be sampled using a Latin square, a statistical sampling technique which samples a square grid such that there is a single sample in any given row and column. The Latin hypercube is a generalization to any number of dimensions. However, a standard random Latin hypercube can result in initial data sets which may be highly correlated and may not have good space-filling properties. There are techniques which address these issues. We describe and use one technique in this paper.
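A standard random Latin hypercube is straightforward to generate (a minimal stdlib sketch below); the correlation and space-filling problems the authors mention arise because the random pairing of strata across dimensions is unconstrained.

```python
import random

def latin_hypercube(n, d, rng):
    """n samples in [0,1)^d: each dimension is divided into n equal
    strata, exactly one sample falls in each stratum, and strata are
    paired across dimensions by independent random permutations."""
    perms = [rng.sample(range(n), n) for _ in range(d)]
    return [tuple((perms[j][i] + rng.random()) / n for j in range(d))
            for i in range(n)]
```

Techniques such as maximin or correlation-reducing Latin hypercubes post-process or constrain these permutations to improve the space-filling properties.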
Random Effects: Variance Is the Spice of Life.
Jupiter, Daniel C
Covariates in regression analyses allow us to understand how independent variables of interest impact our dependent outcome variable. Often, we consider fixed effects covariates (e.g., gender or diabetes status) for which we examine subjects at each value of the covariate. We examine both men and women and, within each gender, examine both diabetic and nondiabetic patients. Occasionally, however, we consider random effects covariates for which we do not examine subjects at every value. For example, we examine patients from only a sample of hospitals and, within each hospital, examine both diabetic and nondiabetic patients. The random sampling of hospitals is in contrast to the complete coverage of all genders. In this column I explore the differences in meaning and analysis when thinking about fixed and random effects variables.
Obtaining representative ground water samples is important for site assessment and remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...
On Convergent Probability of a Random Walk
ERIC Educational Resources Information Center
Lee, Y.-F.; Ching, W.-K.
2006-01-01
This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrent relations is used to obtain the convergent probability of the random walk with different initial positions.
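The method of recurrence relations referred to here is the same one that solves the classical gambler's-ruin problem: for a walk on 0…n with up-probability p, the relation u_k = p·u_{k+1} + (1-p)·u_{k-1} with boundary values u_0 = 0 and u_n = 1 has the closed form below (an illustrative instance, not necessarily the exact walk in the note).

```python
def reach_probability(start, n, p):
    """Probability that a walk on 0..n, stepping +1 with probability p
    and -1 with probability 1-p, reaches n before 0, starting at 'start'.
    Closed-form solution of u_k = p*u_{k+1} + (1-p)*u_{k-1}."""
    if p == 0.5:
        return start / n          # symmetric case: linear in start
    r = (1 - p) / p
    return (1 - r**start) / (1 - r**n)
```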
EDITORIAL: Nano and random lasers
NASA Astrophysics Data System (ADS)
Wiersma, Diederik S.; Noginov, Mikhail A.
2010-02-01
The field of extreme miniature sources of stimulated emission represented by random lasers and nanolasers has gone through an enormous development in recent years. Random lasers are disordered optical structures in which light waves are both multiply scattered and amplified. Multiple scattering is a process that we all know very well from daily experience. Many familiar materials are actually disordered dielectrics and owe their optical appearance to multiple light scattering. Examples are white marble, white painted walls, paper, white flowers, etc. Light waves inside such materials perform random walks, that is they are scattered several times in random directions before they leave the material, and this gives it an opaque white appearance. This multiple scattering process does not destroy the coherence of the light. It just creates a very complex interference pattern (also known as speckle). Random lasers can be made of basically any disordered dielectric material by adding an optical gain mechanism to the structure. In practice this can be achieved with, for instance, laser dye that is dissolved in the material and optically excited by a pump laser. Alternative routes to incorporate gain are achieved using rare-earth or transition metal doped solid-state laser materials or direct band gap semiconductors. The latter can potentially be pumped electrically. After excitation, the material is capable of scattering light and amplifying it, and these two ingredients form the basis for a random laser. Random laser emission can be highly coherent, even in the absence of an optical cavity. The reason is that random structures can sustain optical modes that are spectrally narrow. This provides a spectral selection mechanism that, together with gain saturation, leads to coherent emission. A random laser can have a large number of (randomly distributed) modes that are usually strongly coupled. This means that many modes compete for the gain that is available in a random
Wave propagation through a random medium - The random slab problem
NASA Technical Reports Server (NTRS)
Acquista, C.
1978-01-01
The first-order smoothing approximation yields integral equations for the mean and the two-point correlation function of a wave in a random medium. A method is presented for the approximate solution of these equations that combines features of the eiconal approximation and of the Born expansion. This method is applied to the problem of reflection and transmission of a plane wave by a slab of a random medium. Both the mean wave and the covariance are calculated to determine the reflected and transmitted amplitudes and intensities.
Improved Sampling Method Reduces Isokinetic Sampling Errors.
ERIC Educational Resources Information Center
Karels, Gale G.
The particulate sampling system currently in use by the Bay Area Air Pollution Control District, San Francisco, California is described in this presentation for the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971. The method represents a practical, inexpensive tool that can…
NASA Astrophysics Data System (ADS)
Hu, Jun; Liu, Steven; Ji, Donghai; Li, Shanqiang
2016-07-01
In this paper, the co-design problem of filter and fault estimator is studied for a class of time-varying non-linear stochastic systems subject to randomly occurring nonlinearities and randomly occurring deception attacks. Two mutually independent random variables obeying the Bernoulli distribution are employed to characterize the phenomena of the randomly occurring nonlinearities and randomly occurring deception attacks, respectively. By using the augmentation approach, the co-design problem of the robust filter and fault estimator is converted into the recursive filter design problem. A new compensation scheme is proposed such that, for both randomly occurring nonlinearities and randomly occurring deception attacks, an upper bound of the filtering error covariance is obtained and such an upper bound is minimized by properly designing the filter gain at each sampling instant. Moreover, the explicit form of the filter gain is given based on the solution to two Riccati-like difference equations. It is shown that the proposed co-design algorithm is of a recursive form that is suitable for online computation. Finally, a simulation example is given to illustrate the usefulness of the developed filtering approach.
Cover times of random searches
NASA Astrophysics Data System (ADS)
Chupeau, Marie; Bénichou, Olivier; Voituriez, Raphaël
2015-10-01
How long must one undertake a random search to visit all sites of a given domain? This time, known as the cover time, is a key observable to quantify the efficiency of exhaustive searches, which require a complete exploration of an area and not only the discovery of a single target. Examples range from immune-system cells chasing pathogens to animals harvesting resources, from robotic exploration for cleaning or demining to the task of improving search algorithms. Despite its broad relevance, the cover time has remained elusive and so far explicit results have been scarce and mostly limited to regular random walks. Here we determine the full distribution of the cover time for a broad range of random search processes, including Lévy strategies, intermittent strategies, persistent random walks and random walks on complex networks, and reveal its universal features. We show that for all these examples the mean cover time can be minimized, and that the corresponding optimal strategies also minimize the mean search time for a single target, unambiguously pointing towards their robustness.
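For the simplest case, a symmetric walk on a ring, the cover-time distribution is easy to estimate by simulation (a sketch with assumed parameters; the paper's results cover far more general processes):

```python
import random

def cover_time_ring(n, rng):
    """One realization of the cover time of a symmetric nearest-neighbour
    random walk on a ring of n sites, started at site 0."""
    pos, visited, steps = 0, {0}, 0
    while len(visited) < n:
        pos = (pos + rng.choice((-1, 1))) % n
        visited.add(pos)
        steps += 1
    return steps

rng = random.Random(1)
times = [cover_time_ring(20, rng) for _ in range(200)]
mean_cover = sum(times) / len(times)
```

For the ring the exact mean cover time is known to be n(n-1)/2, so the empirical mean for n = 20 should hover around 190.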
ERIC Educational Resources Information Center
Stewart, Neil; Chater, Nick; Brown, Gordon D. A.
2006-01-01
We present a theory of decision by sampling (DbS) in which, in contrast with traditional models, there are no underlying psychoeconomic scales. Instead, we assume that an attribute's subjective value is constructed from a series of binary, ordinal comparisons to a sample of attribute values drawn from memory and is its rank within the sample. We…
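The core claim, that an attribute's subjective value is just its rank within a sample of comparison values drawn from memory, reduces to a one-line computation (an illustrative sketch with a hypothetical function name):

```python
def dbs_subjective_value(attribute, memory_sample):
    """Decision-by-sampling sketch: subjective value as relative rank,
    i.e. the fraction of binary ordinal comparisons against the memory
    sample that the attribute wins."""
    favorable = sum(attribute > other for other in memory_sample)
    return favorable / len(memory_sample)
```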
Developing Water Sampling Standards
ERIC Educational Resources Information Center
Environmental Science and Technology, 1974
1974-01-01
Participants in the D-19 symposium on aquatic sampling and measurement for water pollution assessment were informed that determining the extent of waste water stream pollution is not a cut and dry procedure. Topics discussed include field sampling, representative sampling from storm sewers, suggested sampler features and application of improved…
SAMPLING OF CONTAMINATED SITES
A critical aspect of characterization of the amount and species of contamination of a hazardous waste site is the sampling plan developed for that site. If the sampling plan is not thoroughly conceptualized before sampling takes place, then certain critical aspects of the limits o...
Allen, P.V.; Nimberger, M.; Ward, R.L.
1991-12-24
This patent describes a fluid sampling pump for withdrawing pressurized sample fluid from a flow line and for pumping a preselected quantity of sample fluid with each pump driving stroke from the pump to a sample vessel, the sampling pump including a pump body defining a pump bore therein having a central axis; a piston slideably moveable within the pump bore and having a fluid inlet end and an opposing operator end; a fluid sample inlet port open to sample fluid in the flow line; a fluid sample outlet port for transmitting fluid from the pump bore to the sample vessel; a line pressure port in fluid communication with sample fluid in the flow line; an inlet valve for selectively controlling sample fluid flow from the flow line through the fluid sample inlet port; an operator unit for periodically reciprocating the piston within the pump bore; and a controller for regulating the stroke of the piston within the pump bore, and thereby the quantity of fluid pumped with each pump driving stroke. It comprises a balanced check valve seat; a balanced check valve seal; a compression member; and a central plunger.
What Does a Random Line Look Like: An Experimental Study
ERIC Educational Resources Information Center
Turner, Nigel E.; Liu, Eleanor; Toneatto, Tony
2011-01-01
The study examined the perception of random lines by people with gambling problems compared to people without gambling problems. The sample consisted of 67 probable pathological gamblers and 46 people without gambling problems. Participants completed a number of questionnaires about their gambling and were then presented with a series of random…
The XXZ Heisenberg model on random surfaces
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Sedrakyan, A.
2013-09-01
We consider integrable models, or in general any model defined by an R-matrix, on random surfaces, which are discretized using random Manhattan lattices. The set of random Manhattan lattices is defined as the set dual to the lattice random surfaces embedded on a regular d-dimensional lattice. They can also be associated with the random graphs of multiparticle scattering nodes. As an example we formulate a random matrix model where the partition function reproduces the annealed average of the XXZ Heisenberg model over all random Manhattan lattices. A technique is presented which reduces the random matrix integration in the partition function to an integration over the matrix eigenvalues.
Masquelier, Donald A.
2004-02-10
A system for sampling air and collecting particulate of a predetermined particle size range. A low pass section has an opening of a preselected size for gathering the air but excluding particles larger than the sample particles. An impactor section is connected to the low pass section and separates the air flow into a bypass air flow that does not contain the sample particles and a product air flow that does contain the sample particles. A wetted-wall cyclone collector, connected to the impactor section, receives the product air flow and traps the sample particles in a liquid.
Rockballer Sample Acquisition Tool
NASA Technical Reports Server (NTRS)
Giersch, Louis R.; Cook, Brant T.
2013-01-01
It would be desirable to acquire rock and/or ice samples that extend below the surface of the parent rock or ice in extraterrestrial environments such as the Moon, Mars, comets, and asteroids. Such samples would allow measurements to be made further back into the geologic history of the rock, providing critical insight into the history of the local environment and the solar system. Such samples could also be necessary for sample return mission architectures that would acquire samples from extraterrestrial environments for return to Earth for more detailed scientific investigation.
Triangulation in Random Refractive Distortions.
Alterman, Marina; Schechner, Yoav Y; Swirski, Yohay
2017-03-01
Random refraction occurs in turbulence and through a wavy water-air interface. It creates distortion that changes in space, time and with viewpoint. Localizing objects in three dimensions (3D) despite this random distortion is important to some predators and also to submariners avoiding the salient use of periscopes. We take a multiview approach to this task. Refracted distortion statistics induce a probabilistic relation between any pixel location and a line of sight in space. Measurements of an object's random projection from multiple views and times lead to a likelihood function of the object's 3D location. The likelihood leads to estimates of the 3D location and its uncertainty. Furthermore, multiview images acquired simultaneously in a wide stereo baseline have uncorrelated distortions. This helps reduce the acquisition time needed for localization. The method is demonstrated in stereoscopic video sequences, both in a lab and a swimming pool.
Risk, randomness, crashes and quants
NASA Astrophysics Data System (ADS)
Farhadi, Alessio; Vvedensky, Dimitri
2003-03-01
Market movements, whether short-term fluctuations, long-term trends, or sudden surges or crashes, have an immense and widespread economic impact. These movements are suggestive of the complex behaviour seen in many non-equilibrium physical systems. Not surprisingly, therefore, the characterization of market behaviour presents an inviting challenge to the physical sciences and, indeed, many concepts and methods developed for modelling non-equilibrium natural phenomena have found fertile ground in financial settings. In this review, we begin with the simplest random process, the random walk, and, assuming no prior knowledge of markets, build up to the conceptual and computational machinery used to analyse and model the behaviour of financial systems. We then consider the evidence that calls into question several aspects of the random walk model of markets and discuss some ideas that have been put forward to account for the observed discrepancies. The application of all of these methods is illustrated with examples of actual market data.
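The simplest model in that build-up, a multiplicative (log-price) random walk, takes only a few lines (hypothetical parameter values; a sketch of the textbook model, not a market simulator):

```python
import math
import random

def random_walk_prices(p0, n_steps, drift, volatility, rng):
    """Log-price random walk: each step multiplies the price by
    exp(drift + volatility * Gaussian noise), so prices stay positive
    and log-returns are i.i.d. normal."""
    prices = [p0]
    for _ in range(n_steps):
        prices.append(prices[-1] * math.exp(drift + volatility * rng.gauss(0, 1)))
    return prices
```

The discrepancies the review discusses, such as fat-tailed return distributions and volatility clustering, are precisely the features this i.i.d. Gaussian model cannot reproduce.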
Withers, Mark R; McKinney, Denise; Ogutu, Bernhards R; Waitumbi, John N; Milman, Jessica B; Apollo, Odika J; Allen, Otieno G; Tucker, Kathryn; Soisson, Lorraine A; Diggs, Carter; Leach, Amanda; Wittes, Janet; Dubovsky, Filip; Stewart, V. Ann; Remich, Shon A; Cohen, Joe; Ballou, W. Ripley; Holland, Carolyn A; Lyon, Jeffrey A; Angov, Evelina; Stoute, José A; Martin, Samuel K; Heppner, D. Gray
2006-01-01
Objective: Our aim was to evaluate the safety, reactogenicity, and immunogenicity of an investigational malaria vaccine. Design: This was an age-stratified phase Ib, double-blind, randomized, controlled, dose-escalation trial. Children were recruited into one of three cohorts (dosage groups) and randomized in 2:1 fashion to receive either the test product or a comparator. Setting: The study was conducted in a rural population in Kombewa Division, western Kenya. Participants: Subjects were 135 children, aged 12–47 mo. Interventions: Subjects received 10, 25, or 50 μg of falciparum malaria protein 1 (FMP1) formulated in 100, 250, and 500 μL, respectively, of AS02A, or they received a comparator (Imovax® rabies vaccine). Outcome Measures: We assessed safety and reactogenicity parameters and adverse events during solicited (7 d) and unsolicited (30 d) periods after each vaccination. Serious adverse events were monitored for 6 mo after the last vaccination. Results: Both vaccines were safe and well tolerated. FMP1/AS02A recipients experienced significantly more pain and injection-site swelling, with a dose-effect relationship. Systemic reactogenicity was low at all dose levels. Hemoglobin levels remained stable and similar across arms. Baseline geometric mean titers were comparable in all groups. Anti-FMP1 antibody titers increased in a dose-dependent manner in subjects receiving FMP1/AS02A; no increase in anti-FMP1 titers occurred in subjects who received the comparator. By study end, subjects who received either 25 or 50 μg of FMP1 had similar antibody levels, which remained significantly higher than those of subjects who received the comparator or 10 μg of FMP1. A longitudinal mixed effects model showed a statistically significant effect of dosage level on immune response (F(3,1047) = 10.78 or F(3,995) = 11.22, p < 0.001); however, the comparison of 25 μg and 50 μg recipients indicated no significant difference (F(1,1047) = 0.05; p = 0.82). Conclusions
Sample Proficiency Test exercise
Alcaraz, A; Gregg, H; Koester, C
2006-02-05
In the current format of the OPCW proficiency tests, multiple sets of two samples are sent to an analysis laboratory. In each sample set, one is identified as a sample and the other as a blank. This method of conducting proficiency tests differs from how an OPCW designated laboratory would receive authentic samples (a set of three containers, none identified, consisting of the authentic sample, a control sample, and a blank sample). This exercise was designed to test how reporting would proceed if the proficiency tests were conducted under that more realistic scheme. As such, this is not an official OPCW proficiency test, and the attached report is one method by which LLNL might report its analyses under a more realistic testing scheme. Therefore, the title on the report, ''Report of the Umpteenth Official OPCW Proficiency Test,'' is meaningless and provides a bit of whimsy for the analysts and readers of the report.
Noordzij, Marlies; Dekker, Friedo W; Zoccali, Carmine; Jager, Kitty J
2011-01-01
The sample size is the number of patients or other experimental units that need to be included in a study to answer the research question. Pre-study calculation of the sample size is important; if a sample size is too small, one will not be able to detect an effect, while a sample that is too large may be a waste of time and money. Methods to calculate the sample size are explained in statistical textbooks, but because there are many different formulas available, it can be difficult for investigators to decide which method to use. Moreover, these calculations are prone to errors, because small changes in the selected parameters can lead to large differences in the sample size. This paper explains the basic principles of sample size calculations and demonstrates how to perform such a calculation for a simple study design.
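The kind of pre-study calculation the abstract refers to can be sketched for the simplest case, comparing two means, using the standard normal-approximation formula. The function name and example numbers below are illustrative, not taken from the paper:

```python
import math
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means.

    delta: smallest difference in means worth detecting
    sigma: assumed common standard deviation
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# Detect a 10-unit difference with SD 20, 5% two-sided alpha, 80% power:
print(sample_size_two_means(10, 20))  # 63 per group
```

Halving the detectable difference roughly quadruples the required sample size, which illustrates the sensitivity to parameter choices that the authors warn about.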
NASA Technical Reports Server (NTRS)
Fletcher, L. A.; Allen, C. C.; Bastien, R.
2008-01-01
NASA's Johnson Space Center (JSC) and the Astromaterials Curator are charged by NPD 7100.10D with the curation of all of NASA's extraterrestrial samples, including those from future missions. This responsibility includes the development of new sample handling and preparation techniques; therefore, the Astromaterials Curator must begin developing procedures to preserve, prepare and ship samples at sub-freezing temperatures in order to enable future sample return missions. Such missions might include the return of future frozen samples from permanently-shadowed lunar craters, the nuclei of comets, the surface of Mars, etc. We are demonstrating the ability to curate samples under cold conditions by designing, installing and testing a cold curation glovebox. This glovebox will allow us to store, document, manipulate and subdivide frozen samples while quantifying and minimizing contamination throughout the curation process.
NASA Technical Reports Server (NTRS)
Meyer, Charles
2009-01-01
The Lunar Sample Compendium is a succinct summary of the data obtained from 40 years of study of Apollo and Luna samples of the Moon. Basic petrographic, chemical and age information is compiled, sample-by-sample, in the form of an advanced catalog in order to provide a basic description of each sample. The LSC can be found online using Google. The initial allocation of lunar samples was done sparingly, because it was realized that scientific techniques would improve over the years and new questions would be formulated. The LSC is important because it enables scientists to select samples within the context of the work that has already been done and facilitates better review of proposed allocations. It also provides backup material for public displays, captures information found only in abstracts, grey literature, and curatorial databases, and provides ready access to the now-vast scientific literature.
NASA Technical Reports Server (NTRS)
Meyer, C.
2009-01-01
The Lunar Sample Compendium is a succinct summary of what has been learned from the study of Apollo and Luna samples of the Moon. Basic information is compiled, sample-by-sample, in the form of an advanced catalog in order to provide a basic description of each sample. Information presented is carefully attributed to the original source publication; thus the Compendium also provides ready access to the now-vast scientific literature pertaining to lunar samples. The Lunar Sample Compendium is a work in progress (and may always be). Future plans include: adding sections on additional samples, adding new thin section photomicrographs, replacing the faded photographs with newly digitized photos from the original negatives, attempting to correct the age data using modern decay constants, adding references to each section, and adding an internal search engine.
Fragmentation of Fractal Random Structures
NASA Astrophysics Data System (ADS)
Elçi, Eren Metin; Weigel, Martin; Fytas, Nikolaos G.
2015-03-01
We analyze the fragmentation behavior of random clusters on the lattice under a process where bonds between neighboring sites are successively broken. Modeling such structures by configurations of a generalized Potts or random-cluster model allows us to discuss a wide range of systems with fractal properties including trees as well as dense clusters. We present exact results for the densities of fragmenting edges and the distribution of fragment sizes for critical clusters in two dimensions. Dynamical fragmentation with a size cutoff leads to broad distributions of fragment sizes. The resulting power laws are shown to encode characteristic fingerprints of the fragmented objects.
Random matrix theory within superstatistics.
Abul-Magd, A Y
2005-12-01
We propose a generalization of the random matrix theory following the basic prescription of the recently suggested concept of superstatistics. Spectral characteristics of systems with mixed regular-chaotic dynamics are expressed as weighted averages of the corresponding quantities in the standard theory assuming that the mean level spacing itself is a stochastic variable. We illustrate the method by calculating the level density, the nearest-neighbor-spacing distributions, and the two-level correlation functions for systems in transition from order to chaos. The calculated spacing distribution fits the resonance statistics of random binary networks obtained in a recent numerical experiment.
Neutron transport in random media
Makai, M.
1996-08-01
The survey reviews the methods available in the literature which allow a discussion of corium recriticality after a severe accident and a characterization of the corium. It appears that to date no one has considered the eigenvalue problem, though for the source problem several approaches have been proposed. The mathematical formulation of a random medium may be approached in different ways. Based on the review of the literature, we can draw three basic conclusions. The problem of static, random perturbations has been solved. The static case is tractable by the Monte Carlo method. There is a specific time dependent case for which the average flux is given as a series expansion.
Molecular random tilings as glasses
Garrahan, Juan P.; Stannard, Andrew; Blunt, Matthew O.; Beton, Peter H.
2009-01-01
We have recently shown that p-terphenyl-3,5,3′,5′-tetracarboxylic acid adsorbed on graphite self-assembles into a two-dimensional rhombus random tiling. This tiling is close to ideal, displaying long-range correlations punctuated by sparse localized tiling defects. In this article we explore the analogy between dynamic arrest in this type of random tilings and that of structural glasses. We show that the structural relaxation of these systems is via the propagation–reaction of tiling defects, giving rise to dynamic heterogeneity. We study the scaling properties of the dynamics and discuss connections with kinetically constrained models of glasses. PMID:19720990
Synchronizability of random rectangular graphs
Estrada, Ernesto; Chen, Guanrong
2015-08-15
Random rectangular graphs (RRGs) represent a generalization of random geometric graphs in which the nodes are embedded in hyperrectangles instead of hypercubes. The synchronizability of the RRG model is studied. Both upper and lower bounds on the eigenratio of the network Laplacian matrix are determined analytically. It is proven that the more elongated the rectangular network is, the harder it becomes to synchronize. The synchronization behavior of an RRG network of chaotic Lorenz system nodes is numerically investigated, showing complete consistency with the theoretical results.
Urine sample collection protocols for bioassay samples
MacLellan, J.A.; McFadden, K.M.
1992-11-01
In vitro radiobioassay analyses are used to measure the amount of radioactive material excreted by personnel exposed to the potential intake of radioactive material. The analytical results are then used with various metabolic models to estimate the amount of radioactive material in the subject's body and the original intake of radioactive material. Proper application of these metabolic models requires knowledge of the excretion period. It is normal practice to design the bioassay program based on a 24-hour excretion sample. The Hanford bioassay program simulates a total 24-hour urine excretion sample with urine collection periods lasting from one-half hour before retiring to one-half hour after rising on two consecutive days. Urine passed during the specified periods is collected in three 1-L bottles. Because the daily excretion volume given in Publication 23 of the International Commission on Radiological Protection (ICRP 1975, p. 354) for Reference Man is 1.4 L, it was proposed to use only two 1-L bottles as a cost-saving measure. This raised the broader question of what should be the design capacity of a 24-hour urine sample kit.
Soil Sampling Techniques For Alabama Grain Fields
NASA Technical Reports Server (NTRS)
Thompson, A. N.; Shaw, J. N.; Mask, P. L.; Touchton, J. T.; Rickman, D.
2003-01-01
Characterizing the spatial variability of nutrients facilitates precision soil sampling. Questions exist regarding the best technique for directed soil sampling based on a priori knowledge of soil and crop patterns. The objective of this study was to evaluate zone delineation techniques for Alabama grain fields to determine which method best minimized the soil test variability. Site one (25.8 ha) and site three (20.0 ha) were located in the Tennessee Valley region, and site two (24.2 ha) was located in the Coastal Plain region of Alabama. Tennessee Valley soils ranged from well drained Rhodic and Typic Paleudults to somewhat poorly drained Aquic Paleudults and Fluventic Dystrudepts. Coastal Plain soils ranged from coarse-loamy Rhodic Kandiudults to loamy Arenic Kandiudults. Soils were sampled by grid soil sampling methods (grid sizes of 0.40 ha and 1 ha) consisting of: 1) twenty composited cores collected randomly throughout each grid (grid-cell sampling) and 2) six composited cores collected randomly from a 3 x 3 m area at the center of each grid (grid-point sampling). Zones were established from 1) an Order 1 Soil Survey, 2) corn (Zea mays L.) yield maps, and 3) airborne remote sensing images. All soil properties were moderately to strongly spatially dependent as per semivariogram analyses. Differences in grid-point and grid-cell soil test values suggested grid-point sampling does not accurately represent grid values. Zones created by soil survey, yield data, and remote sensing images displayed lower coefficients of variation (CV) for soil test values than overall field values, suggesting these techniques group soil test variability. However, few differences were observed between the three zone delineation techniques. Results suggest directed sampling using zone delineation techniques outlined in this paper would result in more efficient soil sampling for these Alabama grain fields.
Sampling considerations for disease surveillance in wildlife populations
Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.
2008-01-01
Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
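The selection bias the authors describe can be illustrated with a toy simulation in which disease prevalence differs between habitat strata and a convenience sample is drawn only from the accessible stratum. The strata, sizes, and prevalences below are made up for illustration and are not the paper's landscape model:

```python
import random

random.seed(1)

# Hypothetical population: 10,000 deer in two habitat strata; disease is
# spatially clustered, so prevalence differs sharply between strata.
population = ([("accessible", random.random() < 0.01) for _ in range(7000)] +
              [("remote",     random.random() < 0.20) for _ in range(3000)])
true_prev = sum(d for _, d in population) / len(population)

# Probability sample: a simple random sample of 500 animals.
srs = random.sample(population, 500)
srs_prev = sum(d for _, d in srs) / len(srs)

# Convenience sample: 500 animals, but only from the accessible stratum.
accessible = [rec for rec in population if rec[0] == "accessible"]
conv_prev = sum(d for _, d in random.sample(accessible, 500)) / 500

print(f"true={true_prev:.3f}  srs={srs_prev:.3f}  convenience={conv_prev:.3f}")
```

With these numbers the convenience estimate sits near the accessible-stratum prevalence and badly understates the population prevalence, while the probability sample tracks it.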
Equilibrium molecular thermodynamics from Kirkwood sampling.
Somani, Sandeep; Okamoto, Yuko; Ballard, Andrew J; Wales, David J
2015-05-21
We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, where Kirkwood sampling is used for generating trial Monte Carlo moves. Using this method, equilibrium distributions corresponding to different temperatures and potential energy functions can be generated from a given set of low-order correlations. Since Kirkwood samples are generated independently, this method is ideally suited for massively parallel distributed computing. The second approach is a variant of reservoir replica exchange, where Kirkwood sampling is used to construct a reservoir of conformations, which exchanges conformations with the replicas performing equilibrium sampling corresponding to different thermodynamic states. Coupling with the Kirkwood reservoir enhances sampling by facilitating global jumps in the conformational space. The efficiency of both methods depends on the overlap of the Kirkwood distribution with the target equilibrium distribution. We present proof-of-concept results for a model nine-atom linear molecule and alanine dipeptide.
A number of articles have investigated the impact of sampling design on remotely sensed landcover accuracy estimates. Gong and Howarth (1990) found significant differences for Kappa accuracy values when comparing purepixel sampling, stratified random sampling, and stratified sys...
Sampling Variability and Axioms of Classical Test Theory
ERIC Educational Resources Information Center
Zimmerman, Donald W.
2011-01-01
Many well-known equations in classical test theory are mathematical identities in populations of individuals but not in random samples from those populations. First, test scores are subject to the same sampling error that is familiar in statistical estimation and hypothesis testing. Second, the assumptions made in derivation of formulas in test…
An efficient sampling protocol for sagebrush/grassland monitoring
Technology Transfer Automated Retrieval System (TEKTRAN)
Monitoring the health and condition of rangeland vegetation can be very time consuming and costly. An efficient but rigorous sampling protocol is needed for monitoring sagebrush/grassland vegetation. A randomized sampling protocol was presented for geo-referenced, nadir photographs acquired using...
Sampling the Experience of Chronically Aggressive Psychiatric Inpatients.
ERIC Educational Resources Information Center
Waite, Bradley M.
1994-01-01
Studies the application of the Experience Sampling Method (ESM) to chronically aggressive psychiatric inpatients. ESM allows for the sampling of behavior, thoughts, and feelings of persons across time and situations by signalling subjects to record these aspects using a questionnaire at random times. (JPS)
Rational Variability in Children's Causal Inferences: The Sampling Hypothesis
ERIC Educational Resources Information Center
Denison, Stephanie; Bonawitz, Elizabeth; Gopnik, Alison; Griffiths, Thomas L.
2013-01-01
We present a proposal--"The Sampling Hypothesis"--suggesting that the variability in young children's responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with…
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pecans in the... sample shall consist of 100 pecans. The individual sample shall be drawn at random from a...
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PRODUCTS 1 2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pecans in the... sample shall consist of 100 pecans. The individual sample shall be drawn at random from a...
Wright, T.
1993-03-01
When attributes are rare and few or none are observed in the selected sample from a finite universe, sampling statisticians are increasingly being challenged to use whatever methods are available to declare with high probability or confidence that the universe is near or completely attribute-free. This is especially true when the attribute is undesirable. Approximations such as those based on normal theory are frequently inadequate with rare attributes. For simple random sampling without replacement, an appropriate probability distribution for statistical inference is the hypergeometric distribution. But even with the hypergeometric distribution, the investigator is limited from making claims of attribute-free with high confidence unless the sample size is quite large using nonrandomized techniques. In the hypergeometric setting with rare attributes, exact randomized tests of hypothesis are investigated to determine the effect on power of how one specifies the null hypothesis. In particular, specifying the null hypothesis as zero attributes does not always yield maximum possible power. We also consider the hypothesis specification question under complex sampling designs including stratified random sampling and two-stage cluster sampling (one case involves random selection at the first stage and another case involves probability-proportional-to-size without-replacement selection at the first stage). Also under simple random sampling, this article defines and presents a simple algorithm for the construction of exact "randomized" upper confidence bounds which permit one to possibly report tighter bounds than those exact bounds obtained using "nonrandomized" methods.
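The conventional nonrandomized exact upper bound discussed here can be computed directly from the hypergeometric distribution. The sketch below uses an illustrative universe and sample size, and implements only the conservative nonrandomized baseline, not the article's randomized procedure (which can tighten these bounds):

```python
from math import comb

def hyper_cdf(x, N, D, n):
    """P(X <= x) when X counts attribute units in a simple random sample
    of n drawn without replacement from a universe of N containing D."""
    return sum(comb(D, k) * comb(N - D, n - k)
               for k in range(x + 1) if n - k <= N - D) / comb(N, n)

def exact_upper_bound(x, N, n, alpha=0.05):
    """Nonrandomized exact upper 100(1-alpha)% confidence bound on D:
    the largest D still consistent with observing <= x attribute units."""
    D = x
    while D < N and hyper_cdf(x, N, D + 1, n) >= alpha:
        D += 1
    return D

# Zero attribute units observed in a sample of 50 from a universe of 500:
print(exact_upper_bound(0, 500, 50))
```

Even with no attribute units observed, the bound remains well above zero unless the sampling fraction is large, which is the article's motivation for randomized bounds.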
SAS procedures for designing and analyzing sample surveys
Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.
2003-01-01
Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
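The stratified estimator mentioned among the candidate strategies weights each stratum's sample mean by that stratum's share of the population. A small worked sketch with made-up survey numbers (not from the article):

```python
# Hypothetical two-stratum survey (stratum sizes N_h and sampled values,
# e.g. animal densities per plot):
strata = {
    "flooded_forest": {"N": 4000, "sample": [2.1, 3.4, 2.8, 3.0]},
    "open_marsh":     {"N": 1000, "sample": [8.2, 7.5, 9.1, 8.0]},
}
N_total = sum(s["N"] for s in strata.values())

# Stratified estimator of the population mean: weight each stratum's
# sample mean by its population share N_h / N.
ybar_st = sum(s["N"] / N_total * (sum(s["sample"]) / len(s["sample"]))
              for s in strata.values())
print(round(ybar_st, 3))  # 3.9
```

An unweighted mean of the eight observations would be pulled toward the heavily sampled marsh values; the N_h/N weights undo that disproportionate sampling.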
Neither fixed nor random: weighted least squares meta-analysis.
Stanley, T D; Doucouliagos, Hristos
2015-06-15
This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
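The unrestricted WLS weighted average shares its point estimate with the fixed-effect average but estimates a multiplicative dispersion term from the data rather than fixing it at 1. A simplified sketch, with illustrative effect sizes and standard errors rather than the authors' simulation code:

```python
import math

def wls_meta(effects, ses):
    """Unrestricted weighted least squares (WLS) meta-analytic average:
    inverse-variance point estimate with a multiplicative dispersion
    term estimated from the data instead of being fixed at 1."""
    w = [1.0 / s ** 2 for s in ses]
    est = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    k = len(effects)
    q = sum(wi * (y - est) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    phi = q / (k - 1)                 # multiplicative heterogeneity factor
    se = math.sqrt(phi / sum(w))      # WLS standard error
    return est, se

# Illustrative study-level effects and standard errors (not from the paper):
effects = [0.10, 0.25, 0.40, 0.15, 0.30]
ses = [0.05, 0.10, 0.20, 0.08, 0.12]
est, se = wls_meta(effects, ses)
print(round(est, 3), round(se, 4))
```

When there is no excess heterogeneity, phi is near 1 and the WLS interval matches the fixed-effect one; with heterogeneity, phi > 1 widens the interval multiplicatively.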
Random potentials and cosmological attractors
NASA Astrophysics Data System (ADS)
Linde, Andrei
2017-02-01
I show that the problem of realizing inflation in theories with random potentials of a limited number of fields can be solved, and agreement with the observational data can be naturally achieved if at least one of these fields has a non-minimal kinetic term of the type used in the theory of cosmological α-attractors.
Structure of random monodisperse foam
NASA Astrophysics Data System (ADS)
Kraynik, Andrew M.; Reinelt, Douglas A.; van Swol, Frank
2003-03-01
The Surface Evolver was used to calculate the equilibrium microstructure of random monodisperse soap froth, starting from Voronoi partitions of randomly packed spheres. The sphere packing has a strong influence on foam properties, such as E (surface free energy) and
Plated wire random access memories
NASA Technical Reports Server (NTRS)
Gouldin, L. D.
1975-01-01
A program was conducted to construct 4096-word by 18-bit random-access, NDRO plated-wire memory units. The memory units were subjected to comprehensive functional and environmental tests at the end-item level to verify conformance with the specified requirements. A technical description of the unit is given, along with acceptance test data sheets.
Common Randomness Principles of Secrecy
ERIC Educational Resources Information Center
Tyagi, Himanshu
2013-01-01
This dissertation concerns the secure processing of distributed data by multiple terminals, using interactive public communication among themselves, in order to accomplish a given computational task. In the setting of a probabilistic multiterminal source model in which several terminals observe correlated random signals, we analyze secure…
NASA Astrophysics Data System (ADS)
Redding, B.; Liew, S. F.; Sarma, R.; Cao, H.
2014-05-01
Spectrometers are widely used tools in chemical and biological sensing, material analysis, and light source characterization. The development of a high-resolution on-chip spectrometer could enable compact, low-cost spectroscopy for portable sensing as well as increasing lab-on-a-chip functionality. However, the spectral resolution of traditional grating-based spectrometers scales with the optical pathlength, which translates to the linear dimension or footprint of the system, which is limited on-chip. In this work, we utilize multiple scattering in a random photonic structure fabricated on a silicon chip to fold the optical path, making the effective pathlength much longer than the linear dimension of the system and enabling high spectral resolution with a small footprint. Of course, the random spectrometer also requires a different operating paradigm, since different wavelengths are not spatially separated by the random structure, as they would be by a grating. Instead, light transmitted through the random structure produces a wavelength-dependent speckle pattern which can be used as a fingerprint to identify the input spectra after calibration. In practice, these wavelength-dependent speckle patterns are experimentally measured and stored in a transmission matrix, which describes the spectral-to-spatial mapping of the spectrometer. After calibrating the transmission matrix, an arbitrary input spectrum can be reconstructed from its speckle pattern. We achieved sub-nm resolution with 25 nm bandwidth at a wavelength of 1500 nm using a scattering medium with largest dimension of merely 50 μm.
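The calibrate-then-invert procedure described above can be sketched with a simulated transmission matrix, where random numbers stand in for the measured speckle patterns and the dimensions are arbitrary (this is an illustration of the operating principle, not the authors' reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bins = 200, 40  # speckle pixels x spectral channels

# Calibration: the transmission matrix stores the speckle pattern
# measured for each probe wavelength, one column per spectral channel.
T = rng.random((n_pixels, n_bins))

# Unknown input: two narrow spectral lines, plus a little detector noise.
s_true = np.zeros(n_bins)
s_true[10], s_true[25] = 1.0, 0.6
speckle = T @ s_true + 0.001 * rng.standard_normal(n_pixels)

# Reconstruction: invert the spectral-to-spatial mapping by least squares.
s_hat, *_ = np.linalg.lstsq(T, speckle, rcond=None)
print(int(np.argmax(s_hat)))  # strongest recovered line is channel 10
```

Because the system is heavily overdetermined (many more speckle pixels than spectral channels), the least-squares inversion is robust to modest measurement noise.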
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1) squared sampling points on a sphere is given, for which the (N + 1) squared spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
Sampling in Qualitative Research
LUBORSKY, MARK R.; RUBINSTEIN, ROBERT L.
2011-01-01
In gerontology the most recognized and elaborate discourse about sampling is generally thought to be in quantitative research associated with survey research and medical research. But sampling has long been a central concern in the social and humanistic inquiry, albeit in a different guise suited to the different goals. There is a need for more explicit discussion of qualitative sampling issues. This article will outline the guiding principles and rationales, features, and practices of sampling in qualitative research. It then describes common questions about sampling in qualitative research. In conclusion it proposes the concept of qualitative clarity as a set of principles (analogous to statistical power) to guide assessments of qualitative sampling in a particular study or proposal. PMID:22058580
Using Compton scattering for random coincidence rejection
NASA Astrophysics Data System (ADS)
Kolstein, M.; Chmeissani, M.
2016-12-01
The Voxel Imaging PET (VIP) project presents a new approach for the design of nuclear medicine imaging devices by using highly segmented pixel CdTe sensors. CdTe detectors can achieve an energy resolution of ≈ 1% FWHM at 511 keV and can be easily segmented into submillimeter sized voxels for optimal spatial resolution. These features help in rejecting a large part of the scattered events from the PET coincidence sample in order to obtain high quality images. Another contribution to the background are random events, i.e., hits caused by two independent gammas without a common origin. Given that 60% of 511 keV photons undergo Compton scattering in CdTe (i.e. 84% of all coincidence events have at least one Compton scattering gamma), we present a simulation study on the possibility of using the Compton scattering information of at least one of the coincident gammas within the detector to reject random coincidences. The idea uses the fact that if a gamma undergoes Compton scattering in the detector, it will cause two hits in the pixel detectors. The first hit corresponds to the Compton scattering process. The second hit shall correspond to the photoelectric absorption of the remaining energy of the gamma. With the energy deposition of the first hit, one can calculate the Compton scattering angle. By measuring the hit location of the coincident gamma, we can construct the geometric angle, under the assumption that both gammas come from the same origin. Using the difference between the Compton scattering angle and the geometric angle, random events can be rejected.
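The kinematic step, recovering the scattering angle from the first hit's energy deposit, follows from the Compton relation cos θ = 1 − m_e c² (1/E′ − 1/E) with E′ = E − E_deposited. A minimal sketch of that calculation (not the VIP simulation code):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_angle(e_gamma, e_deposited):
    """Scattering angle (degrees) implied by the energy deposited in the
    first hit: cos(theta) = 1 - me*c^2 * (1/E' - 1/E), E' = E - deposit."""
    e_scattered = e_gamma - e_deposited
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_gamma)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energy deposit not kinematically allowed")
    return math.degrees(math.acos(cos_theta))

# A 511 keV gamma depositing 170 keV in its first hit scatters by ~60 deg:
print(round(compton_angle(511.0, 170.0), 1))
```

Comparing this kinematic angle with the geometric angle between the hit positions, under the assumed common origin, is the rejection criterion the abstract describes.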
Sample positioning in microgravity
NASA Technical Reports Server (NTRS)
Sridharan, Govind (Inventor)
1993-01-01
Repulsion forces arising from laser beams are provided to produce mild positioning forces on a sample in microgravity vacuum environments. The system of the preferred embodiment positions samples using a plurality of pulsed lasers providing opposing repulsion forces. The lasers are positioned around the periphery of a confinement area and expanded to create a confinement zone. The grouped laser configuration, in coordination with position sensing devices, creates a feedback servo whereby stable position control of a sample within microgravity environment can be achieved.
Statistical distribution sampling
NASA Technical Reports Server (NTRS)
Johnson, E. S.
1975-01-01
Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
Assessing the Generalizability of Randomized Trial Results to Target Populations
Stuart, Elizabeth A.; Bradshaw, Catherine P.; Leaf, Philip J.
2014-01-01
Recent years have seen increasing interest in and attention to evidence-based practices, where the “evidence” generally comes from well-conducted randomized trials. However, while those trials yield accurate estimates of the effect of the intervention for the participants in the trial (known as “internal validity”), they do not always yield relevant information about the effects in a particular target population (known as “external validity”). This may be due to a lack of specification of a target population when designing the trial, difficulties recruiting a sample that is representative of a pre-specified target population, or to interest in considering a target population somewhat different from the population directly targeted by the trial. This paper first provides an overview of existing design and analysis methods for assessing and enhancing the ability of a randomized trial to estimate treatment effects in a target population. It then provides a case study using one particular method, which weights the subjects in a randomized trial to match the population on a set of observed characteristics. The case study uses data from a randomized trial of School-wide Positive Behavioral Interventions and Supports (PBIS); our interest is in generalizing the results to the state of Maryland. In the case of PBIS, after weighting, estimated effects in the target population were similar to those observed in the randomized trial. The paper illustrates that statistical methods can be used to assess and enhance the external validity of randomized trials, making the results more applicable to policy and clinical questions. However, there are also many open research questions; future research should focus on questions of treatment effect heterogeneity and further developing these methods for enhancing external validity. Researchers should think carefully about the external validity of randomized trials and be cautious about extrapolating results to specific
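The reweighting idea can be sketched with a toy post-stratification example. This is not the PBIS analysis: the numbers and the single binary moderator are invented for clarity, and real applications weight on many covariates (e.g., via propensity-type models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial: a binary moderator x is over-represented relative to
# the target population, and the treatment effect differs by x.
n = 1000
x = rng.binomial(1, 0.7, n)                  # 70% of trial participants have x=1
y = np.where(x == 1, 2.0, 1.0) + rng.normal(0.0, 0.1, n)  # unit-level effects

p_target = 0.3                               # only 30% of the target pop has x=1

# Post-stratification weights: target share / trial share, per stratum
w = np.where(x == 1, p_target / x.mean(), (1 - p_target) / (1 - x.mean()))

trial_estimate = y.mean()                    # reflects the trial's 70/30 mix
target_estimate = np.average(y, weights=w)   # reweighted to the 30/70 mix
```

Here the trial's average effect overstates the effect in the target population because the high-response stratum is over-sampled; the weights restore the population mix.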
A Mixed Effects Randomized Item Response Model
ERIC Educational Resources Information Center
Fox, J.-P.; Wyrick, Cheryl
2008-01-01
The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…
Ultra-fast Quantum Random Number Generator
NASA Astrophysics Data System (ADS)
Yicheng, Shi
We describe a series of Randomness Extractors for removing bias and residual correlations in random numbers generated from measurements on noisy physical systems. The structures of the randomness extractors are based on Linear Feedback Shift Registers (LFSR). This leads to a significant simplification in the implementation of randomness extractors.
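As an illustration of the building block involved (a sketch of an LFSR only, not of the paper's extractor construction), a maximal-length 16-bit Fibonacci LFSR takes a few lines; the tap positions 16, 14, 13, 11 below are one standard primitive-polynomial choice:

```python
def lfsr16_step(state):
    """One step of a 16-bit Fibonacci LFSR with taps 16, 14, 13, 11
    (polynomial x^16 + x^14 + x^13 + x^11 + 1, maximal length)."""
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def lfsr_stream(seed, n):
    """Emit n output bits (the LSB at each step)."""
    out, s = [], seed
    for _ in range(n):
        out.append(s & 1)
        s = lfsr16_step(s)
    return out
```

A maximal-length register cycles through all 2^16 - 1 nonzero states before repeating, which is the structural property such extractor designs exploit.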
Spatial coherence of random laser emission
NASA Astrophysics Data System (ADS)
Redding, B.; Choma, M. A.; Cao, H.
2011-09-01
Lasing action in disordered media has been studied extensively in recent years and many of its properties are well understood. However, few studies have considered the spatial coherence in these systems, despite initial observations indicating that random lasers exhibit much lower spatial coherence than conventional lasers. We performed a systematic, experimental investigation of the spatial coherence of random laser emission as a function of the scattering mean free path and the excitation volume. Lasing was achieved under optical excitation and spatial coherence was characterized by imaging the emission spot onto a Young's double slit and collecting the interference fringes in the far field. We observed dramatic differences in the spatial coherence within our parameter space. Specifically, we found that samples with a shorter mean free path relative to the excitation volume exhibited reduced spatial coherence. We provide a qualitative explanation of our experimental observations in terms of the number of excited modes and their spatial orientation. This work provides a means to realize intense, spatially incoherent laser emission for applications in which speckle or spatial cross talk limits performance.
Frisch, Matthias; Melchinger, Albrecht E
2008-01-01
Random intermating of F2 populations has been suggested for obtaining precise estimates of recombination frequencies between tightly linked loci. In a simulation study, sampling effects due to small population sizes in the intermating generations were found to abolish the advantages of random intermating that were reported in previous theoretical studies considering an infinite population size. We propose a mating scheme for intermating with planned crosses that yields more precise estimates than those under random intermating.
Régnier, Mireille; Chassignet, Philippe
2016-01-01
Repetitive patterns in genomic sequences have great biological significance and also algorithmic implications. Analytic combinatorics allows one to derive formulas for the expected length of repetitions in a random sequence. Asymptotic results, which generalize previous work on a binary alphabet, are easily computable. Simulations on random sequences show their accuracy. As an application, the sample case of Archaea genomes illustrates how biological sequences may differ from random sequences. PMID:27376057
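The quantity in question can be probed by brute force on small inputs. The following naive sketch (our illustration, not the paper's combinatorics) finds the longest repeated substring; for a uniform random string over an alphabet of size a, its expected length grows like 2·log_a(n):

```python
def longest_repeat(s):
    """Length of the longest substring occurring at least twice
    (naive pairwise scan; fine for short illustrative strings)."""
    best = 0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            k = 0
            while j + k < len(s) and s[i + k] == s[j + k]:
                k += 1
            best = max(best, k)
    return best
```

Running this on simulated random sequences versus real genomic sequences is one way to see the deviation from randomness that the abstract mentions.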
Clifford Algebras, Random Graphs, and Quantum Random Variables
NASA Astrophysics Data System (ADS)
Schott, René; Staples, G. Stacey
2008-08-01
For fixed n > 0, the space of finite graphs on n vertices is canonically associated with an abelian, nilpotent-generated subalgebra of the Clifford algebra {C}l2n,2n which is canonically isomorphic to the 2n-particle fermion algebra. Using the generators of the subalgebra, an algebraic probability space of "Clifford adjacency matrices" associated with finite graphs is defined. Each Clifford adjacency matrix is a quantum random variable whose mth moment corresponds to the number of m-cycles in the graph G. Each matrix admits a canonical "quantum decomposition" into a sum of three algebraic random variables: a = aΔ + aΥ + aΛ, where aΔ is classical while aΥ and aΛ are quantum. Moreover, within the Clifford algebra context the NP problem of cycle enumeration is reduced to matrix multiplication, requiring no more than n4 Clifford (geo-metric) multiplications within the algebra.
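The classical side of this correspondence — moments of an adjacency matrix counting closed walks — is easy to check numerically. A small sketch (ours, without the paper's Clifford-algebra machinery); for m = 3, every closed 3-walk in a simple graph is a triangle, so the count is exact:

```python
import numpy as np

def cycle_moment(A, m):
    """trace(A^m): the number of closed m-step walks in the graph."""
    return int(np.trace(np.linalg.matrix_power(np.asarray(A), m)))

def triangle_count(A):
    """Each triangle yields 6 closed 3-walks (3 start points x 2 directions)."""
    return cycle_moment(A, 3) // 6

K4 = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)   # complete graph on 4 vertices
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])          # path on 3 vertices
```

K4 contains C(4,3) = 4 triangles, while a path contains none.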
Random amplified polymorphic DNA analysis of genetically modified organisms.
Yoke-Kqueen, Cheah; Radu, Son
2006-12-15
Randomly amplified polymorphic DNA (RAPD) analysis was used to analyze 78 samples comprising certified reference materials (soya and maize powder), raw seeds (soybean and maize), processed food, and animal feed. A combination assay of two arbitrary primers in the RAPD analysis made it possible to distinguish genetically modified organism (GMO) reference materials from the samples tested. Dendrogram analysis of the RAPD data revealed 13 clusters at 45% similarity. RAPD analysis also showed that the maize and soybean samples clustered separately, as did the GMO and non-GMO products.
Synchronized sampling improves fault location
Kezunovic, M.; Perunicic, B.
1995-04-01
Transmission line faults must be located accurately to allow maintenance crews to arrive at the scene and repair the faulted section as soon as possible. Rugged terrain and geographical layout cause some sections of power transmission lines to be difficult to reach. In the past, a variety of fault location algorithms were introduced as either an add-on feature in protective relays or stand-alone implementation in fault locators. In both cases, the measurements of current and voltages were taken at one terminal of a transmission line only. Under such conditions, it may become difficult to determine the fault location accurately, since data from other transmission line ends are required for more precise computations. In the absence of data from the other end, existing algorithms have accuracy problems under several circumstances, such as varying switching and loading conditions, fault infeed from the other end, and random value of fault resistance. Most of the one-end algorithms were based on estimation of voltage and current phasors. The need to estimate phasors introduces additional difficulty in high-speed tripping situations where the algorithms may not be fast enough in determining fault location accurately before the current signals disappear due to the relay operation and breaker opening. This article introduces a unique concept of high-speed fault location that can be implemented either as a simple add-on to the digital fault recorders (DFRs) or as a stand-alone new relaying function. This advanced concept is based on the use of voltage and current samples that are synchronously taken at both ends of a transmission line. This sampling technique can be made readily available in some new DFR designs incorporating receivers for accurate sampling clock synchronization using the satellite Global Positioning System (GPS).
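The two-end principle can be sketched for a simple lumped-impedance line model (an illustration of the concept, not the article's algorithm; all phasor values below are hypothetical). With synchronized phasors at both ends, the fault-point voltage computed from each end must agree, which yields the per-unit fault distance d in closed form and, notably, without any assumption about the fault resistance:

```python
def fault_distance(vs, i_s, vr, i_r, z):
    """Per-unit distance d from the sending end, from synchronized phasors:
    Vs - d*Z*Is = Vr - (1 - d)*Z*Ir  =>  d = (Vs - Vr + Z*Ir) / (Z*(Is + Ir))."""
    return ((vs - vr + z * i_r) / (z * (i_s + i_r))).real

# Synthetic consistency check: build both terminal voltages from an assumed
# fault-point voltage vf and a known true distance, then recover it.
z = 2 + 8j                    # total line impedance (hypothetical)
i_s, i_r = 10 - 2j, 4 + 1j    # currents measured at the two ends
vf = 50 + 5j                  # fault-point voltage (unknown in practice)
d_true = 0.3
vs = vf + d_true * z * i_s
vr = vf + (1 - d_true) * z * i_r
```

Because vf cancels when the two expressions are equated, the random fault resistance that troubles one-end algorithms drops out of the estimate.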
Non-random patterns in viral diversity.
Anthony, Simon J; Islam, Ariful; Johnson, Christine; Navarrete-Macias, Isamara; Liang, Eliza; Jain, Komal; Hitchens, Peta L; Che, Xiaoyu; Soloyvov, Alexander; Hicks, Allison L; Ojeda-Flores, Rafael; Zambrana-Torrelio, Carlos; Ulrich, Werner; Rostal, Melinda K; Petrosov, Alexandra; Garcia, Joel; Haider, Najmul; Wolfe, Nathan; Goldstein, Tracey; Morse, Stephen S; Rahman, Mahmudur; Epstein, Jonathan H; Mazet, Jonna K; Daszak, Peter; Lipkin, W Ian
2015-09-22
It is currently unclear whether changes in viral communities will ever be predictable. Here we investigate whether viral communities in wildlife are inherently structured (inferring predictability) by looking at whether communities are assembled through deterministic (often predictable) or stochastic (not predictable) processes. We sample macaque faeces across nine sites in Bangladesh and use consensus PCR and sequencing to discover 184 viruses from 14 viral families. We then use network modelling and statistical null-hypothesis testing to show the presence of non-random deterministic patterns at different scales, between sites and within individuals. We show that the effects of determinism are not absolute however, as stochastic patterns are also observed. In showing that determinism is an important process in viral community assembly we conclude that it should be possible to forecast changes to some portion of a viral community, however there will always be some portion for which prediction will be unlikely.
Discriminative parameter estimation for random walks segmentation.
Baudin, Pierre-Yves; Goodman, Danny; Kumar, Puneet; Azzabou, Noura; Carlier, Pierre G; Paragios, Nikos; Kumar, M Pawan
2013-01-01
The Random Walks (RW) algorithm is one of the most efficient and easy-to-use probabilistic segmentation methods. By combining contrast terms with prior terms, it provides accurate segmentations of medical images in a fully automated manner. However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned. We propose a novel discriminative learning framework that estimates the parameters using a training dataset. The main challenge we face is that the training samples are not fully supervised. Specifically, they provide a hard segmentation of the images, instead of a probabilistic segmentation. We overcome this challenge by treating the optimal probabilistic segmentation that is compatible with the given hard segmentation as a latent variable. This allows us to employ the latent support vector machine formulation for parameter estimation. We show that our approach significantly outperforms the baseline methods on a challenging dataset consisting of real clinical 3D MRI volumes of skeletal muscles.
Random walk with random resetting to the maximum position
NASA Astrophysics Data System (ADS)
Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2015-11-01
We study analytically a simple random walk model on a one-dimensional lattice, where at each time step the walker resets to the maximum of the already visited positions (the rightmost visited site) with probability r, and with probability (1-r) it undergoes a symmetric random walk, i.e., it hops to one of its neighboring sites with equal probability (1-r)/2. For r = 0, it reduces to a standard random walk whose typical distance grows as √n for large n. In the presence of a nonzero resetting rate 0
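The model is straightforward to simulate. A short sketch (function name and interface are ours, not the paper's):

```python
import random

def reset_to_max_walk(n, r, seed=1):
    """n steps of the lattice walk: with probability r jump to the maximum
    (rightmost) position visited so far, otherwise hop -1 or +1 with equal
    probability."""
    rng = random.Random(seed)
    x, m = 0, 0
    path = [0]
    for _ in range(n):
        if rng.random() < r:
            x = m                      # reset to the rightmost visited site
        else:
            x += rng.choice((-1, 1))   # symmetric hop
        m = max(m, x)
        path.append(x)
    return path
```

The two limits behave as expected: with r = 1 the walker never leaves the starting maximum, and with r = 0 every step is a unit hop of a standard symmetric walk.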
Toward Digital Staining using Imaging Mass Spectrometry and Random Forests
Hanselmann, Michael; Köthe, Ullrich; Kirchner, Marc; Renard, Bernhard Y.; Amstalden, Erika R.; Glunde, Kristine; Heeren, Ron M. A.; Hamprecht, Fred A.
2009-01-01
We show on Imaging Mass Spectrometry (IMS) data that the Random Forest classifier can be used for automated tissue classification and that it results in predictions with high sensitivities and positive predictive values, even when inter-sample variability is present in the data. We further demonstrate how Markov Random Fields and vector-valued median filtering can be applied to reduce noise effects to further improve the classification results in a post-hoc smoothing step. Our study gives clear evidence that digital staining by means of IMS constitutes a promising complement to chemical staining techniques. PMID:19469555
Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report
Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.; Hassig, Nancy L.; Pulsipher, Brent A.; Matzke, Brett D.
2009-02-18
The sampling designs are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceed levels of concern, at least one of the samples would be taken from a contaminated area at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
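In the simplest random-placement case, the "at least one hit with X% confidence" requirement fixes the sample size in closed form. A sketch (not the VSP implementation): if a fraction f of the area is contaminated and n samples land independently at random, P(at least one hit) = 1 - (1-f)^n, so:

```python
import math

def n_samples(conf, frac):
    """Smallest n with P(at least one sample hits contamination) >= conf,
    assuming a fraction `frac` of the area is contaminated and samples are
    placed independently at random: solve 1 - (1 - frac)**n >= conf."""
    return math.ceil(math.log(1.0 - conf) / math.log(1.0 - frac))
```

For example, detecting a 10% contaminated fraction with 95% confidence requires 29 random samples.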
National Sample Assessment Protocols
ERIC Educational Resources Information Center
Ministerial Council on Education, Employment, Training and Youth Affairs (NJ1), 2012
2012-01-01
These protocols represent a working guide for planning and implementing national sample assessments in connection with the national Key Performance Measures (KPMs). The protocols are intended for agencies involved in planning or conducting national sample assessments and personnel responsible for administering associated tenders or contracts,…
Decker, David L.; Lyles, Brad F.; Purcell, Richard G.; Hershey, Ronald Lee
2013-04-16
The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.
Implementing Teacher Work Sampling
ERIC Educational Resources Information Center
Kinne, Lenore J.; Watson, Dwight C.
2005-01-01
This article describes how the teacher work sample methodology of the Renaissance Partnership for Improving Teacher Quality was implemented within the teacher education program at a small liberal arts college. Resulting program improvements are described, as well as on-going challenges. The adapted teacher work sample prompt and scoring rubric are…
Extraterrestrial Samples at JSC
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2007-01-01
A viewgraph presentation on the curation of extraterrestrial samples at NASA Johnson Space Center is shown. The topics include: 1) Apollo lunar samples; 2) Meteorites from Antarctica; 3) Cosmic dust from the stratosphere; 4) Genesis solar wind ions; 5) Stardust comet and interstellar grains; and 6) Space-Exposed Hardware.
A "clean-catch" urine sample is performed by collecting the sample of urine in midstream. Men or boys should wipe clean the head ... water and rinse well. A small amount of urine should initially fall into the toilet bowl before ...
Judgment sampling: a health care improvement perspective.
Perla, Rocco J; Provost, Lloyd P
2012-01-01
Sampling plays a major role in quality improvement work. Random sampling (assumed by most traditional statistical methods) is the exception in improvement situations. In most cases, some type of "judgment sample" is used to collect data from a system. Unfortunately, judgment sampling is not well understood. Judgment sampling relies upon those with process and subject matter knowledge to select useful samples for learning about process performance and the impact of changes over time. In many cases, where the goal is to learn about or improve a specific process or system, judgment samples are not merely the most convenient and economical approach; they are technically and conceptually the most appropriate approach. This is because improvement work is done in the real world in complex situations involving specific areas of concern and focus; in these situations, the assumptions of classical measurement theory neither can be met nor should an attempt be made to meet them. The purpose of this article is to describe judgment sampling and its importance in quality improvement work and studies, with a focus on health care settings.
Murphy, Gloria A.
2010-09-07
A biological sample collector is adapted to collect several biological samples in a plurality of filter wells. A biological sample collector may comprise a manifold plate for mounting a filter plate thereon, the filter plate having a plurality of filter wells therein; a hollow slider for engaging and positioning a tube that slides therethrough; and a slide case within which the hollow slider travels to allow the tube to be aligned with a selected filter well of the plurality of filter wells, wherein when the tube is aligned with the selected filter well, the tube is pushed through the hollow slider and into the selected filter well to sealingly engage the selected filter well and to allow the tube to deposit a biological sample onto a filter in the bottom of the selected filter well. The biological sample collector may be portable.
Data-Division-Specific Robustness and Power of Randomization Tests for ABAB Designs
ERIC Educational Resources Information Center
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick
2010-01-01
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Fernique-type inequalities and moduli of continuity for anisotropic Gaussian random fields
Meerschaert, Mark M.; Wang, Wensheng; Xiao, Yimin
2013-01-01
This paper is concerned with sample path properties of anisotropic Gaussian random fields. We establish Fernique-type inequalities and utilize them to study the global and local moduli of continuity for anisotropic Gaussian random fields. Applications to fractional Brownian sheets and to the solutions of stochastic partial differential equations are investigated. PMID:24825922
ERIC Educational Resources Information Center
Le, Huynh-Nhu; Perry, Deborah F.; Stuart, Elizabeth A.
2011-01-01
Objective: A randomized controlled trial was conducted to evaluate the efficacy of a cognitive-behavioral (CBT) intervention to prevent perinatal depression in high-risk Latinas. Method: A sample of 217 participants, predominantly low-income Central American immigrants who met demographic and depression risk criteria, were randomized into usual…
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
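A random-interval schedule of this kind can be sketched by arming reinforcement with a fixed probability at the end of every base interval, making inter-reinforcement times geometric multiples of that interval (a toy illustration; the function name and defaults are ours):

```python
import random

def ri_intervals(p, tick, n, seed=7):
    """Random-interval schedule: at the end of each tick of length `tick`,
    arm reinforcement with probability p. Inter-reinforcement intervals are
    then geometric multiples of the tick, with mean tick / p."""
    rng = random.Random(seed)
    intervals, t = [], 0.0
    while len(intervals) < n:
        t += tick
        if rng.random() < p:
            intervals.append(t)
            t = 0.0
    return intervals
```

With p = 0.25 and a 1-s tick, the mean interval is about 4 s, matching a comparable fixed-interval value while retaining variability.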
The Probability of Small Schedule Values and Preference for Random-Interval Schedules
ERIC Educational Resources Information Center
Soreth, Michelle Ennis; Hineline, Philip N.
2009-01-01
Preference for working on variable schedules and temporal discrimination were simultaneously examined in two experiments using a discrete-trial, concurrent-chains arrangement with fixed interval (FI) and random interval (RI) terminal links. The random schedule was generated by first sampling a probability distribution after the programmed delay to…
Toward a Principled Sampling Theory for Quasi-Orders
Ünlü, Ali; Schrepp, Martin
2016-01-01
Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601
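The two defining properties, and a transitivity-correction step, are straightforward to express in code. A small sketch (ours, far simpler than the paper's inductive construction, where correction must also preserve uniformity):

```python
def is_quasi_order(rel, items):
    """A quasi-order is a reflexive and transitive binary relation."""
    reflexive = all((i, i) in rel for i in items)
    transitive = all((a, c) in rel
                     for (a, b) in rel for (b2, c) in rel if b == b2)
    return reflexive and transitive

def transitive_closure(rel, items):
    """Warshall-style fixpoint: the smallest quasi-order containing rel.
    Closure is the simplest way to repair a transitivity violation."""
    r = set(rel) | {(i, i) for i in items}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(r):
            for (b2, c) in list(r):
                if b == b2 and (a, c) not in r:
                    r.add((a, c))
                    changed = True
    return r
```

Note that repairing samples this way biases the distribution, which is exactly the sampling bias the paper's correction algorithms are designed to remove.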
Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine
2017-02-23
According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model.
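The benefit of exploiting an auxiliary structure can be sketched with a toy two-stratum field (invented numbers, not the paper's data; here the strata are assumed known, whereas the paper derives them from a gene-flow model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical field: two strata with very different transgene presence levels
low  = rng.normal(0.1, 0.05, 8000)   # far from the GM field
high = rng.normal(2.0, 0.50, 2000)   # downwind edge, heavy cross-pollination
pop = np.concatenate([low, high])

def srs(n):
    """Simple random sample mean over the whole field."""
    return rng.choice(pop, n, replace=False).mean()

def stratified(n):
    """Proportional allocation: 80% of draws from `low`, 20% from `high`."""
    a = rng.choice(low, int(0.8 * n), replace=False).mean()
    b = rng.choice(high, int(0.2 * n), replace=False).mean()
    return 0.8 * a + 0.2 * b

srs_err = np.std([srs(50) for _ in range(500)])
str_err = np.std([stratified(50) for _ in range(500)])
```

Because stratification removes the between-strata component of the variance, the stratified estimator's spread is systematically smaller for the same sample size, which is why smaller samples suffice.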
NASA Technical Reports Server (NTRS)
Peters, Gregory
2010-01-01
A field-deployable, battery-powered Rapid Active Sampling Package (RASP), originally designed for sampling strong materials during lunar and planetary missions, shows strong utility for terrestrial geological use. The technology is proving to be simple and effective for sampling and processing hard materials. Although it was originally intended for planetary and lunar applications, the RASP is very useful as a powered hand tool for geologists and the mining industry to quickly sample and process rocks in the field on Earth. The RASP allows geologists to surgically acquire samples of rock for later laboratory analysis. This tool, roughly the size of a wrench, allows the user to cut away swaths of weathering rinds, revealing pristine rock surfaces for observation and subsequent sampling with the same tool. RASPing deeper (3.5 cm) exposes single rock strata in situ. Where a geologist's hammer can only expose unweathered layers of rock, the RASP can do the same, and it has the added ability to capture and process samples into powder with particle sizes less than 150 microns, making them easier to analyze by XRD/XRF (x-ray diffraction/x-ray fluorescence). The tool uses a rotating rasp bit (or two counter-rotating bits) that resides inside or above the catch container. The container has an open slot to allow the bit to extend outside the container and to allow cuttings to enter and be caught. When the slot and rasp bit are in contact with a substrate, the bit is plunged into it in a matter of seconds to reach pristine rock. A user in the field may sample a rock multiple times at multiple depths in minutes, instead of having to cut out huge, heavy rock samples for transport back to a lab for analysis. Because of the speed and accuracy of the RASP, hundreds of samples can be taken in one day. RASP-acquired samples are small and easily carried. A user can characterize more area in less time than by using conventional methods. The field-deployable RASP used a Ni
Shapes of randomly placed droplets
NASA Astrophysics Data System (ADS)
Panchagnula, Mahesh; Janardan, Nachiketa; Deevi, Sri Vallabha
2016-11-01
Surface characterization is essential for many industrial applications. Surface defects result in a range of contact angles, which lead to Contact Angle Hysteresis (CAH). We use the shapes of randomly placed drops on surfaces to study the family of shapes that may result from CAH. We image the triple line of these drops and extract additional information on local contact angles as well as curvatures from these images. We perform a generalized extreme value (GEV) analysis on this microscopic contact angle data. From this analysis, we predict the range of extreme contact angles that are possible for a sessile drop. We have also measured the macroscopic advancing and receding contact angles using a goniometer. From the extreme values of the contact line curvature, we estimate the pinning stress distribution responsible for the random shapes. This range follows the same trend as the macroscopic CAH measured with the goniometer, and can be used as a method of characterizing the surface.
Optimal randomized scheduling by replacement
Saias, I.
1996-05-01
In the replacement scheduling problem, a system is composed of n processors drawn from a pool of p. The processors can become faulty while in operation and faulty processors never recover. A report is issued whenever a fault occurs. This report states only the existence of a fault but does not indicate its location. Based on this report, the scheduler can reconfigure the system and choose another set of n processors. The system operates satisfactorily as long as, upon report of a fault, the scheduler chooses n non-faulty processors. We provide a randomized protocol maximizing the expected number of faults the system can sustain before the occurrence of a crash. The optimality of the protocol is established by considering a closely related dual optimization problem. The game-theoretic technical difficulties that we solve in this paper are very general and encountered whenever proving the optimality of a randomized algorithm in parallel and distributed computation.
Propensity Score Matching: Retrospective Randomization?
Jupiter, Daniel C
Randomized controlled trials are viewed as the optimal study design. In this commentary, we explore the strengths and the complexity of this design. We also discuss situations in which such trials are not possible, ethical, or economical. In those situations, specifically in retrospective studies, we should make every effort to recapitulate the rigor and strength of the randomized trial. However, we may be faced with an inherent indication bias in such a setting, so we consider the tools available to address that bias. Specifically, we examine matching, and we introduce and explore a new tool: propensity score matching. This tool allows us to group subjects according to their propensity to be in a particular treatment group and, in so doing, to account for the indication bias.
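A minimal sketch of the technique on synthetic data (the covariates, treatment model, and matching rule below are illustrative assumptions, not from the commentary): fit a logistic model for treatment assignment, then match each treated subject to the control with the nearest propensity score.

```python
# Propensity score matching sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=(n, 2))                      # covariates
# Indication bias: "sicker" subjects (larger x[:, 0]) get treated more often.
p_treat = 1 / (1 + np.exp(-x[:, 0]))
treated = rng.random(n) < p_treat

# Propensity score: modeled probability of treatment given covariates.
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Nearest-neighbor matching (with replacement) on the propensity score.
controls = np.flatnonzero(~treated)
matches = {}
for i in np.flatnonzero(treated):
    j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]
    matches[i] = j

# Matching shrinks the covariate imbalance between the groups.
gap_before = abs(x[treated, 0].mean() - x[~treated, 0].mean())
matched_controls = np.array(list(matches.values()))
gap_after = abs(x[treated, 0].mean() - x[matched_controls, 0].mean())
print(gap_before, gap_after)
```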
LDC3: Three-parameter limb darkening coefficient sampling
NASA Astrophysics Data System (ADS)
Kipping, David M.
2015-11-01
LDC3 samples physically permissible limb darkening coefficients for the Sing et al. (2009) three-parameter law. It defines the physically permissible intensity profile as being everywhere-positive, monotonically decreasing from center to limb and having a curl at the limb. The approximate sampling method is analytic and thus very fast, reproducing physically permissible samples in 97.3% of random draws (high validity) and encompassing 94.4% of the physically permissible parameter volume (high completeness).
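The analytic LDC3 sampler itself is not reproduced here; a hedged brute-force alternative is rejection sampling against the permissibility conditions, assuming the Sing et al. (2009) three-parameter form I(mu)/I(1) = 1 - c2(1-mu) - c3(1-mu)^1.5 - c4(1-mu)^2 and arbitrary coefficient ranges.

```python
# Rejection-sampling sketch (NOT the analytic LDC3 algorithm): keep
# coefficient draws whose intensity profile is everywhere positive and
# monotonically decreasing from center (mu=1) to limb (mu=0).
# Coefficient ranges are arbitrary assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
mu = np.linspace(1e-3, 1.0, 200)     # mu = cosine of angle from disk center

def profile(c2, c3, c4):
    r = 1.0 - mu
    return 1.0 - c2 * r - c3 * r**1.5 - c4 * r**2

def permissible(c2, c3, c4):
    i = profile(c2, c3, c4)
    # Positive everywhere, and decreasing center-to-limb (increasing in mu).
    return bool(np.all(i > 0) and np.all(np.diff(i) > 0))

samples = []
while len(samples) < 100:
    cand = rng.uniform(-1, 2, size=3)
    if permissible(*cand):
        samples.append(cand)
print(len(samples))
```

Rejection sampling is slow compared with the analytic draw-and-map approach the abstract describes, which is the point of LDC3's high-validity sampler.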
[FTIR and classification study on trueborn tuber dioscoreae samples].
Sun, Su-qin; Tang, Jun-ming; Yuan, Zi-min; Bai, Yan
2003-04-01
To identify the origin of tuber dioscoreae, 45 samples were studied by soft independent modeling of class analogy (SIMCA). Fourier transform infrared spectroscopy (FTIR) combined with this mathematical method was used to classify the trueborn and non-trueborn samples. The samples were randomly divided into a modeling group and a predicting group. The classification accuracy was 70%. This approach proved to be a reliable and practicable method for trueborn quality analysis of tuber dioscoreae.
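A minimal SIMCA-style sketch on synthetic "spectra" (the class structure and noise model are invented for illustration; this is not the paper's FTIR data or exact algorithm): fit a separate PCA model per class and assign a test spectrum to the class with the smaller reconstruction residual.

```python
# SIMCA-style classification sketch: one PCA model per class, classify
# by reconstruction residual. Data are synthetic stand-ins for spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)

def make_spectra(center, n):
    # Toy "spectra": a class-specific mean curve plus noise.
    grid = np.linspace(0, 1, 50)
    base = np.sin(2 * np.pi * (grid + center))
    return base + 0.1 * rng.normal(size=(n, 50))

train = {"trueborn": make_spectra(0.00, 30), "other": make_spectra(0.15, 30)}
models = {k: PCA(n_components=3).fit(v) for k, v in train.items()}

def residual(model, x):
    recon = model.inverse_transform(model.transform(x[None, :]))[0]
    return float(np.sum((x - recon) ** 2))

def classify(x):
    return min(models, key=lambda k: residual(models[k], x))

test_true = make_spectra(0.00, 20)          # held-out "trueborn" spectra
acc = float(np.mean([classify(x) == "trueborn" for x in test_true]))
print(acc)
```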
On the importance of incorporating sampling weights in ...
Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design, i.e., how the sample units are selected in geographic space (e.g., stratified, simple random, or unequal probability sampling). In a probability design, each sample unit has a sample weight, which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or an equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found that the traditional single-season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts.
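The design effect can be illustrated with a toy Horvitz-Thompson calculation (perfect detection is assumed here to isolate the sampling-design bias, and all numbers are invented; the paper's simulations concern full occupancy models with imperfect detection).

```python
# Toy illustration: unequal-probability sampling biases the naive
# occupancy estimate; inverse-inclusion-probability weights fix it.
import numpy as np

rng = np.random.default_rng(11)
N = 20_000
occupied = rng.random(N) < 0.3           # true frame-level occupancy = 0.3

# Unequal-probability design: occupied sites are twice as likely sampled.
pi = np.where(occupied, 0.10, 0.05)      # inclusion probabilities
sampled = rng.random(N) < pi

naive = occupied[sampled].mean()         # ignores the design: biased high
# Hajek (weighted) estimator with weights 1/pi: design-unbiased target.
weighted = np.sum(occupied[sampled] / pi[sampled]) / np.sum(1 / pi[sampled])
print(naive, weighted)
```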
Waste classification sampling plan
Landsman, S.D.
1998-05-27
The purpose of this sampling plan is to explain the method used to collect and analyze the data necessary to verify and/or determine the radionuclide content of the B-Cell decontamination and decommissioning waste stream, so that the correct waste classification for the waste stream can be made, and to collect samples for studies of decontamination methods that could be used to remove fixed contamination present on the waste. The scope of this plan is to establish the technical basis for collecting samples and compiling quantitative data on the radioactive constituents present in waste generated during deactivation activities in B-Cell. Sampling and radioisotopic analysis will be performed on the fixed layers of contamination present on structural material and internal surfaces of process piping and tanks. In addition, dose rate measurements on existing waste material will be performed to determine the fraction of dose rate attributable to both removable and fixed contamination. Samples will also be collected to support studies of decontamination methods that are effective in removing the fixed contamination present on the waste. Sampling performed under this plan will meet the criteria established in BNF-2596, Data Quality Objectives for the B-Cell Waste Stream Classification Sampling, J. M. Barnett, May 1998.
On Combinations of Random Loads
1980-01-01
NPS55-80-006, Naval Postgraduate School, Monterey, California (June 1980). On Combinations of Random Loads, by D. P. Gaver. Only fragments of the text survive, including: the distribution of M_Kn is close to that of M_n for K large; and Proposition (3.3), which takes F and G as in (3.5) and u satisfying condition (3.6), and concerns the limit of H_Kn.
Random Variate Generation: A Survey.
1980-06-01
Lawrance and Lewis (1977, 1978), Jacobs and Lewis (1977) and Schmeiser and Lal (1979) consider time series having gamma marginal distributions. Only reference fragments of the remaining text survive, including: "Generating random variables from probability distributions," Proceedings of the Winter Simulation Conference, 269-280; Lawrance, A.J. and P.A.W. Lewis (1977), "An exponential moving-average sequence and point process (EMA1)," J. Appl. Prob., 14, 98-113; and Lawrance, A.J. and P.A.W. Lewis (1978), "An exponential ..."
Random drift and culture change.
Bentley, R. Alexander; Hahn, Matthew W.; Shennan, Stephen J.
2004-01-01
We show that the frequency distributions of cultural variants, in three different real-world examples--first names, archaeological pottery and applications for technology patents--follow power laws that can be explained by a simple model of random drift. We conclude that cultural and economic choices often reflect a decision process that is value-neutral; this result has far-reaching testable implications for social-science research. PMID:15306315
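The random-drift mechanism can be sketched as a neutral random-copying model with innovation (a generic sketch with illustrative parameters, not the authors' fitted model): each generation, every individual copies a variant from a random member of the previous generation, except with small probability mu it invents a new variant. Accumulated variant frequencies become strongly right-skewed, with many rare variants and a few very common ones.

```python
# Neutral drift (random copying with innovation), illustrative sketch.
import random
from collections import Counter

rng = random.Random(2004)
N, mu, generations = 200, 0.01, 500

pop = list(range(N))            # start with N distinct variants
next_label = N
tally = Counter()               # accumulated variant frequencies

for _ in range(generations):
    new_pop = []
    for _ in range(N):
        if rng.random() < mu:
            new_pop.append(next_label)       # innovation: a new variant
            next_label += 1
        else:
            new_pop.append(rng.choice(pop))  # random copying
    pop = new_pop
    tally.update(pop)

counts = sorted(tally.values(), reverse=True)
print(counts[:5], len(counts))
```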
Correlated randomness and switching phenomena
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Buldyrev, S. V.; Franzese, G.; Havlin, S.; Mallamace, F.; Kumar, P.; Plerou, V.; Preis, T.
2010-08-01
One challenge of biology, medicine, and economics is that the systems treated by these serious scientific disciplines have no perfect metronome in time and no perfect spatial architecture, crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. Further, many of these processes and structures have the remarkable feature of “switching” from one behavior to another as if by magic. The past century has, philosophically, been concerned with placing aside the human tendency to see the universe as a fine-tuned machine. Here we will address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at some of the many spatial and temporal patterns in biology, medicine, and economics and even begin to characterize the switching phenomena that enable a system to pass from one state to another. Inspired by principles developed by A. Nihat Berker and scores of other statistical physicists in recent years, we discuss some applications of correlated randomness to understand switching phenomena in various fields. Specifically, we present evidence from experiments and from computer simulations supporting the hypothesis that water’s anomalies are related to a switching point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell), and that the bubbles in economic phenomena that occur on all scales are not “outliers” (another Gladwell immortalization). Though more speculative, we support the idea of disease as arising from some kind of yet-to-be-understood complex switching phenomenon, by discussing data on selected examples, including heart disease and Alzheimer disease.
Instructions for borehole sampling
Reynolds, K.D.; Lindsey, K.A.
1994-11-11
Geologic systems generally are complex with physical properties and trends that can be difficult to predict. Subsurface geology exerts a fundamental control on groundwater flow and contaminant transport. The primary source for direct observation of subsurface geologic information is a borehole. However, direct observations from a borehole essentially are limited to the diameter and spacing of boreholes and the quality of the information derived from the drilling. Because it is impractical to drill a borehole every few feet to obtain data, it is necessary to maximize the data gathered during limited drilling operations. A technically defensible balance between the customer's data quality objectives and control of drilling costs through limited drilling can be achieved with proper conduct of operations. This report presents the minimum criteria for geologic and hydrologic characterization and sampling that must be met during drilling. It outlines the sampling goals that need to be addressed when drilling boreholes, and the types of drilling techniques that work best to achieve these goals under the geologic conditions found at Hanford. This report provides general guidelines for: (1) how sampling methods are controlled by data needs, (2) how minimum sampling requirements change as knowledge and needs change, and (3) when drilling and sampling parameters need to be closely controlled with respect to the specific data needs. Consequently, the report is divided into two sections that center on: (1) a discussion of basic categories of subsurface characterization, sampling, and sampling techniques, and (2) guidelines for determining which drilling and sampling techniques meet required characterization and sampling objectives.
Ramsey, Charles A; Wagner, Claas
2015-01-01
The concept of Sample Quality Criteria (SQC) is the initial step in the scientific approach to representative sampling. It includes the establishment of sampling objectives, Decision Unit (DU), and confidence. Once fully defined, these criteria serve as input, in addition to material properties, to the Theory of Sampling for developing a representative sampling protocol. The first component of the SQC establishes these questions: What is the analyte(s) of concern? What is the concentration level of interest of the analyte(s)? How will inference(s) be made from the analytical data to the DU? The second component of the SQC establishes the DU, i.e., the scale at which decisions are to be made. On a large scale, a DU could be a ship or rail car; examples for small-scale DUs are individual beans, seeds, or kernels. A well-defined DU is critical because it defines the spatial and temporal boundaries of sample collection. SQC are not limited to a single DU; they can also include multiple DUs. The third SQC component, the confidence, establishes the desired probability that a correct inference (decision) can be made. The confidence level should typically correlate to the potential consequences of an incorrect decision (e.g., health or economic). The magnitude of combined errors in the sampling, sample processing and analytical protocols determines the likelihood of an incorrect decision. Thus, controlling error to a greater extent increases the probability of a correct decision. The required confidence level directly affects the sampling effort and QC measures.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Resolution analysis by random probing
NASA Astrophysics Data System (ADS)
Simutė, S.; Fichtner, A.; van Leeuwen, T.
2015-12-01
We develop and apply methods for resolution analysis in tomography, based on stochastic probing of the Hessian or resolution operators. Key properties of our methods are (i) low algorithmic complexity and easy implementation, (ii) applicability to any tomographic technique, including full-waveform inversion and linearized ray tomography, (iii) applicability in any spatial dimension and to inversions with a large number of model parameters, (iv) low computational costs that are mostly a fraction of those required for synthetic recovery tests, and (v) the ability to quantify both spatial resolution and inter-parameter trade-offs. Using synthetic full-waveform inversions as benchmarks, we demonstrate that auto-correlations of random-model applications to the Hessian yield various resolution measures, including direction- and position-dependent resolution lengths, and the strength of inter-parameter mappings. We observe that the required number of random test models is around 5 in one, two and three dimensions. This means that the proposed resolution analyses are not only more meaningful than recovery tests but also computationally less expensive. We demonstrate the applicability of our method in 3D real-data full-waveform inversions for the western Mediterranean and Japan. In addition to tomographic problems, resolution analysis by random probing may be used in other inverse methods that constrain continuously distributed properties, including electromagnetic and potential-field inversions, as well as recently emerging geodynamic data assimilation.
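A toy version of the probing idea, with a 1-D Gaussian smoothing operator standing in for the Hessian (an assumption for illustration only): autocorrelating the operator applied to a few random test models recovers the operator's resolution length, with only a handful of random models, consistent in spirit with the small number reported above.

```python
# Resolution analysis by random probing, toy 1-D sketch: a Gaussian
# smoothing operator stands in for the Hessian; its width plays the
# role of the (unknown) resolution length to be recovered.
import numpy as np

rng = np.random.default_rng(5)
n, sigma = 512, 8.0
x = np.arange(n)

# Stand-in "Hessian": circular convolution with a Gaussian of width sigma.
kernel = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
kernel /= kernel.sum()
K_f = np.fft.fft(np.roll(kernel, -n // 2))

def H(m):
    return np.real(np.fft.ifft(np.fft.fft(m) * K_f))

# Average autocorrelation of H applied to a few random test models.
K = 5
acf = np.zeros(n)
for _ in range(K):
    hm = H(rng.standard_normal(n))
    f = np.fft.fft(hm)
    acf += np.real(np.fft.ifft(f * np.conj(f)))
acf /= acf[0]

# Resolution length: lag where the autocorrelation falls to half maximum
# (for this Gaussian operator, about 1.67 * sigma, i.e. ~13 samples).
half = int(np.argmax(acf < 0.5))
print(half)
```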
Transport on randomly evolving trees
NASA Astrophysics Data System (ADS)
Pál, L.
2005-11-01
The time process of transport on randomly evolving trees is investigated. By introducing the notions of living and dead nodes, a model of random tree evolution is constructed which describes the spreading in time of objects corresponding to nodes. It is assumed that at t=0 the tree consists of a single living node (root), from which the evolution may begin. At a certain time instant τ⩾0, the root produces ν⩾0 living nodes connected by lines to the root, which becomes dead at the moment of the offspring production. In the evolution process each of the new living nodes evolves further like a root, independently of the others. By using the methods of the age-dependent branching processes we derive the joint distribution function of the numbers of living and dead nodes, and determine the correlation between these node numbers as a function of time. It is proved that the correlation function converges to √3/2 independently of the distributions of ν and τ when q1→1 and t→∞. Also analyzed are the stochastic properties of the end nodes; and the correlation between the numbers of living and dead end nodes is shown to change its character suddenly at the very beginning of the evolution process. The survival probability of random trees is investigated and expressions are derived for this probability.
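A Monte Carlo sketch of the model (exponential lifetimes and Poisson offspring are illustrative choices; the paper's results hold for general distributions of ν and τ): count living and dead nodes at a fixed observation time over many trees and estimate their correlation, which is strongly positive well before the asymptotic regime.

```python
# Monte Carlo sketch of randomly evolving trees: each living node waits
# an exponential time, then dies and spawns a Poisson number of living
# offspring. We record (living, dead) counts at time t_max.
import numpy as np

def evolve(t_max, rate, mean_offspring, rng):
    alive = dead = 0
    queue = [0.0]                    # birth times of unprocessed nodes
    while queue:
        birth = queue.pop()
        death = birth + rng.exponential(1.0 / rate)
        if death > t_max:
            alive += 1               # still living at observation time
            continue
        dead += 1
        queue.extend([death] * rng.poisson(mean_offspring))
    return alive, dead

rng = np.random.default_rng(9)
data = np.array([evolve(t_max=3.0, rate=1.0, mean_offspring=1.0, rng=rng)
                 for _ in range(2000)])
corr = float(np.corrcoef(data[:, 0], data[:, 1])[0, 1])
print(corr)
```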
Enhanced hyperuniformity from random reorganization.
Hexner, Daniel; Chaikin, Paul M; Levine, Dov
2017-04-10
Diffusion relaxes density fluctuations toward a uniform random state whose variance in regions of volume V = ℓ^d scales as σ² ∼ ℓ^(−d). Systems whose fluctuations decay faster, σ² ∼ ℓ^(−λ) with λ > d, are called hyperuniform. The larger λ, the more uniform, with systems like crystals achieving the maximum value λ = d + 1. Although finite temperature equilibrium dynamics will not yield hyperuniform states, driven, nonequilibrium dynamics may. Such is the case, for example, in a simple model where overlapping particles are each given a small random displacement. Above a critical particle density ρc, the system evolves forever, never finding a configuration where no particles overlap. Below ρc, however, it eventually finds such a state, and stops evolving. This "absorbing state" is hyperuniform up to a length scale ξ, which diverges at ρc. An important question is whether hyperuniformity survives noise and thermal fluctuations. We find that hyperuniformity of the absorbing state is not only robust against noise, diffusion, or activity, but that such perturbations reduce fluctuations toward their limiting behavior, λ → d + 1, a uniformity similar to random close packing and early universe fluctuations, but with arbitrary controllable density.
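The "simple model" can be sketched in one dimension (the paper's simulations are not necessarily 1-D, and all parameters here are illustrative): particles on a ring, any overlapping neighbor pair is "active" and both members get a small random kick, and at low density the dynamics find an absorbing, overlap-free state.

```python
# 1-D random-organization sketch: below the critical density the system
# reaches an absorbing state where no neighbor pair is closer than d.
import numpy as np

rng = np.random.default_rng(17)
L, n, d, kick = 100.0, 30, 1.0, 0.5   # ring length, particles, diameter, kick
x = rng.random(n) * L

steps = 0
while steps < 100_000:
    xs = np.sort(x)
    gaps = np.diff(np.concatenate([xs, [xs[0] + L]]))  # neighbor gaps (ring)
    act = np.flatnonzero(gaps < d)     # "active" overlapping neighbor pairs
    if act.size == 0:
        break                          # absorbing state reached
    hit = np.unique(np.concatenate([act, (act + 1) % n]))
    xs[hit] += kick * (rng.random(hit.size) - 0.5)
    x = xs % L
    steps += 1

xs = np.sort(x)
final_gaps = np.diff(np.concatenate([xs, [xs[0] + L]]))
print(steps, bool(np.all(final_gaps >= d)))
```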
Sampling video compression system
NASA Technical Reports Server (NTRS)
Matsumoto, Y.; Lum, H. (Inventor)
1977-01-01
A system for transmitting a video signal of compressed bandwidth is described. The transmitting station is provided with circuitry for dividing a picture to be transmitted into a plurality of blocks containing a checkerboard pattern of picture elements. Video signals along corresponding diagonal rows of picture elements in the respective blocks are regularly sampled. A transmitter responsive to the output of the sampling circuitry transmits the sampled video signals of one frame at a reduced bandwidth over a communication channel. The receiving station is provided with a frame memory for temporarily storing the transmitted video signals of one frame at the original high bandwidth.
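One way to sketch the sampling schedule described (the actual circuit details are not in the abstract; the 4x4 block size and the diagonal cycling order are assumptions): each frame transmits one diagonal of picture elements per block, cycling through diagonals on successive frames so the whole picture is refreshed.

```python
# Illustrative block-diagonal subsampling mask, cycling per frame.
import numpy as np

h = w = 16          # toy frame size
block = 4           # block side; one diagonal per block is sent per frame

def sample_mask(frame_idx):
    """Mask of picture elements transmitted in a given frame."""
    mask = np.zeros((h, w), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            for i in range(block):
                j = (i + frame_idx) % block   # a diagonal of the block
                mask[by + i, bx + j] = True
    return mask

m0 = sample_mask(0)
print(float(m0.mean()))    # fraction of pixels sent per frame
```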
Sample Size Determination for Clustered Count Data
Amatya, A.; Bhaumik, D.; Gibbons, R.D.
2013-01-01
We consider the problem of sample size determination for count data. Such data arise naturally in the context of multi-center (or cluster) randomized clinical trials, where patients are nested within research centers. We consider cluster-specific and population-average estimators (maximum likelihood based on generalized mixed-effects regression, and generalized estimating equations, respectively) for subject-level and cluster-level randomized designs, respectively. We provide simple expressions for calculating the number of clusters when comparing event rates of two groups in cross-sectional studies. The expressions we derive have closed-form solutions and are based on either between-cluster variation or inter-cluster correlation for cross-sectional studies. We provide both theoretical and numerical comparisons of our methods with other existing methods. We specifically show that the performance of the proposed method is better for subject-level randomized designs, whereas the comparative performance depends on the rate ratio for the cluster-level randomized designs. We also provide a versatile method for longitudinal studies. Results are illustrated by three real data examples. PMID:23589228
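The flavor of such closed-form expressions can be illustrated with a standard Hayes-Bennett-style formula for the number of clusters per arm when comparing two event rates (this is a generic textbook-style formula, not necessarily the expressions derived in this paper).

```python
# Clusters per arm for comparing two event rates, Hayes-Bennett style:
# accounts for Poisson variation within clusters plus between-cluster
# variation via the coefficient of variation k.
from math import ceil
from statistics import NormalDist

def clusters_per_arm(rate0, rate1, person_time, k, alpha=0.05, power=0.8):
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    var = (rate0 + rate1) / person_time + k**2 * (rate0**2 + rate1**2)
    return ceil(1 + (za + zb) ** 2 * var / (rate0 - rate1) ** 2)

# E.g. rates 0.5 vs 0.3 events per person-year, 100 person-years per
# cluster, between-cluster coefficient of variation 0.25.
print(clusters_per_arm(rate0=0.5, rate1=0.3, person_time=100, k=0.25))
```

Note how the required number of clusters grows with k: between-cluster variation, not just within-cluster Poisson noise, drives the design.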
Multilattice sampling strategies for region of interest dynamic MRI.
Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E
2013-08-01
A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on sampling patterns composed of several different lattices and exploits an image model in which only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multilattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) and k-t BLAST, and to the random sampling used by modern compressed-sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling, and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method.
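A sketch of a two-lattice Cartesian (k, t) pattern in the spirit of the method (the lattice offsets, shears, and accelerations below are arbitrary illustrations, not the paper's design procedure): the sampling mask is the union of two sheared lattices with different accelerations, and the net acceleration follows from the union's density.

```python
# Two-lattice Cartesian (k, t) sampling mask, illustrative parameters.
import numpy as np

n_k, n_t = 64, 16                       # phase-encode lines x time frames
mask = np.zeros((n_k, n_t), dtype=bool)

def add_sheared_lattice(mask, accel, shift_per_frame):
    """Sample every `accel`-th k-line, shifting the offset each frame."""
    n_k, n_t = mask.shape
    for t in range(n_t):
        offset = (t * shift_per_frame) % accel
        mask[offset::accel, t] = True

add_sheared_lattice(mask, accel=4, shift_per_frame=1)   # "dynamic" lattice
add_sheared_lattice(mask, accel=8, shift_per_frame=3)   # "static" lattice

acceleration = mask.size / mask.sum()
print(round(float(acceleration), 2))
```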