Sample records for random sampling design

  1. Methods for sample size determination in cluster randomized trials

    PubMed Central

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-01-01

    Background: The use of cluster randomized trials (CRTs) is increasing, along with variety in their design and analysis. The simplest approach to their sample size calculation is to calculate the sample size assuming individual randomization and inflate it by a design effect to account for randomization by cluster. The assumptions behind the simple design effect may not always be met, however, and alternative or more complicated approaches are then required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We first present those methods applicable to the simplest two-arm, parallel-group, completely randomized design, followed by methods that incorporate deviations from this design, such as variability in cluster sizes, attrition, non-compliance, or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
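
    As a back-of-the-envelope illustration of the design-effect approach described above (a sketch, not taken from the paper; the function name and parameter values are illustrative), the individually randomized sample size is inflated by DE = 1 + (m - 1) * ICC for clusters of size m:

      from statistics import NormalDist

      def crt_sample_size_per_arm(delta, sd, m, icc, alpha=0.05, power=0.8):
          """Two-arm parallel CRT: individually randomized n per arm,
          inflated by the simple design effect DE = 1 + (m - 1) * icc."""
          z = NormalDist()
          z_a = z.inv_cdf(1 - alpha / 2)                 # two-sided test
          z_b = z.inv_cdf(power)
          n_individual = 2 * (sd * (z_a + z_b) / delta) ** 2
          return n_individual * (1 + (m - 1) * icc)

      # Illustrative: detect a 0.3-SD difference with clusters of 20, ICC 0.05.
      n = crt_sample_size_per_arm(delta=0.3, sd=1.0, m=20, icc=0.05)
      print(round(n))  # ~340 subjects per arm, i.e. 17 clusters of 20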

  2. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can attain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
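
    The minimum-variance idea can be made concrete with the classical Neyman allocation (a hedged sketch, not the authors' algorithm, which also derives critical values and power jointly; the function name and rates below are illustrative). For binary outcomes, the variance of the difference-in-proportions statistic is minimized by allocating in proportion to the arms' standard deviations:

      import math

      def min_variance_allocation(p1, p2):
          """Proportion of new patients to assign to arm 1 so that
          var(p1_hat - p2_hat) is minimized for a fixed total n.
          In an adaptive design, p1 and p2 would be the current
          estimates, updated as the trial accrues."""
          s1 = math.sqrt(p1 * (1 - p1))
          s2 = math.sqrt(p2 * (1 - p2))
          return s1 / (s1 + s2)

      # Illustrative: current response estimates of 0.7 vs 0.5.
      print(round(min_variance_allocation(0.7, 0.5), 3))  # ~0.478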

  3. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
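
    The sampling rule itself is simple enough to sketch in a few lines (illustrative code, not the authors' software; the 1-D rule shown here would be applied along each stage axis): pick one uniform random start inside the first interval, then step at equal intervals.

      import random

      def systematic_random_sites(extent, interval):
          """1-D systematic random sample: one random start in
          [0, interval), then equidistant sites across the extent."""
          start = random.uniform(0, interval)
          n_steps = int(extent // interval) + 1
          return [start + k * interval for k in range(n_steps)
                  if start + k * interval <= extent]

      # Illustrative: a sampling site every 150 um across a 1000-um section.
      print(systematic_random_sites(1000, 150))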

  4. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, the performances of the 14 randomization designs fall in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide smaller imbalance and higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, Efron's BCD, and Wei's urn design.
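
    Both boundary designs are easy to simulate, which also shows how the two evaluation measures trade off (a sketch under stated assumptions: imbalance is summarized as the expected maximum absolute imbalance, and the guessing strategy is "always guess the currently smaller arm"; the paper's exact measure definitions may differ):

      import random

      def simulate(design, n, reps=10000, p=2/3, b=3):
          """Estimate expected max |imbalance| and correct-guess rate for
          Efron's BCD(p) or the big stick design with tolerance b."""
          tot_imb = tot_cg = 0.0
          for _ in range(reps):
              d = 0                      # (arm A count) - (arm B count)
              max_imb = cg = 0.0
              for _ in range(n):
                  if d == 0:
                      prob_a = 0.5
                  elif design == "bcd":  # favor the smaller arm with prob p
                      prob_a = p if d < 0 else 1 - p
                  else:                  # big stick: force at the boundary
                      prob_a = 0.5 if abs(d) < b else (1.0 if d < 0 else 0.0)
                  a = random.random() < prob_a
                  if d == 0:
                      cg += 0.5          # balanced: guess at random
                  elif a == (d < 0):
                      cg += 1            # guessed the smaller arm correctly
                  d += 1 if a else -1
                  max_imb = max(max_imb, abs(d))
              tot_imb += max_imb
              tot_cg += cg / n
          return tot_imb / reps, tot_cg / reps

      for design in ("bcd", "bsd"):
          print(design, simulate(design, n=50))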

  5. Methods and analysis of realizing randomized grouping.

    PubMed

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely randomized, stratified randomized and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization, and the implementation of random sampling and grouping in SAS software.
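
    A minimal sketch of the complete-randomization step (illustrative Python rather than the SAS code the article describes; in SAS this is typically done by sorting on a random number or with PROC PLAN):

      import random

      def complete_randomization(subject_ids, k=2, seed=2011):
          """Shuffle all subjects, then split them into k groups of
          (near-)equal size."""
          rng = random.Random(seed)
          ids = list(subject_ids)
          rng.shuffle(ids)
          return {g: ids[g::k] for g in range(k)}

      # Illustrative: 12 subjects into 3 groups of 4.
      print(complete_randomization(range(1, 13), k=3))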

  6. Assessment of wadeable stream resources in the driftless area ecoregion in Western Wisconsin using a probabilistic sampling design.

    PubMed

    Miller, Michael A; Colby, Alison C C; Kanehl, Paul D; Blocksom, Karen

    2009-03-01

    The Wisconsin Department of Natural Resources (WDNR), with support from the U.S. EPA, conducted an assessment of wadeable streams in the Driftless Area ecoregion in western Wisconsin using a probabilistic sampling design. This ecoregion encompasses 20% of Wisconsin's land area and contains 8,800 miles of perennial streams. Randomly selected stream sites (n = 60), equally distributed among stream orders 1-4, were sampled. Watershed land use, riparian and in-stream habitat, water chemistry, macroinvertebrate, and fish assemblage data were collected at each true random site and at an associated "modified-random" site on each stream, accessed via the road crossing nearest to the true random site. Targeted least-disturbed reference sites (n = 22) were also sampled to develop reference conditions for various physical, chemical, and biological measures. Cumulative distribution function plots of various measures collected at the true random sites, evaluated against reference-condition thresholds, indicate that a high proportion of the random sites (and, by inference, of the entire Driftless Area wadeable stream population) show some level of degradation. Study results show no statistically significant differences between the true random and modified-random sample sites for any of the nine physical habitat, 11 water chemistry, seven macroinvertebrate, or eight fish metrics analyzed. In Wisconsin's Driftless Area, 79% of wadeable stream lengths were accessible via road crossings. While further evaluation of the statistical rigor of using a modified-random sampling design is warranted, sampling randomly selected stream sites accessed via the nearest road crossing may provide a more economical way to apply probabilistic sampling in stream monitoring programs.

  7. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
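
    One design-based estimator of this kind can be sketched as follows (an illustration of the general form in which the K lines are treated as a random sample and weighted by line length; the data values are invented, and the paper should be consulted for the exact variants and the poststratified versions):

      def encounter_rate_variance(counts, lengths):
          """Design-based estimate of var(n/L) for K transect lines with
          detection counts n_k and lengths l_k."""
          K, n, L = len(counts), sum(counts), sum(lengths)
          rate = n / L
          return (K / (L ** 2 * (K - 1))) * sum(
              l ** 2 * (c / l - rate) ** 2
              for c, l in zip(counts, lengths))

      # Illustrative: 5 lines with their detections and lengths (km).
      print(encounter_rate_variance([12, 7, 15, 3, 9],
                                    [4.0, 3.5, 5.0, 2.0, 3.0]))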

  8. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for the sample design decisions in such studies, and are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.

  9. A comparison of two sampling designs for fish assemblage assessment in a large river

    USGS Publications Warehouse

    Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel

    2014-01-01

    We compared the efficiency of stratified random and fixed-station sampling designs for characterizing fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis, and positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort (such as occurred in the fall 2011 fixed-station sample) or by purposeful site selection.

  10. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.

  11. ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES

    PubMed Central

    van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.

    2014-01-01

    In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298

  12. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  13. A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn

    2006-01-01

    A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…

  14. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  15. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  16. A nonparametric method to generate synthetic populations to adjust for complex sampling design features.

    PubMed

    Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E

    2014-06-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.

  17. A nonparametric method to generate synthetic populations to adjust for complex sampling design features

    PubMed Central

    Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods that analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs. PMID:29200608

  18. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    EPA Science Inventory

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  19. Systematic versus random sampling in stereological studies.

    PubMed

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways to make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach is to pick one card at random from within a set number of cards at the top of the deck and then pick the others at equal intervals through the rest of the deck. Systematic sampling along one axis of many biological structures is more efficient than independent random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
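
    The card analogy translates directly into code (an illustrative sketch; the "deck" stands for the ordered series of sections):

      import random

      deck = list(range(1, 53))     # 52 "cards" = ordered sections

      # Independent random sampling: any 4 cards, chosen without
      # reference to the positions of the others.
      independent = sorted(random.sample(deck, 4))

      # Systematic sampling: one random pick within the first interval,
      # then every 13th card after it.
      interval = 13
      start = random.randrange(interval)
      systematic = deck[start::interval]

      print(independent, systematic)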

  20. Introductory Statistics Students' Conceptual Understanding of Study Design and Conclusions

    NASA Astrophysics Data System (ADS)

    Fry, Elizabeth Brondos

    Recommended learning goals for students in introductory statistics courses include the ability to recognize and explain the key role of randomness in designing studies and in drawing conclusions from those studies involving generalizations to a population or causal claims (GAISE College Report ASA Revision Committee, 2016). The purpose of this study was to explore introductory statistics students' understanding of the distinct roles that random sampling and random assignment play in study design and the conclusions that can be made from each. A study design unit lasting two and a half weeks was designed and implemented in four sections of an undergraduate introductory statistics course based on modeling and simulation. The research question that this study attempted to answer is: How does introductory statistics students' conceptual understanding of study design and conclusions (in particular, unbiased estimation and establishing causation) change after participating in a learning intervention designed to promote conceptual change in these areas? In order to answer this research question, a forced-choice assessment called the Inferences from Design Assessment (IDEA) was developed as a pretest and posttest, along with two open-ended assignments, a group quiz and a lab assignment. Quantitative analysis of IDEA results and qualitative analysis of the group quiz and lab assignment revealed that overall, students' mastery of study design concepts significantly increased after the unit, and the great majority of students successfully made the appropriate connections between random sampling and generalization, and between random assignment and causal claims. However, a small but noticeable portion of students continued to demonstrate misunderstandings, such as confusion between random sampling and random assignment.

  1. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.

  2. Optimal sampling design for estimating spatial distribution and abundance of a freshwater mussel population

    USGS Publications Warehouse

    Pooler, P.S.; Smith, D.R.

    2005-01-01

    We compared the ability of simple random sampling (SRS) and a variety of systematic sampling (SYS) designs to estimate abundance, quantify spatial clustering, and predict spatial distribution of freshwater mussels. Sampling simulations were conducted using data obtained from a census of freshwater mussels in a 40 × 33 m section of the Cacapon River near Capon Bridge, West Virginia, and from a simulated spatially random population generated to have the same abundance as the real population. Sampling units that were 0.25 m² gave more accurate and precise abundance estimates and generally better spatial predictions than 1-m² sampling units. Systematic sampling with ≥2 random starts was more efficient than SRS. Estimates of abundance based on SYS were more accurate when the distance between sampling units across the stream was less than or equal to the distance between sampling units along the stream. Three measures for quantifying spatial clustering were examined: the Hopkins Statistic, the Clumping Index, and Morisita's Index. Morisita's Index was the most reliable, and the Hopkins Statistic was prone to false rejection of complete spatial randomness. SYS designs with units spaced equally across and up stream provided the most accurate predictions when estimating the spatial distribution by kriging. Our research indicates that SYS designs with sampling units equally spaced both across and along the stream would be appropriate for sampling freshwater mussels even if no information about the true underlying spatial distribution of the population were available to guide the design choice. © 2005 by The North American Benthological Society.
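
    Of the three clustering measures compared, Morisita's Index, the one found most reliable, is simple to compute from quadrat counts (a sketch; the counts below are invented): values near 1 indicate spatial randomness, values above 1 indicate clumping, and values below 1 indicate a uniform pattern.

      def morisita_index(counts):
          """Morisita's index of dispersion for quadrat counts."""
          Q, N = len(counts), sum(counts)
          return Q * sum(n * (n - 1) for n in counts) / (N * (N - 1))

      # Illustrative counts from 10 quadrats: strongly clumped (~3.43).
      print(round(morisita_index([0, 0, 1, 0, 12, 9, 0, 1, 0, 2]), 2))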

  3. RECAL: A Computer Program for Selecting Sample Days for Recreation Use Estimation

    Treesearch

    D.L. Erickson; C.J. Liu; H. Ken Cordell; W.L. Chen

    1980-01-01

    Recreation Calendar (RECAL) is a computer program in PL/I for drawing a sample of days for estimating recreation use. With RECAL, a sampling period of any length may be chosen; simple random, stratified random, and factorial designs can be accommodated. The program randomly allocates days to strata and locations.

  4. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  5. Testing how voluntary participation requirements in an environmental study affect the planned random sample design outcomes: implications for the predictions of values and their uncertainty.

    NASA Astrophysics Data System (ADS)

    Ander, Louise; Lark, Murray; Smedley, Pauline; Watts, Michael; Hamilton, Elliott; Fletcher, Tony; Crabbe, Helen; Close, Rebecca; Studden, Mike; Leonardi, Giovanni

    2015-04-01

    Random sampling design is optimal for assessing outcomes such as the mean of a given variable across an area. However, this optimal sampling design may be compromised to an unknown extent by unavoidable real-world factors; this work examines the extent to which such a study design can still be considered random, and the influence this may have on the choice of appropriate statistical analysis. We take a study which relied on voluntary participation for the sampling of private water tap chemical composition in England, UK. This study was designed and implemented as a categorical, randomised study. The local geological classes were grouped into 10 types, considered to be the most important in their likely effects on groundwater chemistry (the source of all the tap waters sampled). Locations of the users of private water supplies were made available to the study group by the Local Authority in the area. These were then assigned, based on location, to geological groups 1 to 10 and randomised within each group. However, permission to collect samples then required active, voluntary participation by householders and thus, unlike many environmental studies, sampling could not always follow the initial design. Impediments to participation ranged from 'willing but not available' during the designated sampling period, to a lack of response to requests to sample (assumed to indicate being wholly unwilling or unable to participate). Additionally, a small number of unplanned samples were collected via new participants making themselves known to the sampling teams during the sampling period. Here we examine the impact this has on the 'random' nature of the resulting data distribution, by comparison with the non-participating known supplies. We consider the implications this has for the choice of statistical methods used to predict values and their uncertainty at un-sampled locations.

  6. GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES

    EPA Science Inventory

    This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...

  7. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    PubMed

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  8. A Semantic Differential Evaluation of Attitudinal Outcomes of Introductory Physical Science.

    ERIC Educational Resources Information Center

    Hecht, Alfred Roland

    This study was designed to assess the attitudinal outcomes of Introductory Physical Science (IPS) curriculum materials used in schools. Random samples of 240 students receiving IPS instruction and 240 non-science students were assigned to separate Solomon four-group designs with non-equivalent control groups. Random samples of 60 traditional…

  9. Using GIS to generate spatially balanced random survey designs for natural resource applications.

    PubMed

    Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B

    2007-07-01

    Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demands survey designs that achieve statistical rigor and are efficient but remain flexible to inevitable logistical or practical constraints during field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers with a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.

  10. From Planning to Implementation: An Examination of Changes in the Research Design, Sample Size, and Precision of Group Randomized Trials Launched by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica

    2013-01-01

    This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…

  11. Using variance components to estimate power in a hierarchically nested sampling design: improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey comprised three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  12. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    PubMed Central

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656

  13. Extending cluster lot quality assurance sampling designs for surveillance programs.

    PubMed

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate.
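
    The basic inflation idea behind a clustered LQAS design can be sketched with the usual design effect (a simplified illustration only; the authors' procedure is nonparametric and also handles finite numbers of clusters):

      import math

      def clustered_lqas_n(n_srs, m, icc):
          """Inflate an LQAS sample size derived under simple random
          sampling for clusters of size m with intracluster correlation
          icc, rounding up to whole clusters."""
          deff = 1 + (m - 1) * icc
          clusters = math.ceil(n_srs * deff / m)
          return clusters * m, clusters

      # Illustrative: an SRS design of n = 59, run as clusters of 6, ICC 0.1.
      print(clustered_lqas_n(59, m=6, icc=0.1))  # (90, 15)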

  14. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in randomized trials with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and placebo non-responders are then re-randomized in stage 2. The efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters (the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic) at an interim analysis.

  15. Estimates of Intraclass Correlation for Variables Related to Behavioral HIV/STD Prevention in a Predominantly African American and Hispanic Sample of Young Women

    ERIC Educational Resources Information Center

    Pals, Sherri L.; Beaty, Brenda L.; Posner, Samuel F.; Bull, Sheana S.

    2009-01-01

    Studies designed to evaluate HIV and STD prevention interventions often involve random assignment of groups such as neighborhoods or communities to study conditions (e.g., to intervention or control). Investigators who design group-randomized trials (GRTs) must take the expected intraclass correlation coefficient (ICC) into account in sample size…

  16. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.

  17. Sampling designs for HIV molecular epidemiology with application to Honduras.

    PubMed

    Shepherd, Bryan E; Rossini, Anthony J; Soto, Ramon Jeremias; De Rivera, Ivette Lorenzana; Mullins, James I

    2005-11-01

    Proper sampling is essential to characterize the molecular epidemiology of human immunodeficiency virus (HIV). HIV sampling frames are difficult to identify, so most studies use convenience samples. We discuss statistically valid and feasible sampling techniques that overcome some of the potential for bias due to convenience sampling and ensure better representation of the study population. We employ a sampling design called stratified cluster sampling. This first divides the population into geographical and/or social strata. Within each stratum, a population of clusters is chosen from groups, locations, or facilities where HIV-positive individuals might be found. Some clusters are randomly selected within strata and individuals are randomly selected within clusters. Variation and cost help determine the number of clusters and the number of individuals within clusters that are to be sampled. We illustrate the approach through a study designed to survey the heterogeneity of subtype B strains in Honduras.
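
    A two-stage stratified cluster selection of this kind is straightforward to express in code (an illustrative sketch; the frame, stratum names, and sample sizes are invented, and in practice the numbers of clusters and individuals would be set from variance and cost considerations, as the abstract notes):

      import random

      def stratified_cluster_sample(frame, n_clusters, n_per_cluster,
                                    seed=2005):
          """frame: stratum -> {cluster -> list of individuals}.
          Stage 1 draws clusters within each stratum; stage 2 draws
          individuals within each selected cluster."""
          rng = random.Random(seed)
          out = {}
          for stratum, clusters in frame.items():
              picked = rng.sample(sorted(clusters),
                                  min(n_clusters, len(clusters)))
              out[stratum] = {
                  c: rng.sample(clusters[c],
                                min(n_per_cluster, len(clusters[c])))
                  for c in picked}
          return out

      # Illustrative frame: two strata, candidate clinics as clusters.
      frame = {"north": {"clinic_a": list(range(30)),
                         "clinic_b": list(range(25)),
                         "clinic_c": list(range(40))},
               "south": {"clinic_d": list(range(20)),
                         "clinic_e": list(range(35))}}
      print(stratified_cluster_sample(frame, n_clusters=2, n_per_cluster=5))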

  18. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs: a 67 × 3 (67 clusters of three observations) and a 33 × 6 (33 clusters of six observations) sampling scheme for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67 × 3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.

  19. Revisiting sample size: are big trials the answer?

    PubMed

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not conditional on randomization alone. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial detecting a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  20. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design, or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single-season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
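
    The core point, that ignoring unequal selection probabilities biases a naive proportion-occupied estimate, can be seen with a design-weighted mean (a toy sketch that ignores imperfect detection; the data and weights are invented):

      def weighted_proportion(detections, weights):
          """Horvitz-Thompson-style estimate: weights are the number of
          frame units each sampled unit represents (inverse inclusion
          probabilities)."""
          return (sum(w * d for d, w in zip(detections, weights))
                  / sum(weights))

      # Rare habitat oversampled (weight 2) vs common habitat (weight 10).
      z = [1, 1, 0, 1, 0, 0, 0, 0]       # naive mean: 3/8 = 0.375
      w = [2, 2, 2, 10, 10, 10, 10, 10]
      print(weighted_proportion(z, w))   # 0.25 after weighting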

  1. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Treesearch

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...

  2. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

    To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining the precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.

  3. Sample Size Calculations for Micro-randomized Trials in mHealth

    PubMed Central

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.

    2015-01-01

    The use and development of mobile interventions are experiencing rapid growth. In “just-in-time” mobile interventions, treatments are provided via a mobile device and they are intended to help an individual make healthy decisions “in the moment,” and thus have a proximal, near future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a “micro-randomized” trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831

  4. A primer on stand and forest inventory designs

    Treesearch

    H. Gyde Lund; Charles E. Thomas

    1989-01-01

    Covers designs for the inventory of stands and forests in detail and with worked-out examples. For stands, random sampling, line transects, ricochet plot, systematic sampling, single plot, cluster, subjective sampling and complete enumeration are discussed. For forest inventory, the main categories are subjective sampling, inventories without prior stand mapping,...

  5. Accuracy Sampling Design Bias on Coarse Spatial Resolution Land Cover Data in the Great Lakes Region (United States and Canada)

    EPA Science Inventory

    A number of articles have investigated the impact of sampling design on remotely sensed land-cover accuracy estimates. Gong and Howarth (1990) found significant differences for Kappa accuracy values when comparing pure-pixel sampling, stratified random sampling, and stratified sys...

  6. Sample size calculations for stepped wedge and cluster randomised trials: a unified approach

    PubMed Central

    Hemming, Karla; Taljaard, Monica

    2016-01-01

    Objectives: To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting: We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results: For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative when the CRT is infeasible. Conclusion: Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
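
    A minimal sketch of the standard parallel-CRT design effect, 1 + (m - 1)·ICC, and the "cluster size for a fixed number of clusters" rearrangement the abstract alludes to; the SW-CRT design effect itself is more involved and is not reproduced here. All numbers are illustrative.

        # Parallel-CRT design effect and required cluster size given k clusters per arm.
        import math

        def n_individual(delta, sd, alpha=0.05, power=0.8):
            """Per-arm n for a two-sample z-test on means (standard formula)."""
            za, zb = 1.959964, 0.841621        # z_{1-alpha/2}, z_{power}
            return 2 * ((za + zb) * sd / delta) ** 2

        def crt_design_effect(m, icc):
            return 1 + (m - 1) * icc

        def cluster_size_for_k_clusters(n_ind, k, icc):
            """Cluster size m per arm so that k clusters match the inflated n."""
            denom = k - n_ind * icc
            if denom <= 0:
                raise ValueError("too few clusters for this ICC: add clusters")
            return n_ind * (1 - icc) / denom

        n_ind = n_individual(delta=0.3, sd=1.0)                 # ~175 per arm
        print(f"individually randomized n per arm: {math.ceil(n_ind)}")
        print(f"design effect, m=20, ICC=0.05: {crt_design_effect(20, 0.05):.2f}")
        print(f"cluster size with k=15 clusters/arm: "
              f"{math.ceil(cluster_size_for_k_clusters(n_ind, 15, 0.05))}")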

  7. On the design of random metasurface based devices.

    PubMed

    Dupré, Matthieu; Hsu, Liyi; Kanté, Boubacar

    2018-05-08

    Metasurfaces are generally designed by placing scatterers in periodic or pseudo-periodic grids. We propose and discuss design rules for functional metasurfaces with randomly placed anisotropic elements that randomly sample a well-defined phase function. By analyzing the focusing performance of random metasurface lenses as a function of their density and the density of the phase-maps used to design them, we find that the performance of 1D metasurfaces is mostly governed by their density, while 2D metasurfaces strongly depend on both the density and the near-field coupling configuration of the surface. The proposed approach is used to design all-polarization random metalenses at near-infrared frequencies. Challenges, as well as opportunities, of random metasurfaces compared to periodic ones are discussed. Our results pave the way to new approaches in the design of nanophotonic structures and devices, from lenses to solar energy concentrators.

  8. Data-Division-Specific Robustness and Power of Randomization Tests for ABAB Designs

    ERIC Educational Resources Information Center

    Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick

    2010-01-01

    This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…

  9. A Facility Specialist Model for Improving Retention of Nursing Home Staff: Results from a Randomized, Controlled Study

    ERIC Educational Resources Information Center

    Pillemer, Karl; Meador, Rhoda; Henderson, Charles, Jr.; Robison, Julie; Hegeman, Carol; Graham, Edwin; Schultz, Leslie

    2008-01-01

    Purpose: This article reports on a randomized, controlled intervention study designed to reduce employee turnover by creating a retention specialist position in nursing homes. Design and Methods: We collected data three times over a 1-year period in 30 nursing homes, sampled in stratified random manner from facilities in New York State and…

  10. Survival distributions impact the power of randomized placebo-phase design and parallel groups randomized clinical trials.

    PubMed

    Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M

    2011-03-01

    The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed across the response time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Influences of sampling size and pattern on the uncertainty of correlation estimation between soil water content and its influencing factors

    NASA Astrophysics Data System (ADS)

    Lai, Xiaoming; Zhu, Qing; Zhou, Zhiwen; Liao, Kaihua

    2017-12-01

    In this study, seven random combination sampling strategies were applied to investigate the uncertainties in estimating the hillslope mean soil water content (SWC) and the correlation coefficients between SWC and soil/terrain properties on a tea + bamboo hillslope. One of the sampling strategies is global random sampling; the other six are stratified random sampling on the top, middle, toe, top + mid, top + toe and mid + toe slope positions. When each sampling strategy was applied, sample sizes were gradually reduced, and each sample size contained 3000 replicates. Under each sample size of each sampling strategy, the relative errors (REs) and coefficients of variation (CVs) of the estimated hillslope mean SWC and correlation coefficients between SWC and soil/terrain properties were calculated to quantify accuracy and uncertainty. The results showed that the uncertainty of the estimates decreased as the sample size increased. However, larger sample sizes were required to reduce the uncertainty in correlation coefficient estimation than in hillslope mean SWC estimation. Under global random sampling, 12 randomly sampled sites on this hillslope were adequate to estimate the hillslope mean SWC with RE and CV ≤10%. However, at least 72 randomly sampled sites were needed to ensure that the estimated correlation coefficients had REs and CVs ≤10%. Comparing all sampling strategies, reducing sampling sites on the middle slope had the least influence on the estimation of the hillslope mean SWC and correlation coefficients. Under this strategy, 60 sites (10 on the middle slope and 50 on the top and toe slopes) were enough to ensure estimated correlation coefficients with REs and CVs ≤10%. This suggests that when designing SWC sampling, the proportion of sites on the middle slope can be reduced to 16.7% of the total number of sites. Findings of this study will be useful for optimal SWC sampling design.
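
    A sketch of the replicated-subsampling computation, assuming a simulated set of 100 site values in place of the field data; RE here is taken as the mean absolute relative error across 3000 replicates, one reasonable reading of the abstract's accuracy measure.

        # RE and CV of the estimated hillslope mean for decreasing sample sizes.
        import numpy as np

        rng = np.random.default_rng(3)
        swc = rng.normal(30, 6, size=100)        # "true" SWC at 100 hillslope sites
        true_mean = swc.mean()

        for n in (6, 12, 24, 48):
            est = np.array([swc[rng.choice(swc.size, n, replace=False)].mean()
                            for _ in range(3000)])               # 3000 replicates
            re = np.abs(est - true_mean).mean() / true_mean * 100  # relative error, %
            cv = est.std() / est.mean() * 100                      # coeff. of variation, %
            print(f"n = {n:2d}: RE = {re:4.1f}%, CV = {cv:4.1f}%")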

  12. Linking species richness curves from non-contiguous sampling to contiguous-nested SAR: An empirical study

    NASA Astrophysics Data System (ADS)

    Lazarina, Maria; Kallimanis, Athanasios S.; Pantis, John D.; Sgardelis, Stefanos P.

    2014-11-01

    The species-area relationship (SAR) is one of the few generalizations in ecology. However, many different relationships are denoted as SARs. Here, we empirically evaluated the differences between SARs derived from nested-contiguous and non-contiguous sampling designs, using plant, bird and butterfly datasets from Great Britain, Greece, Massachusetts, New York and San Diego. The shape of the SAR depends on the sampling scheme, but there is little empirical documentation on the magnitude of the deviation between different types of SARs and the factors affecting it. We implemented a strictly nested sampling design to construct the nested-contiguous SAR (SACR), and systematic nested but non-contiguous, and random designs to construct non-contiguous species richness curves (SASRs for systematic and SACs for random designs) per dataset. The SACR lay below any SASR and most of the SACs. The deviation between them was related to the exponent f of the power law relationship between sampled area and extent: the lower the exponent f, the higher the deviation between the curves. We linked SACR to SASR and SAC through the concept of "effective" area (Ae), i.e. the nested-contiguous area containing an equal number of species with the accumulated sampled area (AS) of a non-contiguous sampling. The relationship between effective and sampled area was modeled as log(Ae) = k·log(AS). A Generalized Linear Model was used to estimate the values of k from sampling design and dataset properties. The parameter k increased with the average distance between samples and with beta diversity, while k decreased with f. For both systematic and random sampling, the model performed well in predicting effective area in both the training set and a test set fully independent of the training one. Through effective area, we can link different types of species richness curves based on sampling design properties, sampling effort, spatial scale and beta diversity patterns.

  13. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037

  14. Methodology Series Module 5: Sampling Strategies.

    PubMed

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
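
    A quick sketch of two of the probability designs named above, simple random sampling and proportionally allocated stratified random sampling; the sampling frame and strata are made up for illustration.

        # Simple random vs stratified random sampling from a made-up frame.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        frame = pd.DataFrame({
            "id": range(1000),
            "clinic": rng.choice(["urban", "rural"], size=1000, p=[0.7, 0.3]),
        })

        srs = frame.sample(n=100, random_state=0)          # simple random sample
        strat = (frame.groupby("clinic")                   # stratified random sample,
                 .sample(frac=0.1, random_state=0))        # proportional allocation

        print(srs["clinic"].value_counts(), strat["clinic"].value_counts(), sep="\n")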

  15. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations, including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10,000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10,000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two estimators that corrected for inter-transect correlation (v₈ and v_W) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (v₂ and v₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second, differently generated set of spatial point populations, v₈ and v_W again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimator tested was free from bias. On balance, systematic designs give narrower confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
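
    A toy version of the headline comparison, under assumed population parameters: the true variance of the survey mean over 10,000 replicated surveys, random allocation of 100 cells vs a one-start aligned 10 × 10 systematic grid, in a simulated clustered population.

        # Random vs one-start aligned systematic sampling of a clustered population.
        import numpy as np

        rng = np.random.default_rng(11)
        field = np.zeros((100, 100))
        for cx, cy in rng.integers(0, 100, size=(60, 2)):       # clustered organisms
            n = rng.poisson(30)
            xs = np.clip(rng.normal(cx, 2.0, n).astype(int), 0, 99)
            ys = np.clip(rng.normal(cy, 2.0, n).astype(int), 0, 99)
            np.add.at(field, (xs, ys), 1)

        def random_survey():
            idx = rng.choice(field.size, 100, replace=False)
            return field.ravel()[idx].mean()

        def systematic_survey():
            ox, oy = rng.integers(0, 10, 2)                     # random start, aligned grid
            return field[ox::10, oy::10].mean()                 # 10 x 10 = 100 cells

        rand_means = [random_survey() for _ in range(10_000)]
        sys_means = [systematic_survey() for _ in range(10_000)]
        print(f"var(random) / var(systematic) = "
              f"{np.var(rand_means) / np.var(sys_means):.1f}")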

  16. Methodology Series Module 5: Sampling Strategies

    PubMed Central

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘sampling method’. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice or on the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  17. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments

    PubMed Central

    2013-01-01

    Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results: To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions: We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
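
    A simulation-based sketch (not the authors' closed-form derivation) of how the beta-binomial captures clustering in LQAS risk assessment: cluster-level counts are drawn from a beta-binomial with intracluster correlation rho, and a decision rule d is checked against lower and upper prevalence thresholds. The thresholds, ICC and rule below are illustrative.

        # Misclassification risks for a clustered LQAS scheme via beta-binomial draws.
        import numpy as np
        from scipy.stats import betabinom

        def risk(p, rho, n_clusters, m, d, reps=20_000, seed=5):
            """P(total positives >= d) when true prevalence is p."""
            a = p * (1 - rho) / rho          # beta parameters implied by (p, rho)
            b = (1 - p) * (1 - rho) / rho
            rng = np.random.default_rng(seed)
            totals = betabinom.rvs(m, a, b, size=(reps, n_clusters),
                                   random_state=rng).sum(axis=1)
            return (totals >= d).mean()

        # 67 clusters of 3, decision rule d = 25, thresholds 10% vs 15% GAM
        alpha = risk(p=0.10, rho=0.05, n_clusters=67, m=3, d=25)      # false "high"
        beta_ = 1 - risk(p=0.15, rho=0.05, n_clusters=67, m=3, d=25)  # missed "high"
        print(f"alpha = {alpha:.3f}, beta = {beta_:.3f}")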

  18. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    PubMed

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.

  19. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  20. Survey of rural, private wells. Statistical design

    USGS Publications Warehouse

    Mehnert, Edward; Schock, Susan C.; ,

    1991-01-01

    Half of Illinois' 38 million acres were planted in corn and soybeans in 1988. On the 19 million acres planted in corn and soybeans, approximately 1 million tons of nitrogen fertilizer and 50 million pounds of pesticides were applied. Because groundwater is the water supply for over 90 percent of rural Illinois, the occurrence of agricultural chemicals in groundwater in Illinois is of interest to the agricultural community, the public, and regulatory agencies. The occurrence of agricultural chemicals in groundwater is well documented. However, the extent of this contamination still needs to be defined. This can be done by randomly sampling wells across a geographic area. Key elements of a random, water-well sampling program for regional groundwater quality include the overall statistical design of the program, definition of the sample population, selection of wells to be sampled, and analysis of survey results. These elements must be consistent with the purpose for conducting the program; otherwise, the program will not provide the desired information. The need to carefully design and conduct a sampling program becomes readily apparent when one considers the high cost of collecting and analyzing a sample. For a random sampling program conducted in Illinois, the key elements, as well as the limitations imposed by available information, are described.

  1. Optimal Design in Three-Level Block Randomized Designs with Two Levels of Nesting: An ANOVA Framework with Random Effects

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2013-01-01

    Large-scale experiments that involve nested structures may assign treatment conditions either to subgroups such as classrooms or to individuals such as students within subgroups. Key aspects of the design of such experiments include knowledge of the variance structure in higher levels and the sample sizes necessary to reach sufficient power to…

  2. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
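
    A minimal sketch of the two initial-design constructions compared above, before any optimization step: Latin hypercube samples built from random points vs stratum midpoints, scored here by minimum pairwise distance as a crude space-filling criterion. The design sizes and the criterion are illustrative choices, not the paper's.

        # Random-point vs midpoint Latin hypercube construction.
        import numpy as np
        from scipy.spatial.distance import pdist

        def lhs(n, dims, midpoint, rng):
            # one independent stratum permutation per dimension
            perms = np.column_stack([rng.permutation(n) for _ in range(dims)])
            offset = 0.5 if midpoint else rng.random((n, dims))
            return (perms + offset) / n          # points in [0, 1)^dims

        rng = np.random.default_rng(42)
        for label, mid in (("random LHS", False), ("midpoint LHS", True)):
            scores = [pdist(lhs(30, 2, mid, rng)).min() for _ in range(500)]
            print(f"{label}: mean min-distance = {np.mean(scores):.4f}")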

  3. Designing Studies That Would Address the Multilayered Nature of Health Care

    PubMed Central

    Pennell, Michael; Rhoda, Dale; Hade, Erinn M.; Paskett, Electra D.

    2010-01-01

    We review design and analytic methods available for multilevel interventions in cancer research with particular attention to study design, sample size requirements, and potential to provide statistical evidence for causal inference. The most appropriate methods will depend on the stage of development of the research and whether randomization is possible. Early on, fractional factorial designs may be used to screen intervention components, particularly when randomization of individuals is possible. Quasi-experimental designs, including time-series and multiple baseline designs, can be useful once the intervention is designed because they require few sites and can provide the preliminary evidence to plan efficacy studies. In efficacy and effectiveness studies, group-randomized trials are preferred when randomization is possible and regression discontinuity designs are preferred otherwise if assignment based on a quantitative score is possible. Quasi-experimental designs may be used, especially when combined with recent developments in analytic methods to reduce bias in effect estimates. PMID:20386057

  4. Designing clinical trials for amblyopia

    PubMed Central

    Holmes, Jonathan M.

    2015-01-01

    Randomized clinical trial (RCT) study design leads to one of the highest levels of evidence, and is a preferred study design over cohort studies, because randomization reduces bias and maximizes the chance that even unknown confounding factors will be balanced between treatment groups. Recent randomized clinical trials and observational studies in amblyopia can be taken together to formulate an evidence-based approach to amblyopia treatment, which is presented in this review. When designing future clinical studies of amblyopia treatment, issues such as regression to the mean, sample size and trial duration must be considered, since each may impact study results and conclusions. PMID:25752747

  5. Urn models for response-adaptive randomized designs: a simulation study based on a non-adaptive randomized trial.

    PubMed

    Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria

    2018-03-22

    Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models, where the probability of assigning patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn, have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results with the RRU design with those previously published with the non-adaptive approach. We also provide code, written in R, to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups with different total sample sizes. We report information on the number of patients allocated to the inferior treatment and on the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower, as compared to the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
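
    A minimal sketch (not the authors' R code) of one common randomly reinforced urn variant: a patient is assigned with probability equal to the current urn proportion, and a successful response adds a ball of the assigned arm's color, so allocation drifts toward the better treatment. All parameters are made up.

        # Randomly reinforced urn: allocation probabilities updated by responses.
        import numpy as np

        rng = np.random.default_rng(9)
        urn = np.array([1.0, 1.0])            # initial balls: [control, treatment]
        p_success = np.array([0.4, 0.6])      # true response probabilities (assumed)
        assigned = np.zeros(2, dtype=int)

        for _ in range(200):                  # 200 patients
            arm = int(rng.random() < urn[1] / urn.sum())   # 1 -> treatment
            assigned[arm] += 1
            response = rng.random() < p_success[arm]
            urn[arm] += float(response)       # reinforce only on success
        print(f"allocated to treatment: {assigned[1]} / 200, "
              f"final urn proportion: {urn[1] / urn.sum():.2f}")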

  6. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is available only as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.

  7. Statistical Power and Optimum Sample Allocation Ratio for Treatment and Control Having Unequal Costs Per Unit of Randomization

    ERIC Educational Resources Information Center

    Liu, Xiaofeng

    2003-01-01

    This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
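
    A minimal sketch of the classical cost-based result this line of work builds on (for a simple two-group mean comparison with equal variances): under a fixed budget, the variance of the estimated treatment effect is smallest when the allocation ratio equals the square root of the inverse unit-cost ratio. The budget and costs below are made up for a quick numeric check.

        # Optimal treatment:control allocation ratio under unequal per-unit costs.
        import numpy as np

        def var_of_diff(n_t, n_c, sd=1.0):
            return sd**2 * (1 / n_t + 1 / n_c)

        budget, c_t, c_c = 10_000.0, 40.0, 10.0       # treatment units cost 4x control
        ratios = np.linspace(0.2, 3.0, 281)           # candidate n_t / n_c ratios
        best = min(ratios, key=lambda r: var_of_diff(
            n_t=budget / (c_t + c_c / r),             # sizes that exhaust the budget
            n_c=budget / (c_t * r + c_c)))
        print(f"numeric optimum ratio: {best:.2f}, "
              f"analytic sqrt(c_c/c_t) = {np.sqrt(c_c / c_t):.2f}")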

  8. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.

  9. Representative Sampling: Follow-up of Spring 1972 and Spring 1973 Students. TEX-SIS FOLLOW-UP SC3.

    ERIC Educational Resources Information Center

    Wilkinson, Larry; And Others

    This report presents the findings of a research study, conducted by the College of the Mainland (COM) as a subcontractor for Project FOLLOW-UP, designed to test the accuracy of random sampling and to measure non-response bias in mail surveys. In 1975, a computer-generated random sample of 500 students was drawn from a population of 1,256 students…

  10. Sample size considerations when groups are the appropriate unit of analyses

    PubMed Central

    Sadler, Georgia Robins; Ko, Celine Marie; Alisangco, Jennifer; Rosbrook, Bradley P.; Miller, Eric; Fullerton, Judith

    2007-01-01

    This paper discusses issues to be considered by nurse researchers when groups should be used as a unit of randomization. Advantages and disadvantages are presented, with statistical calculations needed to determine effective sample size. Examples of these concepts are presented using data from the Black Cosmetologists Promoting Health Program. Different hypothetical scenarios and their impact on sample size are presented. Given the complexity of calculating sample size when using groups as a unit of randomization, it’s advantageous for researchers to work closely with statisticians when designing and implementing studies that anticipate the use of groups as the unit of randomization. PMID:17693219

  11. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE), the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.

  12. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  13. The Evaluation of Bias of the Weighted Random Effects Model Estimators. Research Report. ETS RR-11-13

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have proposed various weighting methods (Korn & Graubard, 2003; Pfeffermann, Skinner,…

  14. How Generalizable Is Your Experiment? An Index for Comparing Samples and Populations

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2013-01-01

    Recent research on the design of social experiments has highlighted the effects of different design choices on research findings. Since experiments rarely collect their samples using random selection, in order to address these external validity problems and design choices, recent research has focused on two areas. The first area is on methods for…

  15. Under-sampling trajectory design for compressed sensing based DCE-MRI.

    PubMed

    Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting

    2013-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to deliver both. However, the randomness in a CS under-sampling trajectory designed using the traditional variable-density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the probability density function (PDF) parameters, and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design that is robust to changes in the PDF parameters and to the randomness under a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.

  16. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design

    PubMed Central

    Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo

    2016-01-01

    The outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs include the case-control design for a binary response, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate responses remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric: all the underlying distributions of covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS or the estimator from a simple random sample with the same sample size. The Multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born to the Collaborative Perinatal Study. PMID:27966260

  17. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.

  18. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
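
    A simplified, single-stage sketch of a weighted Polya-urn (finite-population Bayesian bootstrap) expansion: a weighted sample of n units is grown into a synthetic population of N units that can then be treated as a simple random sample. The urn weights used here are one published variant; the article's two-stage procedure and exact weights may differ, and the data are simulated.

        # Weighted finite-population Bayesian bootstrap (simplified illustration).
        import numpy as np

        def wfpbb_synthetic(y, w, N, rng):
            n = len(y)
            l = np.zeros(n)                        # times each unit has been redrawn
            draws = []
            for _ in range(N - n):
                probs = (w - 1) + l * (N - n) / n  # Polya urn weights (one variant)
                probs = np.clip(probs, 0, None)
                i = rng.choice(n, p=probs / probs.sum())
                l[i] += 1
                draws.append(i)
            return np.concatenate([y, y[np.array(draws, dtype=int)]])

        rng = np.random.default_rng(4)
        y = rng.normal(50, 10, 20)                 # sampled outcome values
        w = rng.uniform(2, 8, 20)
        w = w * (200 / w.sum())                    # scale case weights to N = 200
        pop = wfpbb_synthetic(y, w, N=200, rng=rng)
        print(f"synthetic population size: {pop.size}, mean: {pop.mean():.1f}")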

  19. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  20. A two-way enriched clinical trial design: combining advantages of placebo lead-in and randomized withdrawal.

    PubMed

    Ivanova, Anastasia; Tamura, Roy N

    2015-12-01

    A new clinical trial design, designated the two-way enriched design (TED), is introduced, which augments the standard randomized placebo-controlled trial with second-stage enrichment designs in placebo non-responders and drug responders. The trial is run in two stages. In the first stage, patients are randomized between drug and placebo. In the second stage, both placebo non-responders and drug responders are re-randomized between drug and placebo. All first-stage data, and second-stage data from first-stage placebo non-responders and first-stage drug responders, are utilized in the efficacy analysis. The authors developed one-, two- and three-degree-of-freedom score tests for treatment effect in the TED and give formulae for asymptotic power and for sample size computations. The authors compute the optimal allocation ratio between drug and placebo in the first stage for the TED and compare the operating characteristics of the design to the standard parallel clinical trial, placebo lead-in and randomized withdrawal designs. Two motivating examples from different disease areas are presented to illustrate the possible design considerations. © The Author(s) 2011.

  1. USING GIS TO GENERATE SPATIALLY-BALANCED RANDOM SURVEY DESIGNS FOR NATURAL RESOURCE APPLICATIONS

    EPA Science Inventory

    Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sam...

  2. R. A. Fisher and his advocacy of randomization.

    PubMed

    Hall, Nancy S

    2007-01-01

    The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.

  3. Modeling of Academic Achievement of Primary School Students in Ethiopia Using Bayesian Multilevel Approach

    ERIC Educational Resources Information Center

    Sebro, Negusse Yohannes; Goshu, Ayele Taye

    2017-01-01

    This study aims to explore Bayesian multilevel modeling to investigate variations of average academic achievement of grade eight school students. A sample of 636 students is randomly selected from 26 private and government schools by a two-stage stratified sampling design. Bayesian method is used to estimate the fixed and random effects. Input and…

  4. Stanford GEMS phase 2 obesity prevention trial for low-income African-American girls: design and sample baseline characteristics.

    PubMed

    Robinson, Thomas N; Kraemer, Helena C; Matheson, Donna M; Obarzanek, Eva; Wilson, Darrell M; Haskell, William L; Pruitt, Leslie A; Thompson, Nikko S; Haydel, K Farish; Fujimoto, Michelle; Varady, Ann; McCarthy, Sally; Watanabe, Connie; Killen, Joel D

    2008-01-01

    African-American girls and women are at high risk of obesity and its associated morbidities. Few studies have tested obesity prevention strategies specifically designed for African-American girls. This report describes the design and baseline findings of the Stanford GEMS (Girls health Enrichment Multi-site Studies) trial to test the effect of a two-year community- and family-based intervention to reduce weight gain in low-income, pre-adolescent African-American girls. Design: Randomized controlled trial with measurements scheduled in girls' homes at baseline and 6, 12, 18 and 24 months post-randomization. Setting: Low-income areas of Oakland, CA. Participants: Eight-, nine- and ten-year-old African-American girls and their parents/caregivers. Interventions: Girls are randomized to a culturally-tailored after-school dance program and a home/family-based intervention to reduce screen media use, versus an information-based community health education Active-Placebo Comparison intervention. Interventions last for 2 years for each participant. Main outcome measure: Change in body mass index over the two-year study. Results: Recruitment and enrollment successfully produced a predominately low-socioeconomic status sample. Two-hundred sixty-one (261) families were randomized. One girl per family is randomly chosen for the analysis sample. Randomization produced comparable experimental groups, with only a few statistically significant differences. The sample had a mean body mass index (BMI) at the 74th percentile of the 2000 CDC BMI reference, and one-third of the analysis sample had a BMI at the 95th percentile or above. Average fasting total and LDL cholesterol were above NCEP thresholds for borderline-high classifications. Girls averaged low levels of moderate to vigorous physical activity, more than 3 h per day of screen media use, and diets high in energy from fat. Conclusion: The Stanford GEMS trial is testing the benefits of culturally-tailored after-school dance and screen-time reduction interventions for obesity prevention in low-income, pre-adolescent African-American girls.

  5. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Treesearch

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  6. Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.

    PubMed

    Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten

    2018-01-01

    Lack of reproducibility of preclinical studies has been identified as an impediment for translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. The Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence intervals indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0]; randomization, 0.91 [0.85, 0.98]; and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence, of sample size calculations, blinding techniques, and randomization procedures could better enable readers to evaluate potential sources of bias in animal-experimental research manuscripts. Future studies should assess whether such steps lead to improved translation of animal-experimental anesthesia research into successful clinical trials.
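
    The trend analysis above uses the Cochran-Armitage test. As a sketch of how such a test can be computed from the reported counts, here it is implemented from scratch (the formula is short), assuming equally spaced scores for the three publication years:

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in proportions
    across ordered groups (here, publication years)."""
    x = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    s = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, float)
    N, p = n.sum(), x.sum() / n.sum()
    t = (s * x).sum()                                  # observed trend statistic
    mean = p * (s * n).sum()                           # its null expectation
    var = p * (1 - p) * ((s**2 * n).sum() - (s * n).sum()**2 / N)
    z = (t - mean) / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Reported power-analysis counts for 2005, 2010, 2015
z, pval = cochran_armitage_trend([27, 59, 77], [516, 485, 465])
print(f"z = {z:.2f}, p = {pval:.2g}")  # strongly increasing trend
```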

  7. Reweighting of the primary sampling units in the National Automotive Sampling System

    DOT National Transportation Integrated Search

    1997-09-01

    The original design of the National Automotive Sampling System - formerly the National Accident Sampling System - called for 75 PSUs randomly selected from PSUs grouped into various strata across the U.S. The implementation of the PSU samp...

  8. An Assessment on Awareness and Acceptability of Child Adoption in Edo State

    ERIC Educational Resources Information Center

    Aluyor, P.; Salami, L. I.

    2017-01-01

    The study examines the awareness and acceptability of child adoption in Edo State. The design used for the study was a survey design. The population for the study is made up of adult males and females in Esan West Local Government Area. One hundred respondents were selected using random sampling techniques. The validity was ascertained by…

  9. An Experimental Study of Interventions for the Acquisition and Retention of Motivational Interviewing Skills among Probation Officers

    ERIC Educational Resources Information Center

    Asteris, Mark M., Jr.

    2012-01-01

    This study was designed to investigate the differences in Motivational Interviewing (MI) skill acquisition and retention among probation officers. This study had a randomized, experimental, pretest-posttest control group design using the MITI 3.1.1 and the VASE-R to measure MI skill acquisition and retention. A random sample (n = 24) of probation…

  10. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs.

    PubMed

    Petscher, Yaacov; Schatschneider, Christopher

    2011-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs.

  11. A Simulation Study on the Performance of the Simple Difference and Covariance-Adjusted Scores in Randomized Experimental Designs

    PubMed Central

    Petscher, Yaacov; Schatschneider, Christopher

    2015-01-01

    Research by Huck and McLean (1975) demonstrated that the covariance-adjusted score is more powerful than the simple difference score, yet recent reviews indicate researchers are equally likely to use either score type in two-wave randomized experimental designs. A Monte Carlo simulation was conducted to examine the conditions under which the simple difference and covariance-adjusted scores were more or less powerful to detect treatment effects when relaxing certain assumptions made by Huck and McLean (1975). Four factors were manipulated in the design including sample size, normality of the pretest and posttest distributions, the correlation between pretest and posttest, and posttest variance. A 5 × 5 × 4 × 3 mostly crossed design was run with 1,000 replications per condition, resulting in 226,000 unique samples. The gain score was nearly as powerful as the covariance-adjusted score when pretest and posttest variances were equal, and as powerful in fan-spread growth conditions; thus, under certain circumstances the gain score could be used in two-wave randomized experimental designs. PMID:26379310
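
    A minimal Monte Carlo sketch in the spirit of this simulation, comparing the power of the gain-score t test against the covariance-adjusted (ANCOVA-style) analysis under equal pretest and posttest variances; all parameter values are illustrative, not those of the published study.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)

def power(n=50, rho=0.6, effect=0.4, sims=1000, alpha=0.05):
    """Estimate power of gain-score vs covariance-adjusted analyses
    in a two-group, two-wave randomized design."""
    hits_gain = hits_ancova = 0
    grp = np.repeat([0.0, 1.0], n)
    for _ in range(sims):
        pre = rng.normal(size=2 * n)
        post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n) + effect * grp
        # Gain-score analysis: two-sample t test on post - pre
        gain = post - pre
        hits_gain += stats.ttest_ind(gain[grp == 1], gain[grp == 0]).pvalue < alpha
        # Covariance-adjusted analysis: regress post on group and pretest
        fit = sm.OLS(post, sm.add_constant(np.column_stack([grp, pre]))).fit()
        hits_ancova += fit.pvalues[1] < alpha
    return hits_gain / sims, hits_ancova / sims

print(power())  # ANCOVA is typically no less powerful when variances are equal
```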

  12. Designing clinical trials to test disease-modifying agents: application to the treatment trials of Alzheimer's disease.

    PubMed

    Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C

    2011-02-01

    Therapeutic trials of disease-modifying agents on Alzheimer's disease (AD) require novel designs and analyses involving a switch of treatments for at least a portion of subjects enrolled. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as sample size and the time of treatment switch are important to understand in designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and the time of treatment switch, as well as optimum statistical tests of treatment efficacy, for clinical trials of disease-modifying agents on AD. A general linear mixed effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents on AD. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for those in the delayed treatment arm or early withdrawal arm and incorporates the potential correlation in the rate of cognitive change before and after the treatment switch. Sample sizes and the optimum time for treatment switch of such trials, as well as the optimum test statistic for treatment efficacy, are determined according to the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or a randomized withdrawal trial is halfway through the trial. With the optimum test statistic for treatment efficacy and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design with a sample size ratio of 1:1:1 among the treatment arm, the delayed treatment or early withdrawal arm, and the placebo arm. The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials when compared with those for symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of treatment switch is on the rate of cognitive change. Crucial design parameters for the clinical trials of disease-modifying agents on AD can be optimally chosen. Government and industry officials as well as academic researchers should consider the optimum use of the clinical trial design for disease-modifying agents on AD in their effort to search for treatments with the potential to modify the underlying pathophysiology of AD.

  13. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
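
    The core remedy for unequal selection probabilities described above is inverse-probability weighting. A toy sketch (hypothetical venue-based sampling, not the authors' maximum likelihood approach) shows how the naive sample mean is biased while the weighted (Hájek-type) mean recovers the population mean:

```python
import numpy as np

rng = np.random.default_rng(7)

# Population with two venue types; members at "large" venues are
# oversampled, so the naive mean is pulled toward that stratum.
N = 10_000
venue_large = rng.random(N) < 0.3
y = rng.normal(2.0 + 1.5 * venue_large)      # outcome depends on venue type

p_sel = np.where(venue_large, 0.10, 0.02)    # unequal selection probabilities
sampled = rng.random(N) < p_sel

naive = y[sampled].mean()
# Inverse-probability (Hajek) weighting restores the population mean
weighted = np.sum(y[sampled] / p_sel[sampled]) / np.sum(1 / p_sel[sampled])
print(f"truth {y.mean():.3f}  naive {naive:.3f}  IPW {weighted:.3f}")
```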

  14. Random versus fixed-site sampling when monitoring relative abundance of fishes in headwater streams of the upper Colorado River basin

    USGS Publications Warehouse

    Quist, M.C.; Gerow, K.G.; Bower, M.R.; Hubert, W.A.

    2006-01-01

    Native fishes of the upper Colorado River basin (UCRB) have declined in distribution and abundance due to habitat degradation and interactions with nonnative fishes. Consequently, monitoring populations of both native and nonnative fishes is important for conservation of native species. We used data collected from Muddy Creek, Wyoming (2003-2004), to compare sample size estimates using a random and a fixed-site sampling design to monitor changes in catch per unit effort (CPUE) of native bluehead suckers Catostomus discobolus, flannelmouth suckers C. latipinnis, roundtail chub Gila robusta, and speckled dace Rhinichthys osculus, as well as nonnative creek chub Semotilus atromaculatus and white suckers C. commersonii. When one-pass backpack electrofishing was used, detection of 10% or 25% changes in CPUE (fish/100 m) at 60% statistical power required 50-1,000 randomly sampled reaches among species regardless of sampling design. However, use of a fixed-site sampling design with 25-50 reaches greatly enhanced the ability to detect changes in CPUE. The addition of seining did not appreciably reduce required effort. When detection of 25-50% changes in CPUE of native and nonnative fishes is acceptable, we recommend establishment of 25-50 fixed reaches sampled by one-pass electrofishing in Muddy Creek. Because Muddy Creek has habitat and fish assemblages characteristic of other headwater streams in the UCRB, our results are likely to apply to many other streams in the basin. © American Fisheries Society 2006.

  15. Statistical inferences for data from studies conducted with an aggregated multivariate outcome-dependent sample design.

    PubMed

    Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo

    2017-03-15

    An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known such designs include the case-control design for a binary response, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate responses remain underdeveloped. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric, with all the underlying distributions of covariates modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and derive its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample with the same sample size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born into the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Guided transect sampling - a new design combining prior information and field surveying

    Treesearch

    Anna Ringvall; Goran Stahl; Tomas Lamas

    2000-01-01

    Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...

  17. Motivational Factors and Teachers Commitment in Public Secondary Schools in Mbale Municipality

    ERIC Educational Resources Information Center

    Olurotimi, Ogunlade Joseph; Asad, Kamonges Wahab; Abdulrauf, Abdulkadir

    2015-01-01

    The study investigated the influence of motivational factors on teachers' commitment in public secondary schools in Mbale Municipality. The study employed a cross-sectional survey design. The sampling technique used was simple random sampling. The instrument used to collect data was a self-designed questionnaire. The data…

  18. Early morning urine collection to improve urinary lateral flow LAM assay sensitivity in hospitalised patients with HIV-TB co-infection.

    PubMed

    Gina, Phindile; Randall, Philippa J; Muchinga, Tapuwa E; Pooran, Anil; Meldau, Richard; Peter, Jonny G; Dheda, Keertan

    2017-05-12

    Urine LAM testing has been approved by the WHO for use in hospitalised patients with advanced immunosuppression. However, sensitivity remains suboptimal. We therefore examined the incremental diagnostic sensitivity of early morning urine (EMU) versus random urine sampling using the Determine® lateral flow lipoarabinomannan assay (LF-LAM) in HIV-TB co-infected patients. Consenting HIV-infected inpatients, screened as part of a larger prospective randomized controlled trial, who were treated for TB and could donate matched random and EMU samples were included. Paired samples were thus collected from the same patient, and LF-LAM was graded using the pre-January 2014 manufacturer-designated grade 1 and grade 2 cut-points (the latter designated grade 1 after January 2014). Single sputum Xpert-MTB/RIF and/or TB culture positivity served as the reference standard (definite TB); those treated for TB but not meeting this standard were designated probable TB. 123 HIV-infected patients commenced anti-TB treatment and provided matched random and EMU samples. 33% (41/123) and 67% (82/123) had definite and probable TB, respectively. Amongst those with definite TB, LF-LAM sensitivity (95% CI) using the grade 2 cut-point increased from 12% (5-24; 5/43) to 39% (26-54; 16/41) with random versus EMU samples, respectively (p = 0.005). Similarly, amongst those with probable TB, LF-LAM sensitivity increased from 10% (5-17; 8/83) to 24% (16-34; 20/82) (p = 0.001). LF-LAM specificity was not determined. This proof-of-concept study indicates that EMU could improve the sensitivity of LF-LAM in hospitalised TB-HIV co-infected patients. These data have implications for clinical practice.

  19. Urine sampling techniques in symptomatic primary-care patients: a diagnostic accuracy review.

    PubMed

    Holm, Anne; Aabenhus, Rune

    2016-06-08

    Choice of urine sampling technique in urinary tract infection may impact diagnostic accuracy and thus lead to possible over- or undertreatment. Currently no evidence-based consensus exists regarding the correct technique for sampling urine from women with symptoms of urinary tract infection in primary care. The aim of this study was to determine the accuracy of urine culture obtained with different sampling techniques in symptomatic non-pregnant women in primary care. A systematic review was conducted by searching Medline and Embase for clinical studies conducted in primary care using a randomized or paired design to compare the results of urine culture obtained with two or more collection techniques in adult, female, non-pregnant patients with symptoms of urinary tract infection. We evaluated the quality of the studies and compared accuracy based on dichotomized outcomes. We included seven studies investigating urine sampling technique in 1062 symptomatic patients in primary care. Mid-stream clean-catch had a positive predictive value of 0.79 to 0.95 and a negative predictive value close to 1 compared to sterile techniques. Two randomized controlled trials found no difference in infection rate between mid-stream clean-catch, mid-stream urine and random samples. At present, no evidence suggests that sampling technique affects the accuracy of the microbiological diagnosis in non-pregnant women with symptoms of urinary tract infection in primary care. However, the evidence presented is indirect, and the difference between mid-stream clean-catch, mid-stream urine and random samples remains to be investigated in a paired design to verify the present findings.

  20. Improving Classroom Learning Environments by Cultivating Awareness and Resilience in Education (CARE): Results of a Randomized Controlled Trial

    ERIC Educational Resources Information Center

    Jennings, Patricia A.; Frank, Jennifer L.; Snowberg, Karin E.; Coccia, Michael A.; Greenberg, Mark T.

    2013-01-01

    Cultivating Awareness and Resilience in Education (CARE for Teachers) is a mindfulness-based professional development program designed to reduce stress and improve teachers' performance and classroom learning environments. A randomized controlled trial examined program efficacy and acceptability among a sample of 50 teachers randomly assigned to…

  1. Adapted random sampling patterns for accelerated MRI.

    PubMed

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
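
    A schematic of the central idea, generating a variable-density sampling mask directly from the power spectrum of a reference data set rather than from a parametric density model; the random stand-in reference image and the acceleration factor are placeholders, not the paper's validation data:

```python
import numpy as np

rng = np.random.default_rng(0)

def pattern_from_reference(ref_image, accel=4):
    """Draw a k-space sampling mask whose point density follows the
    power spectrum of a reference image (no parametric density model)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ref_image))) ** 2
    pdf = spectrum / spectrum.sum()                 # normalized density
    n_keep = ref_image.size // accel
    idx = rng.choice(ref_image.size, size=n_keep, replace=False, p=pdf.ravel())
    mask = np.zeros(ref_image.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(ref_image.shape)

ref = rng.standard_normal((64, 64))   # stand-in for a fully sampled reference
mask = pattern_from_reference(ref)
print(mask.mean())                    # about 1/4 of k-space retained
```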

  2. Bayesian adaptive trials offer advantages in comparative effectiveness trials: an example in status epilepticus.

    PubMed

    Connor, Jason T; Elm, Jordan J; Broglio, Kristine R

    2013-08-01

    We present a novel Bayesian adaptive comparative effectiveness trial comparing three treatments for status epilepticus that uses adaptive randomization with potential early stopping. The trial will enroll 720 unique patients in emergency departments and uses a Bayesian adaptive design. The trial design is compared to a trial without adaptive randomization and produces an efficient trial in which a higher proportion of patients are likely to be randomized to the most effective treatment arm while generally using fewer total patients and offers higher power than an analogous trial with fixed randomization when identifying a superior treatment. When one treatment is superior to the other two, the trial design provides better patient care, higher power, and a lower expected sample size. Copyright © 2013 Elsevier Inc. All rights reserved.
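
    The abstract does not specify the adaptive randomization rule, but a common Bayesian device with the same flavor is posterior (Thompson-style) sampling over Beta posteriors. The sketch below, with made-up response rates, shows how allocation drifts toward the best-performing of three arms over 720 patients; it is an illustration of the general idea, not the trial's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
true_p = [0.25, 0.35, 0.50]            # unknown response rates, three arms
succ = np.ones(3)                      # Beta(1, 1) priors: one pseudo-success
fail = np.ones(3)                      # and one pseudo-failure per arm

for patient in range(720):
    # Allocate by one posterior draw per arm: the arm whose draw is
    # largest wins, so allocation tracks the chance each arm is best.
    draws = rng.beta(succ, fail)
    arm = int(np.argmax(draws))
    response = rng.random() < true_p[arm]
    succ[arm] += response
    fail[arm] += 1 - response

print("allocations:", (succ + fail - 2).astype(int))
print("posterior means:", np.round(succ / (succ + fail), 2))
```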

  3. Using re-randomization to increase the recruitment rate in clinical trials - an assessment of three clinical areas.

    PubMed

    Kahan, Brennan C

    2016-12-13

    Patient recruitment in clinical trials is often challenging, and as a result, many trials are stopped early due to insufficient recruitment. The re-randomization design allows patients to be re-enrolled and re-randomized for each new treatment episode that they experience. Because it allows multiple enrollments for each patient, this design has been proposed as a way to increase the recruitment rate in clinical trials. However, it is unknown to what extent recruitment could be increased in practice. We modelled the expected recruitment rate for parallel-group and re-randomization trials in different settings based on estimates from real trials and datasets. We considered three clinical areas: in vitro fertilization, severe asthma exacerbations, and acute sickle cell pain crises. We compared the two designs in terms of the expected time to complete recruitment, and the sample size recruited over a fixed recruitment period. Across the different scenarios we considered, we estimated that re-randomization could reduce the expected time to complete recruitment by between 4 and 22 months (relative reductions of 19% and 45%), or increase the sample size recruited over a fixed recruitment period by between 29% and 171%. Re-randomization can increase recruitment most for trials with a short follow-up period, a long trial recruitment duration, and patients with high rates of treatment episodes. Re-randomization has the potential to increase the recruitment rate in certain settings, and could lead to quicker and more efficient trials in these scenarios.

  4. Implementing collaborative primary care for depression and posttraumatic stress disorder: design and sample for a randomized trial in the U.S. military health system.

    PubMed

    Engel, Charles C; Bray, Robert M; Jaycox, Lisa H; Freed, Michael C; Zatzick, Doug; Lane, Marian E; Brambilla, Donald; Rae Olmsted, Kristine; Vandermaas-Peeler, Russ; Litz, Brett; Tanielian, Terri; Belsher, Bradley E; Evatt, Daniel P; Novak, Laura A; Unützer, Jürgen; Katon, Wayne J

    2014-11-01

    War-related trauma, posttraumatic stress disorder (PTSD), depression and suicide are common in US military members. Often, those affected do not seek treatment due to stigma and barriers to care. When care is sought, it often fails to meet quality standards. A randomized trial is assessing whether collaborative primary care improves quality and outcomes of PTSD and depression care in the US military health system. The aim of this study is to describe the design and sample for a randomized effectiveness trial of collaborative care for PTSD and depression in military members attending primary care. The STEPS-UP Trial (STepped Enhancement of PTSD Services Using Primary Care) is a 6 installation (18 clinic) randomized effectiveness trial in the US military health system. Study rationale, design, enrollment and sample characteristics are summarized. Military members attending primary care with suspected PTSD, depression or both were referred to care management and recruited for the trial (2592), and 1041 gave permission to contact for research participation. Of those, 666 (64%) met eligibility criteria, completed baseline assessments, and were randomized to 12 months of usual collaborative primary care versus STEPS-UP collaborative care. Implementation was locally managed for usual collaborative care and centrally managed for STEPS-UP. Research reassessments occurred at 3-, 6-, and 12-months. Baseline characteristics were similar across the two intervention groups. STEPS-UP will be the first large scale randomized effectiveness trial completed in the US military health system, assessing how an implementation model affects collaborative care impact on mental health outcomes. It promises lessons for health system change. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Sample size calculations for the design of cluster randomized trials: A summary of methodology.

    PubMed

    Gao, Fei; Earnest, Arul; Matchar, David B; Campbell, Michael J; Machin, David

    2015-05-01

    Cluster randomized trial designs are growing in popularity in, for example, cardiovascular medicine research and other clinical areas and parallel statistical developments concerned with the design and analysis of these trials have been stimulated. Nevertheless, reviews suggest that design issues associated with cluster randomized trials are often poorly appreciated and there remain inadequacies in, for example, describing how the trial size is determined and the associated results are presented. In this paper, our aim is to provide pragmatic guidance for researchers on the methods of calculating sample sizes. We focus attention on designs with the primary purpose of comparing two interventions with respect to continuous, binary, ordered categorical, incidence rate and time-to-event outcome variables. Issues of aggregate and non-aggregate cluster trials, adjustment for variation in cluster size and the effect size are detailed. The problem of establishing the anticipated magnitude of between- and within-cluster variation to enable planning values of the intra-cluster correlation coefficient and the coefficient of variation are also described. Illustrative examples of calculations of trial sizes for each endpoint type are included. Copyright © 2015 Elsevier Inc. All rights reserved.
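
    For the simplest case covered by such guidance, comparing two means in a parallel-group CRT, the individually randomized sample size is inflated by the design effect 1 + (m - 1)*ICC, where m is the cluster size. A small sketch with illustrative inputs only:

```python
import math
from scipy.stats import norm

def crt_sample_size(delta, sd, m, icc, alpha=0.05, power=0.80):
    """Per-arm sample size for a continuous outcome in a parallel CRT:
    the individually randomized n inflated by the design effect."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    n_ind = 2 * (za + zb) ** 2 * (sd / delta) ** 2   # per arm, individual randomization
    deff = 1 + (m - 1) * icc                         # design effect
    n_clu = n_ind * deff
    return math.ceil(n_clu), math.ceil(n_clu / m)    # individuals, clusters per arm

# Detect a 0.3 SD difference with clusters of 20 and ICC = 0.05
print(crt_sample_size(delta=0.3, sd=1.0, m=20, icc=0.05))
```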

  6. 75 FR 43172 - Maternal, Infant, and Early Childhood Home Visiting Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-23

    ... the evaluation results have been published in a peer-reviewed journal; or (bb) quasi-experimental... design (i.e. randomized controlled trial [RCT] or quasi-experimental design [QED]), level of attrition... a quasi-experimental design as a study design in which sample members are selected for the program...

  7. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    DOEpatents

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.

  8. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1,000s of km² in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for the Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs compared to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE = 1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach: given fixed sample costs, the mixed design was more precise than a simple random design. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
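
    For accuracy estimates of this kind, a stratified design combines per-stratum accuracies with stratum weights. A minimal sketch with hypothetical counts (not the Utah Gap Analysis data) of the overall-accuracy estimator and its standard error:

```python
import numpy as np

# Stratified accuracy assessment: per-stratum accuracy combined with
# area weights, plus a binomial SE for the overall figure.
n_h = np.array([120, 90, 60])     # reference points per stratum (hypothetical)
x_h = np.array([101, 72, 50])     # points correctly classified
w_h = np.array([0.5, 0.3, 0.2])   # stratum area shares

p_h = x_h / n_h
overall = np.sum(w_h * p_h)
se = np.sqrt(np.sum(w_h**2 * p_h * (1 - p_h) / n_h))
print(f"overall accuracy {overall:.1%} (SE {se:.1%})")
```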

  9. Robust reliable sampled-data control for switched systems with application to flight control

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.

    2016-11-01

    This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim is to obtain a reliable robust sampled-data control design, involving random time delay and an appropriate control gain matrix, that achieves robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying and to obey certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of a reliable robust sampled-data control in terms of solutions to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.

  10. Sample design effects in landscape genetics

    USGS Publications Warehouse

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400 generations). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  11. ARTS: automated randomization of multiple traits for study design.

    PubMed

    Maienschein-Cline, Mark; Lei, Zhengdeng; Gardeux, Vincent; Abbasi, Taimur; Machado, Roberto F; Gordeuk, Victor; Desai, Ankit A; Saraf, Santosh; Bahroos, Neil; Lussier, Yves

    2014-06-01

    Collecting data from large studies on high-throughput platforms, such as microarray or next-generation sequencing, typically requires processing samples in batches. There are often systematic but unpredictable biases from batch-to-batch, so proper randomization of biologically relevant traits across batches is crucial for distinguishing true biological differences from experimental artifacts. When a large number of traits are biologically relevant, as is common for clinical studies of patients with varying sex, age, genotype and medical background, proper randomization can be extremely difficult to prepare by hand, especially because traits may affect biological inferences, such as differential expression, in a combinatorial manner. Here we present ARTS (automated randomization of multiple traits for study design), which aids researchers in study design by automatically optimizing batch assignment for any number of samples, any number of traits and any batch size. ARTS is implemented in Perl and is available at github.com/mmaiensc/ARTS. ARTS is also available in the Galaxy Tool Shed, and can be used at the Galaxy installation hosted by the UIC Center for Research Informatics (CRI) at galaxy.cri.uic.edu. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
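
    ARTS itself is a Perl tool; as a rough illustration of the underlying idea, batch assignment can be chosen by scoring candidate randomizations for trait balance and keeping the best. The random-search sketch below is a stand-in for that optimization, not the ARTS algorithm, and the cohort traits are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

def imbalance(assign, traits, n_batches):
    """Total spread of (standardized) trait means across batches."""
    return sum(np.ptp([t[assign == b].mean() for b in range(n_batches)])
               for t in traits.T)

def randomize_batches(traits, batch_size, tries=2000):
    """Try many randomizations; keep the one with the best trait balance."""
    traits = (traits - traits.mean(0)) / traits.std(0)  # comparable scales
    n = traits.shape[0]
    n_batches = -(-n // batch_size)                     # ceiling division
    base = np.repeat(np.arange(n_batches), batch_size)[:n]
    best, best_score = None, np.inf
    for _ in range(tries):
        assign = rng.permutation(base)
        score = imbalance(assign, traits, n_batches)
        if score < best_score:
            best, best_score = assign, score
    return best

# Hypothetical cohort: sex (binary) and age (continuous), 96 samples
traits = np.column_stack([rng.integers(0, 2, 96), rng.normal(55, 10, 96)])
print(randomize_batches(traits, batch_size=24)[:12])
```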

  12. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.

  13. Head-to-head randomized trials are mostly industry sponsored and almost always favor the industry sponsor.

    PubMed

    Flacco, Maria Elena; Manzoli, Lamberto; Boccia, Stefania; Capasso, Lorenzo; Aleksovska, Katina; Rosso, Annalisa; Scaioli, Giacomo; De Vito, Corrado; Siliquini, Roberta; Villari, Paolo; Ioannidis, John P A

    2015-07-01

    To map the current status of head-to-head comparative randomized evidence and to assess whether funding may impact on trial design and results. From a 50% random sample of the randomized controlled trials (RCTs) published in journals indexed in PubMed during 2011, we selected the trials with ≥ 100 participants, evaluating the efficacy and safety of drugs, biologics, and medical devices through a head-to-head comparison. We analyzed 319 trials. Overall, 238,386 of the 289,718 randomized subjects (82.3%) were included in the 182 trials funded by companies. Of the 182 industry-sponsored trials, only 23 had two industry sponsors and only three involved truly antagonistic comparisons. Industry-sponsored trials were larger, more commonly registered, used more frequently noninferiority/equivalence designs, had higher citation impact, and were more likely to have "favorable" results (superiority or noninferiority/equivalence for the experimental treatment) than nonindustry-sponsored trials. Industry funding [odds ratio (OR) 2.8; 95% confidence interval (CI): 1.6, 4.7] and noninferiority/equivalence designs (OR 3.2; 95% CI: 1.5, 6.6), but not sample size, were strongly associated with "favorable" findings. Fifty-five of the 57 (96.5%) industry-funded noninferiority/equivalence trials got desirable "favorable" results. The literature of head-to-head RCTs is dominated by the industry. Industry-sponsored comparative assessments systematically yield favorable results for the sponsors, even more so when noninferiority designs are involved. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    PubMed

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  15. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the…

  16. The Effect of Cluster Sampling Design in Survey Research on the Standard Error Statistic.

    ERIC Educational Resources Information Center

    Wang, Lin; Fan, Xitao

    Standard statistical methods are used to analyze data that is assumed to be collected using a simple random sampling scheme. These methods, however, tend to underestimate variance when the data is collected with a cluster design, which is often found in educational survey research. The purposes of this paper are to demonstrate how a cluster design…

  17. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    PubMed

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process, and the sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study; they include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of the research literature and provide a reference point for nurses who engage in cardiovascular research.
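
    The probability methods listed here are easy to demonstrate concretely. A short sketch drawing simple random, systematic, and stratified samples from a toy population (the stratum labels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
population = np.arange(1000)          # element IDs
strata = population % 4               # e.g., four clinical sites

# Simple random sampling: every element equally likely
srs = rng.choice(population, size=100, replace=False)

# Systematic sampling: random start, then every k-th element
k = len(population) // 100
start = rng.integers(k)
systematic = population[start::k]

# Stratified sampling: independent SRS within each stratum
stratified = np.concatenate([
    rng.choice(population[strata == s], size=25, replace=False)
    for s in range(4)
])
print(len(srs), len(systematic), len(stratified))
```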

  18. Influence function based variance estimation and missing data issues in case-cohort studies.

    PubMed

    Mark, S D; Katki, H

    2001-12-01

    Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design, in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994) and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance and be completely at random, or may occur as part of the sampling design and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

  19. [Design of the National Surveillance of Nutritional Indicators (MONIN), Peru 2007-2010].

    PubMed

    Campos-Sánchez, Miguel; Ricaldi-Sueldo, Rita; Miranda-Cuadros, Marianella

    2011-06-01

    To describe the design and methods of the national surveillance of nutritional indicators (MONIN) 2007-2010, carried out by INS/CENAN. MONIN was designed as a continuous (repeated cross-sectional) survey with stratified multi-stage random sampling, taking as its universe all children under five and pregnant women residing in Peru, divided into 5 geographical strata and 6 trimesters (randomly permuted weeks, about 78% of the time between November 19, 2007 and April 2, 2010). The total sample was 3,827 children in 361 completed clusters. The dropout rate was 8.4% for clusters, 1.8% for houses, and 13.2% for households. Dropout was 4.2%, 13.3%, 21.2%, 55%, and 29% for anthropometry, hemoglobin, food intake, retinol, and ioduria measurements, respectively. The MONIN design is feasible and useful for the estimation of indicators of childhood malnutrition.

  20. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    PubMed

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
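
    A minimal sketch of the marginal-mean approach on simulated stepped wedge data, fitting a GEE with an exchangeable working correlation via statsmodels; the data-generating values are arbitrary, and the small-sample covariance corrections studied in the article are not implemented here:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(21)

# Toy stepped wedge: 10 clusters cross from control to intervention at
# staggered periods (two clusters per step over 5 periods).
rows = []
cross = rng.permutation(np.repeat(np.arange(1, 6), 2))  # crossover period per cluster
for c in range(10):
    u = rng.normal(0, 0.3)                              # latent cluster effect
    for t in range(5):
        trt = int(t >= cross[c])
        for _ in range(20):                             # individuals per cluster-period
            rows.append((c, t, trt, 0.4 * trt + 0.1 * t + u + rng.normal()))
df = pd.DataFrame(rows, columns=["cluster", "period", "trt", "y"])

# Marginal mean model: period fixed effects plus treatment indicator
X = sm.add_constant(
    pd.get_dummies(df["period"], prefix="p", drop_first=True, dtype=float)
      .assign(trt=df["trt"].astype(float)))
res = sm.GEE(df["y"], X, groups=df["cluster"],
             cov_struct=sm.cov_struct.Exchangeable()).fit()
# Default inference uses the robust sandwich variance; with only 10
# clusters it tends to be biased downward, which is the motivation for
# the small-sample corrections the article evaluates.
print(res.params["trt"], res.bse["trt"])
```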

  1. Single-Phase Mail Survey Design for Rare Population Subgroups

    ERIC Educational Resources Information Center

    Brick, J. Michael; Andrews, William R.; Mathiowetz, Nancy A.

    2016-01-01

    Although using random digit dialing (RDD) telephone samples was the preferred method for conducting surveys of households for many years, declining response and coverage rates have led researchers to explore alternative approaches. The use of address-based sampling (ABS) has been examined for sampling the general population and subgroups, most…

  2. What Are Probability Surveys used by the National Aquatic Resource Surveys?

    EPA Pesticide Factsheets

    The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.

  3. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.

  4. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    PubMed Central

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289
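
    The heart of such a design is choosing phase-two selection probabilities from the auxiliary variable and the measurement costs. Below is a schematic Neyman-type allocation (coarsened W, assumed conditional SDs and costs, probabilities scaled to a budget and clipped), offered as an illustration of the idea rather than the authors' exact optimality formula:

```python
import numpy as np

rng = np.random.default_rng(11)

# Phase one: cheap auxiliary W on everyone; phase two: expensive Y on a subset.
n = 5000
w = rng.integers(0, 3, n)                      # coarsened auxiliary variable
sd_y_given_w = np.array([0.5, 1.0, 2.0])[w]    # assumed conditional SDs of Y | W
cost = np.array([1.0, 1.0, 4.0])[w]            # assumed per-unit measurement cost

budget = 1500.0
# Neyman-type rule: selection probability proportional to SD / sqrt(cost),
# scaled so the expected phase-two cost matches the budget.
raw = sd_y_given_w / np.sqrt(cost)
pi = np.clip(raw * budget / np.sum(raw * cost), 0, 1)

phase2 = rng.random(n) < pi
print(f"expected cost {np.sum(pi * cost):.0f}, selected {phase2.sum()}")
```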

  5. Linear combinations come alive in crossover designs.

    PubMed

    Shuster, Jonathan J

    2017-10-30

    Before learning anything about statistical inference in beginning service courses in biostatistics, students learn how to calculate the mean and variance of linear combinations of random variables. Practical precalculus examples of the importance of these exercises can be helpful for instructors, the target audience of this paper. We shall present applications to the "1-sample" and "2-sample" methods for randomized short-term 2-treatment crossover studies, where patients experience both treatments in random order with a "washout" between the active treatment periods. First, we show that the 2-sample method is preferred as it eliminates "conditional bias" when sample sizes by order differ and produces a smaller variance. We also demonstrate that it is usually advisable to use the differences in posttests (ignoring baseline and post washout values) rather than the differences between the changes in treatment from the start of the period to the end of the period ("delta of delta"). Although the intent is not to provide a definitive discussion of crossover designs, we provide a section and references to excellent alternative methods, where instructors can provide motivation to students to explore the topic in greater detail in future readings or courses. Copyright © 2017 John Wiley & Sons, Ltd.
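
    A worked sketch of the 2-sample analysis of posttest differences in an AB/BA crossover: within-patient period differences cancel the subject effect, and the two order groups (of unequal size, as the article highlights) are compared with an ordinary two-sample t test. All parameter values are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

n1, n2, tau, period = 30, 40, 0.5, 0.3       # order-group sizes, effects
subj_ab = rng.normal(0, 1, n1)[:, None]      # random subject effects
subj_ba = rng.normal(0, 1, n2)[:, None]
noise = lambda n: rng.normal(0, 0.7, (n, 2))

# AB order: treatment in period 1; BA order: treatment in period 2
y_ab = subj_ab + noise(n1) + np.array([tau, period])
y_ba = subj_ba + noise(n2) + np.array([0.0, tau + period])

d_ab = y_ab[:, 0] - y_ab[:, 1]               # within-patient differences:
d_ba = y_ba[:, 0] - y_ba[:, 1]               # the subject effect cancels

est = (d_ab.mean() - d_ba.mean()) / 2        # "2-sample" crossover estimate of tau
t, p = stats.ttest_ind(d_ab, d_ba)           # unequal order sizes handled naturally
print(f"tau-hat = {est:.3f}, p = {p:.3g}")
```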

  6. Intraclass Correlation Values for Planning Group-Randomized Trials in Education

    ERIC Educational Resources Information Center

    Hedges, Larry V.; Hedberg, E. C.

    2007-01-01

    Experiments that assign intact groups to treatment conditions are increasingly common in social research. In educational research, the groups assigned are often schools. The design of group-randomized experiments requires knowledge of the intraclass correlation structure to compute statistical power and sample sizes required to achieve adequate…

  7. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Treesearch

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...
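
    A sketch of the simplest case the abstract alludes to: a systematic sample with a single random start, whose sample mean is design-unbiased for the population mean. The population is hypothetical; note that one systematic sample supplies no design-unbiased variance estimator without further randomization restrictions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population with a smooth trend (N divisible by k for simplicity).
N, k = 1000, 20                       # interval k gives n = N/k = 50
y = 10 + 0.01 * np.arange(N) + rng.normal(0, 1, N)

# One random start in {0, ..., k-1} makes every unit's inclusion probability
# 1/k, so the sample mean is design-unbiased for the population mean.
start = rng.integers(k)
sample = y[start::k]
print(f"sample mean = {sample.mean():.3f}, population mean = {y.mean():.3f}")
```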

  8. Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.; Dye, Charles

    2016-01-01

    An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure changes in sample size at different levels of the design will impact precision differently. Furthermore, there…

  9. Rigorously Assessing Whether the Data Backs the Back School

    PubMed Central

    Vinh, David T.; Johnson, Craig W.; Phelps, Cynthia L.

    2003-01-01

    A rigorous between-subjects methodology employing independent random samples and having broad clinical applicability was designed and implemented to evaluate the effectiveness of back safety and patient transfer training interventions for both hospital nurses and nursing assistants. Effects upon self-efficacy, cognitive, and affective measures are assessed for each of three back safety procedures. The design solves the problem of obtaining randomly assigned independent controls where all experimental subjects must participate in the training interventions. PMID:14728544

  10. Adaptive sampling in research on risk-related behaviors.

    PubMed

    Thompson, Steven K; Collins, Linda M

    2002-11-01

    This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.

  11. Design and analysis of group-randomized trials in cancer: A review of current practices.

    PubMed

    Murray, David M; Pals, Sherri L; George, Stephanie M; Kuzmichev, Andrey; Lai, Gabriel Y; Lee, Jocelyn A; Myles, Ranell L; Nelson, Shakira M

    2018-06-01

    The purpose of this paper is to summarize current practices for the design and analysis of group-randomized trials involving cancer-related risk factors or outcomes and to offer recommendations to improve future trials. We searched for group-randomized trials involving cancer-related risk factors or outcomes that were published or online in peer-reviewed journals in 2011-15. During 2016-17, in Bethesda MD, we reviewed 123 articles from 76 journals to characterize their design and their methods for sample size estimation and data analysis. Only 66 (53.7%) of the articles reported appropriate methods for sample size estimation. Only 63 (51.2%) reported exclusively appropriate methods for analysis. These findings suggest that many investigators do not adequately attend to the methodological challenges inherent in group-randomized trials. These practices can lead to underpowered studies, to an inflated type 1 error rate, and to inferences that mislead readers. Investigators should work with biostatisticians or other methodologists familiar with these issues. Funders and editors should ensure careful methodological review of applications and manuscripts. Reviewers should ensure that studies are properly planned and analyzed. These steps are needed to improve the rigor and reproducibility of group-randomized trials. The Office of Disease Prevention (ODP) at the National Institutes of Health (NIH) has taken several steps to address these issues. ODP offers an online course on the design and analysis of group-randomized trials. ODP is working to increase the number of methodologists who serve on grant review panels. ODP has developed standard language for the Application Guide and the Review Criteria to draw investigators' attention to these issues. Finally, ODP has created a new Research Methods Resources website to help investigators, reviewers, and NIH staff better understand these issues. Published by Elsevier Inc.

  12. Strategies for Improving Power in School-Randomized Studies of Professional Development.

    PubMed

    Kelcey, Ben; Phelps, Geoffrey

    2013-12-01

    Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when and the extent to which there is evidence that covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power. Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment. The analyses suggested that teachers' knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies. © The Author(s) 2014.
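
    One way to see the pretest's leverage is the standard two-level minimum-detectable-effect-size approximation (in the style popularized by Bloom and colleagues); the function and parameter values below are an illustrative sketch, not the paper's estimates.

```python
import math

def mdes(J, n, icc, r2_between=0.0, r2_within=0.0, p_treat=0.5, multiplier=2.8):
    """Approximate minimum detectable effect size for a two-level
    school-randomized design.

    J: number of schools; n: teachers per school; icc: intraclass correlation;
    r2_between / r2_within: variance explained by covariates at each level;
    multiplier ~ 2.8 corresponds to alpha = .05 (two-sided) and power = .80.
    """
    denom = p_treat * (1 - p_treat) * J
    var_term = (icc * (1 - r2_between) / denom
                + (1 - icc) * (1 - r2_within) / (denom * n))
    return multiplier * math.sqrt(var_term)

# Covariance adjustment on a pretest that explains school-level variance
# shrinks the MDES for the same number of schools (illustrative numbers).
print(mdes(J=60, n=5, icc=0.20))                    # no covariates
print(mdes(J=60, n=5, icc=0.20, r2_between=0.7))    # strong pretest adjustment
```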

  13. Optimal sampling with prior information of the image geometry in microfluidic MRI.

    PubMed

    Han, S H; Cho, H; Paulsen, J L

    2015-03-01

    Recent advances in MRI acquisition for microscopic flows enable unprecedented sensitivity and speed in a portable NMR/MRI microfluidic analysis platform. However, the application of MRI to microfluidics usually suffers from prolonged acquisition times owing to the combination of the required high resolution and wide field of view necessary to resolve details within microfluidic channels. When prior knowledge of the image geometry is available as a binarized image, such as for microfluidic MRI, it is possible to reduce sampling requirements by incorporating this information into the reconstruction algorithm. The current approach to the design of the partial weighted random sampling schemes is to bias toward the high signal energy portions of the binarized image geometry after Fourier transformation (i.e. in its k-space representation). Although this sampling prescription is frequently effective, it can be far from optimal in certain limiting cases, such as for a 1D channel, or more generally yield inefficient sampling schemes at low degrees of sub-sampling. This work explores the tradeoff between signal acquisition and incoherent sampling on image reconstruction quality given prior knowledge of the image geometry for weighted random sampling schemes, finding that the optimal distribution is not robustly determined by maximizing the acquired signal but by interpreting its marginal change with respect to the sub-sampling rate. We develop a corresponding sampling design methodology that deterministically yields a near optimal sampling distribution for image reconstructions incorporating knowledge of the image geometry. The technique robustly identifies optimal weighted random sampling schemes and provides improved reconstruction fidelity for multiple 1D and 2D images, when compared to prior techniques for sampling optimization given knowledge of the image geometry. Copyright © 2015 Elsevier Inc. All rights reserved.
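
    A sketch of the sampling prescription the abstract describes as current practice: biasing a weighted random k-space mask toward high signal energy of the binarized geometry. The geometry, the 0.25 sampling fraction, and the small uniform floor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)

# Hypothetical binarized geometry: a rectangular channel in a 64x64 image.
nx, ny = 64, 64
geom = np.zeros((nx, ny))
geom[28:36, 10:54] = 1.0

# Bias sampling toward high signal energy in the geometry's k-space.
energy = np.abs(np.fft.fftshift(np.fft.fft2(geom))) ** 2
p = energy.ravel() / energy.sum()
p = 0.9 * p + 0.1 / p.size        # uniform floor keeps every location reachable

m = int(0.25 * nx * ny)           # 4x undersampling
idx = rng.choice(p.size, size=m, replace=False, p=p)
mask = np.zeros(p.size, dtype=bool)
mask[idx] = True
mask = mask.reshape(nx, ny)
print(f"sampled {mask.sum()} of {mask.size} k-space locations")
```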

  14. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, equating methods for random group design using small samples through factors such as sample size, difference in difficulty between forms, and guessing parameter was aimed for comparison. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  15. Convenience Samples and Caregiving Research: How Generalizable Are the Findings?

    ERIC Educational Resources Information Center

    Pruchno, Rachel A.; Brill, Jonathan E.; Shands, Yvonne; Gordon, Judith R.; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine

    2008-01-01

    Purpose: We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Design and Methods: Women aged 50 to 64 who work full time and…

  16. Sampling design for spatially distributed hydrogeologic and environmental processes

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1992-01-01

    A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations are necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.

  17. Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.

    PubMed

    Williamson, Graham R

    2003-11-01

    This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
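
    A minimal sketch of the proposed alternative, a randomization (permutation) test, which conditions on the units at hand instead of assuming random sampling from a population; the data are simulated and the statistic (difference in means) is just one common choice.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical two-group comparison on a convenience sample.
a = rng.normal(0.5, 1.0, size=30)
b = rng.normal(0.0, 1.0, size=32)
observed = a.mean() - b.mean()

# Permutation test: reshuffle group labels and recompute the statistic.
pooled = np.concatenate([a, b])
n_perm, count = 10000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:a.size].mean() - perm[a.size:].mean()
    if abs(diff) >= abs(observed):
        count += 1
print(f"two-sided permutation p-value: {(count + 1) / (n_perm + 1):.4f}")
```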

  18. Adiposity and Quality of Life: A Case Study from an Urban Center in Nigeria

    ERIC Educational Resources Information Center

    Akinpelu, Aderonke O.; Akinola, Odunayo T.; Gbiri, Caleb A.

    2009-01-01

    Objective: To determine the relationship between adiposity indices and quality of life (QOL) of residents of a housing estate in Lagos, Nigeria. Design: Cross-sectional survey employing a multistep random sampling method. Setting: Urban residential estate. Participants: This study involved 900 randomly selected residents of Abesan Housing Estate, Lagos,…

  19. A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.

    ERIC Educational Resources Information Center

    Kraemer, Helena Chmura; Thiemann, Sue

    1989-01-01

    Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…

  20. The Role of Religiosity in Influencing Adolescent and Adult Alcohol Use in Trinidad

    ERIC Educational Resources Information Center

    Rollocks, Steve C. T.; Dass, Natasha; Seepersad, Randy; Mohammed, Linda

    2008-01-01

    This study examined the role of religiosity among adolescents' and adults' alcohol use in Trinidad. A stratified random sample design of 369 adolescents and 210 adult parents belonging to the various religious groups in Trinidad was employed. Participants were randomly selected from various educational districts across Trinidad. Adolescent…

  1. Teachers' Attitude towards Implementation of Learner-Centered Methodology in Science Education in Kenya

    ERIC Educational Resources Information Center

    Ndirangu, Caroline

    2017-01-01

    This study aims to evaluate teachers' attitude towards implementation of learner-centered methodology in science education in Kenya. The study used a survey design methodology, adopting the purposive, stratified random and simple random sampling procedures and hypothesised that there was no significant relationship between the head teachers'…

  2. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…

  3. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

    In many community-based surveys, multi-level sampling is inherent in the design. In the design of these studies, especially to calculate the appropriate sample size, investigators need good estimates of intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variation inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variances were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at village and block levels. Between-cluster variation was evident at each level of clustering. In these estimates, ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
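
    For the balanced case, the one-way ANOVA estimator of the ICC and the resulting design effect can be computed directly; the sketch below uses hypothetical village-level data, not the study's survey.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical balanced survey: 50 villages (clusters), 16 households each.
k, m = 50, 16
village_effect = rng.normal(0, 0.5, size=k)          # between-cluster SD 0.5
y = village_effect[:, None] + rng.normal(0, 1.5, size=(k, m))

# One-way ANOVA estimator of the intraclass correlation (balanced case):
# ICC = (MSB - MSW) / (MSB + (m - 1) * MSW).
grand = y.mean()
msb = m * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (m - 1))
icc = (msb - msw) / (msb + (m - 1) * msw)

design_effect = 1 + (m - 1) * icc   # variance inflation due to clustering
print(f"ICC = {icc:.3f}, design effect = {design_effect:.2f}")
```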

  4. Designing a national soil erosion monitoring network for England and Wales

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Rawlins, Barry; Anderson, Karen; Evans, Martin; Farrow, Luke; Glendell, Miriam; James, Mike; Rickson, Jane; Quine, Timothy; Quinton, John; Brazier, Richard

    2014-05-01

    Although soil erosion is recognised as a significant threat to sustainable land use and may be a priority for action in any forthcoming EU Soil Framework Directive, those responsible for setting national policy with respect to erosion are constrained by a lack of robust, representative data at large spatial scales. This reflects the process-orientated nature of much soil erosion research. Recognising this limitation, the UK Department for Environment, Food and Rural Affairs (Defra) established a project to pilot a cost-effective framework for monitoring of soil erosion in England and Wales (E&W). The pilot will compare different soil erosion monitoring methods at a site scale and provide statistical information for the final design of the full national monitoring network that will: (i) provide unbiased estimates of the spatial mean of soil erosion rate across E&W (tonnes ha^-1 yr^-1) for each of three land-use classes (arable and horticultural; grassland; upland and semi-natural habitats); and (ii) quantify the uncertainty of these estimates with confidence intervals. Probability (design-based) sampling provides the most efficient unbiased estimates of spatial means. In this study, a 16-hectare area (a square of 400 x 400 m) positioned at the centre of a 1-km grid cell, selected at random from mapped land use across E&W, provided the sampling support for measurement of erosion rates, with at least 94% of the support area corresponding to the target land-use classes. Very small or zero erosion rates, likely to be encountered at many sites, reduce the sampling efficiency and make it difficult to compare different methods of soil erosion monitoring. Therefore, to increase the proportion of samples with larger erosion rates without biasing our estimates, we increased the inclusion probability density in areas where the erosion rate is likely to be large by using stratified random sampling. First, each sampling domain (land-use class in E&W) was divided into strata, e.g. two sub-domains within which small or no erosion rates and moderate or larger erosion rates, respectively, are expected. Each stratum was then sampled independently and at random. The sample density need not be equal in the two strata, but it is known and is accounted for in the estimation of the mean and its standard error. To divide the domains into strata we used information on slope angle, previous interpretation of the erosion susceptibility of the soil associations that correspond to the soil map of E&W at 1:250 000 (Soil Survey of England and Wales, 1983), and visual interpretation of evidence of erosion from aerial photography. While each domain could be stratified on the basis of the first two criteria, air-photo interpretation across the whole country was not feasible. For this reason we used a two-phase random sampling for stratification (TPRS) design (de Gruijter et al., 2006). First, we formed an initial random sample of 1-km grid cells from the target domain. Second, each cell was allocated to a stratum on the basis of the three criteria. A subset of the selected cells from each stratum was then selected for field survey at random, with a specified sampling density for each stratum, so as to increase the proportion of cells where moderate or larger erosion rates were expected.
Once measurements of erosion have been made, an estimate of the spatial mean of the erosion rate over the target domain, its standard error and associated uncertainty can be calculated by an expression which accounts for the estimated proportions of the two strata within the initial random sample. References: de Gruijter, J.J., Brus, D.J., Bierkens, M.F.P. & Knotters, M. 2006. Sampling for Natural Resource Monitoring. Springer, Berlin. Soil Survey of England and Wales. 1983. National Soil Map, NATMAP Vector, 1:250,000. Cranfield University.
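
    A sketch of the TPRS estimation step under simplified assumptions: two strata, hypothetical gamma-distributed erosion rates, and a first-order variance formula that ignores the (usually small) phase-one contribution from estimating the stratum weights.

```python
import numpy as np

rng = np.random.default_rng(6)

# Phase one: a random sample of 1-km cells is classified into two strata.
n_phase1 = 400
stratum = rng.random(n_phase1) < 0.25               # True = "erosion likely"
w = np.array([(~stratum).mean(), stratum.mean()])   # estimated stratum weights

# Phase two: field measurements (t/ha/yr), oversampling the likely stratum.
n_h = np.array([30, 60])
y0 = rng.gamma(shape=0.5, scale=0.2, size=n_h[0])   # low-erosion stratum
y1 = rng.gamma(shape=2.0, scale=1.0, size=n_h[1])   # high-erosion stratum

means = np.array([y0.mean(), y1.mean()])
variances = np.array([y0.var(ddof=1), y1.var(ddof=1)])

# Stratified estimator of the domain mean with its (first-order) standard error.
mean_est = np.sum(w * means)
se = np.sqrt(np.sum(w**2 * variances / n_h))
print(f"mean erosion rate = {mean_est:.3f} +/- {1.96 * se:.3f} t/ha/yr")
```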

  5. Sampling considerations for disease surveillance in wildlife populations

    USGS Publications Warehouse

    Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.

    2008-01-01

    Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
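
    A small simulation in the spirit of the paper's landscape-based approach, with hypothetical habitat and access probabilities, showing how a convenience sample that under-represents high-prevalence habitat biases the prevalence estimate.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical landscape: prevalence is clustered in high-risk habitat, and
# "convenient" animals (e.g., harvested near roads) mostly come from
# low-prevalence habitat.
N = 20000
habitat = rng.random(N) < 0.3          # 30% of animals in high-risk habitat
prev = np.where(habitat, 0.15, 0.02)
infected = rng.random(N) < prev
true_prev = infected.mean()

# Probability sample: simple random sample of animals.
srs = rng.choice(N, size=500, replace=False)
# Convenience sample: accessible animals are mostly from low-risk habitat.
access = np.where(habitat, 0.2, 1.0)
conv = rng.choice(N, size=500, replace=False, p=access / access.sum())

print(f"true prevalence: {true_prev:.3f}")
print(f"SRS estimate: {infected[srs].mean():.3f}")
print(f"convenience estimate: {infected[conv].mean():.3f}  (biased low)")
```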

  6. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    PubMed

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data on complex disease, a large portion of single-nucleotide polymorphisms (SNPs) is usually irrelevant to the disease. A simple random sampling method in a random forest, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs; however, it is too time-consuming to be practical for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection when generating decision trees in a random forest for high-dimensional GWA data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace from which a decision tree is generated. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, avoids the very high computational cost of an exhaustive search for an optimal mtry, and maintains the randomness of the random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) to demonstrate that the proposed stratified sampling method is effective, generating random forests with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also highlight some interesting genes identified by the method, which may be associated with neurological disorders and merit further biological investigation.
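
    A sketch of the stratified feature-subspace step (equal-width discretization of an informativeness score, then an equal draw from every group); the scores, group count, and subspace size are illustrative assumptions, and tree growing itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical informativeness score per SNP (e.g., a chi-square statistic),
# discretized into equal-width groups.
n_snps = 100000
score = rng.exponential(scale=1.0, size=n_snps)

n_groups, per_group = 5, 40
edges = np.linspace(score.min(), score.max(), n_groups + 1)
group = np.clip(np.digitize(score, edges[1:-1]), 0, n_groups - 1)

def stratified_subspace():
    """Sample an equal number of SNP indices from each informativeness group."""
    chosen = []
    for g in range(n_groups):
        members = np.flatnonzero(group == g)
        take = min(per_group, members.size)   # guard against sparse top groups
        chosen.append(rng.choice(members, size=take, replace=False))
    return np.concatenate(chosen)

subspace = stratified_subspace()
print(subspace.size, "SNPs selected for one tree")
```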

  7. Basic Design, a Needed Foundation for Designing a Successful Garment: A Case Study of Dressmakers in the Ho Municipality, Volta Region, Ghana

    ERIC Educational Resources Information Center

    Gbetodeme, Selom; Amankwa, Joana; Dzegblor, Noble Komla

    2016-01-01

    To facilitate the design process in every art form, there are certain guidelines that all professional designers should use. These are known as elements and principles of design. This study is a survey carried out to assess the knowledge of dressmakers about basic design in the Ho Municipality of Ghana. Sixty dressmakers were randomly sampled for…

  8. Sparsely sampling the sky: Regular vs. random sampling

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.

    2015-09-01

    Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.

  9. Multilattice sampling strategies for region of interest dynamic MRI.

    PubMed

    Rilling, Gabriel; Tao, Yuehui; Marshall, Ian; Davies, Mike E

    2013-08-01

    A multilattice sampling approach is proposed for dynamic MRI with Cartesian trajectories. It relies on the use of sampling patterns composed of several different lattices and exploits an image model where only some parts of the image are dynamic, whereas the rest is assumed static. Given the parameters of such an image model, the methodology followed for the design of a multilattice sampling pattern adapted to the model is described. The multi-lattice approach is compared to single-lattice sampling, as used by traditional acceleration methods such as UNFOLD (UNaliasing by Fourier-Encoding the Overlaps using the temporal Dimension) or k-t BLAST, and random sampling used by modern compressed sensing-based methods. On the considered image model, it allows more flexibility and higher accelerations than lattice sampling and better performance than random sampling. The method is illustrated on a phase-contrast carotid blood velocity mapping MR experiment. Combining the multilattice approach with the KEYHOLE technique allows up to 12× acceleration factors. Simulation and in vivo undersampling results validate the method. Compared to lattice and random sampling, multilattice sampling provides significant gains at high acceleration factors. © 2012 Wiley Periodicals, Inc.

  10. Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.

    PubMed

    Ette, E I; Howie, C A; Kelman, A W; Whiting, B

    1995-05-01

    A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three- and four-time-point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three- and four-time-point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
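
    A compact Monte Carlo sketch in the abstract's setting (one-compartment IV bolus with destructive sampling); the naive pooled log-linear fit and all parameter values are simplifying assumptions for illustration, not the authors' estimation method.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical "quantic" design: one sample per animal,
# C(t) = (Dose/V) * exp(-(CL/V) * t).
dose, cl_pop, v_pop = 100.0, 2.0, 10.0
omega = 0.2          # inter-animal variability (lognormal, ~20% CV)
sigma = 0.1          # residual proportional error

def simulate_design(times, n_per_time, n_trials=500):
    """Return mean and SD of the percent prediction error for clearance."""
    errs = []
    for _ in range(n_trials):
        t = np.repeat(times, n_per_time)
        cl = cl_pop * np.exp(rng.normal(0, omega, t.size))
        v = v_pop * np.exp(rng.normal(0, omega, t.size))
        c = (dose / v) * np.exp(-(cl / v) * t)
        c *= np.exp(rng.normal(0, sigma, t.size))
        # Naive pooled fit: log C = log(Dose/V) - (CL/V) * t  (linear in t).
        slope, intercept = np.polyfit(t, np.log(c), 1)
        v_hat = dose / np.exp(intercept)
        cl_hat = -slope * v_hat
        errs.append(100 * (cl_hat - cl_pop) / cl_pop)
    errs = np.asarray(errs)
    return errs.mean(), errs.std()

# Compare a three- vs a four-time-point destructive design (12 animals each).
print(simulate_design(np.array([0.5, 2.0, 8.0]), n_per_time=4))
print(simulate_design(np.array([0.5, 2.0, 5.0, 8.0]), n_per_time=3))
```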

  11. Critical appraisal of arguments for the delayed-start design proposed as alternative to the parallel-group randomized clinical trial design in the field of rare disease.

    PubMed

    Spineli, Loukia M; Jenz, Eva; Großhennig, Anika; Koch, Armin

    2017-08-17

    A number of papers have proposed or evaluated the delayed-start design as an alternative to the standard two-arm parallel group randomized clinical trial (RCT) design in the field of rare disease. However the discussion is felt to lack a sufficient degree of consideration devoted to the true virtues of the delayed start design and the implications either in terms of required sample-size, overall information, or interpretation of the estimate in the context of small populations. To evaluate whether there are real advantages of the delayed-start design particularly in terms of overall efficacy and sample size requirements as a proposed alternative to the standard parallel group RCT in the field of rare disease. We used a real-life example to compare the delayed-start design with the standard RCT in terms of sample size requirements. Then, based on three scenarios regarding the development of the treatment effect over time, the advantages, limitations and potential costs of the delayed-start design are discussed. We clarify that delayed-start design is not suitable for drugs that establish an immediate treatment effect, but for drugs with effects developing over time, instead. In addition, the sample size will always increase as an implication for a reduced time on placebo resulting in a decreased treatment effect. A number of papers have repeated well-known arguments to justify the delayed-start design as appropriate alternative to the standard parallel group RCT in the field of rare disease and do not discuss the specific needs of research methodology in this field. The main point is that a limited time on placebo will result in an underestimated treatment effect and, in consequence, in larger sample size requirements compared to those expected under a standard parallel-group design. This also impacts on benefit-risk assessment.

  12. Designing efficient surveys: spatial arrangement of sample points for detection of invasive species

    Treesearch

    Ludek Berec; John M. Kean; Rebecca Epanchin-Niell; Andrew M. Liebhold; Robert G. Haight

    2015-01-01

    Effective surveillance is critical to managing biological invasions via early detection and eradication. The efficiency of surveillance systems may be affected by the spatial arrangement of sample locations. We investigate how the spatial arrangement of sample points, ranging from random to fixed grid arrangements, affects the probability of detecting a target...

  13. THE RELATIONSHIP BETWEEN TEMPERATURE, PHYSICAL HABITAT AND FISH ASSEMBLAGE DATA IN A STATE WIDE PROBABILITY SURVEY OF OREGON STREAMS

    EPA Science Inventory

    To assess the ecological condition of streams and rivers in Oregon, we sampled 146 sites in summer 1997 as part of the U.S. EPA's Environmental Monitoring and Assessment Program. Sample reaches were selected using a systematic, randomized sample design from the blue-line n...

  14. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    ERIC Educational Resources Information Center

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  15. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Treesearch

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationistis typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because total census is raely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  16. TOC: Table of Contents Practices of Primary Journals--Recommendations for Monolingual, Multilingual and International Journals.

    ERIC Educational Resources Information Center

    Juhasz, Stephen; And Others

    Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected. The method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed. Its purpose was to find uniformity and…

  17. Outcomes of an HIV Prevention Peer Group Intervention for Rural Adults in Malawi

    ERIC Educational Resources Information Center

    Kaponda, Chrissie P. N.; Norr, Kathleen F.; Crittenden, Kathleen S.; Norr, James L.; McCreary, Linda L.; Kachingwe, Sitingawawo I.; Mbeba, Mary M.; Jere, Diana L. N.; Dancy, Barbara L.

    2011-01-01

    This study used a quasi-experimental design to evaluate a six-session peer group intervention for HIV prevention among rural adults in Malawi. Two rural districts were randomly assigned to intervention and control conditions. Independent random samples of community adults compared the districts at baseline and at 6 and 18 months postintervention.…

  18. Reform-Based-Instructional Method and Learning Styles on Students' Achievement and Retention in Mathematics: Administrative Implications

    ERIC Educational Resources Information Center

    Modebelu, M. N.; Ogbonna, C. C.

    2014-01-01

    This study aimed at determining the effect of reform-based instructional method and learning styles on students' achievement and retention in mathematics. A sample size of 119 students was randomly selected. A quasi-experimental design comprising pre-test, post-test, and randomized control group was employed. The Collin Rose learning styles…

  19. Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria

    ERIC Educational Resources Information Center

    Emetere, Moses E.

    2014-01-01

    This study was designed to investigate the validity of the stress complexity model (SCM) to teaching and learning of school physics in Abuja municipal area council of Abuja, North. About two hundred students were randomly selected by a simple random sampling technique from some schools within the Abuja municipal area council. A survey research…

  20. 1979 Reserve Force Studies Surveys: Survey Design, Sample Design and Administrative Procedures,

    DTIC Science & Technology

    1981-08-01

    three factors: the need for a statistically significant number of usable questionnaires from different groups within the random samples and from... Because of the multipurpose nature of these surveys and the large number of questions needed to fully address some of the topics covered, we... varies. Collection of data at the unit level is needed to accurately estimate actual reserve compensation and benefits and their possible role in both

  1. Are there Benefits to Combining Regional Probabilistic Survey and Historic Targeted Environmental Monitoring Data to Improve Our Understanding of Overall Regional Estuary Environmental Status?

    NASA Astrophysics Data System (ADS)

    Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.

    2016-02-01

    A regional probabilistic survey of 20 randomly selected stations, where water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampling parameters included the water column (temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients) and sediments (macroinvertebrates, chemistry, i.e., trace metals and hydrocarbons, and grain size). The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted non-random designs were utilized to assess specific points of interest and cannot be used to make inferences about distributions of environmental parameters. Owing to differences in monitoring objectives between probabilistic and targeted designs, there has been limited assessment of whether benefits exist in combining the two approaches. This study evaluates whether a combined approach, using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data, can provide a new perspective on better understanding the environmental status of these estuaries.

  2. Statistical inference for the additive hazards model under outcome-dependent sampling.

    PubMed

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo

    2015-09-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure.

  3. Statistical inference for the additive hazards model under outcome-dependent sampling

    PubMed Central

    Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo

    2015-01-01

    Cost-effective study design and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters for the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and the proposed estimator is more efficient than other estimators. We apply our method to analyze a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to study the cancer risk associated with radon exposure. PMID:26379363

  4. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    PubMed

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
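
    A sketch of the regression-estimator idea that underlies using MODIS-derived deforestation as an auxiliary for TM-derived deforestation (the block data are hypothetical; the paper's actual models and strata are more elaborate).

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical blocks: MODIS-derived deforestation (x) known everywhere,
# Landsat TM-derived deforestation (y) measured only on a sample.
N = 2000
x = rng.gamma(2.0, 50.0, size=N)                 # ha per block, all blocks
y = 0.9 * x + rng.normal(0, 15, size=N)          # TM "truth", correlated with x

n = 100
s = rng.choice(N, size=n, replace=False)

# Linear regression estimator of the total, exploiting the auxiliary variable.
b = np.polyfit(x[s], y[s], 1)[0]
t_reg = N * (y[s].mean() + b * (x.mean() - x[s].mean()))
t_srs = N * y[s].mean()                          # expansion estimator, no auxiliary
print(f"true total {y.sum():.0f}, regression {t_reg:.0f}, SRS-only {t_srs:.0f}")
```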

  5. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    PubMed Central

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742

  6. Practical characteristics of adaptive design in phase 2 and 3 clinical trials.

    PubMed

    Sato, A; Shimura, M; Gosho, M

    2018-04-01

    Adaptive design methods are expected to be ethical, reflect real medical practice, increase the likelihood of research and development success and reduce the allocation of patients into ineffective treatment groups by the early termination of clinical trials. However, the comprehensive details regarding which types of clinical trials will include adaptive designs remain unclear. We examined the practical characteristics of adaptive design used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of adaptive designs and disease areas employed in the adaptive design trials. Our survey identified 245 adaptive design clinical trials. The number of trials by the publication year increased from 2012 to 2013 and did not greatly change afterwards. The most frequently used adaptive design was group sequential design (n = 222, 90.6%), especially for neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. The adaptive randomization (n = 8, 3.3%) and adaptive seamless design (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two trials of eight trials in total (25%). Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where study phase was not specified, whereas the other types of adaptive designs were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among the 202 trials, 132 (54% of 245 trials) had fewer randomized patients than initially planned. This result supports the motive to use adaptive design to make study durations shorter and include a smaller number of subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions. The applications were frequently reported in neoplasm or cardiovascular clinical trials. The adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.

  7. Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.

    2009-02-18

    The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceeds levels of concern, at least one of the samples would be taken from a contaminated area, at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
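
    For simple random sampling, the confidence statement quoted above corresponds to the standard hot-spot detection formula n >= ln(1 - C) / ln(1 - f); the sketch below illustrates that generic formula, not necessarily VSP's exact algorithm.

```python
import math

def n_for_detection(confidence=0.95, hot_fraction=0.05):
    """Random-sampling size so that, if at least `hot_fraction` of the area
    is contaminated, at least one sample lands in it with `confidence`."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - hot_fraction))

# e.g., 95% confidence of hitting contamination covering >= 5% of the area
print(n_for_detection(0.95, 0.05))   # -> 59 samples
```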

  8. 10 CFR 431.383 - Enforcement process for electric motors.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... general purpose electric motor of equivalent electrical design and enclosure rather than replacing the... equivalent electrical design and enclosure rather than machining and attaching an endshield. ... sample of up to 20 units will then be randomly selected from one or more subdivided groups within the...

  9. An in silico approach helped to identify the best experimental design, population, and outcome for future randomized clinical trials.

    PubMed

    Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice

    2016-01-01

    The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: low duration (parallel, early escape), high statistical power and precision (crossover), and higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine. Copyright © 2016 Elsevier Inc. All rights reserved.
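
    A stripped-down version of this kind of in silico comparison, simulating statistical power for a parallel-group versus a crossover design (normal outcomes, no period or carryover effects, and illustrative parameter values).

```python
import numpy as np

rng = np.random.default_rng(14)

def power(design, n, tau=0.8, sd_subj=2.0, sd_err=1.0, sims=2000):
    """Empirical power of a two-sided z-type test at the 5% level."""
    hits = 0
    for _ in range(sims):
        if design == "parallel":
            a = rng.normal(0, sd_subj, n) + tau + rng.normal(0, sd_err, n)
            b = rng.normal(0, sd_subj, n) + rng.normal(0, sd_err, n)
            t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1)/n + b.var(ddof=1)/n)
        else:  # crossover: the subject effect cancels in the within-subject diff
            d = tau + rng.normal(0, sd_err, n) - rng.normal(0, sd_err, n)
            t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
        hits += abs(t) > 1.96
    return hits / sims

for design in ("parallel", "crossover"):
    print(design, power(design, n=25))
```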

  10. Secondary outcome analysis for data from an outcome-dependent sampling design.

    PubMed

    Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo

    2018-04-22

    Outcome-dependent sampling (ODS) scheme is a cost-effective way to conduct a study. For a study with continuous primary outcome, an ODS scheme can be implemented where the expensive exposure is only measured on a simple random sample and supplemental samples selected from 2 tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use the inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcome and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
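
    Under the mean-model-only assumptions described, the inverse-probability-weighted estimating equation for a linear secondary model reduces to weighted least squares with weights 1/pi on the selected subjects; this sketch uses hypothetical data and a simplified ODS selection rule (the paper's augmented estimator is not shown).

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical ODS design: exposure X measured on an SRS plus supplemental
# draws from the tails of the primary outcome Y1; Y2 is the secondary outcome.
N = 10000
X = rng.normal(size=N)
Y1 = 1.0 + 0.8 * X + rng.normal(size=N)
Y2 = 0.5 + 0.4 * X + rng.normal(size=N)

# Selection: SRS with probability 0.05, plus certain selection in the tails.
q_lo, q_hi = np.quantile(Y1, [0.05, 0.95])
pi = np.where((Y1 < q_lo) | (Y1 > q_hi), 1.0, 0.05)
R = rng.random(N) < pi

# IPW estimating equation for E[Y2|X] = b0 + b1*X: weighted least squares.
w = 1.0 / pi[R]
Xmat = np.column_stack([np.ones(R.sum()), X[R]])
beta = np.linalg.solve(Xmat.T @ (w[:, None] * Xmat), Xmat.T @ (w * Y2[R]))
print(f"IPW estimates: b0 = {beta[0]:.3f}, b1 = {beta[1]:.3f}  (truth 0.5, 0.4)")
```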

  11. MID-ATLANTIC COASTAL STREAMS STUDY: STATISTICAL DESIGN FOR REGIONAL ASSESSMENT AND LANDSCAPE MODEL DEVELOPMENT

    EPA Science Inventory

    A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of a collaborative study between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal weighting was u...

  12. Employee Engagement and Performance of Lecturers in Nigerian Tertiary Institutions

    ERIC Educational Resources Information Center

    Agbionu, Uchenna Clementina; Anyalor, Maureen; Nwali, Anthony Chukwuma

    2018-01-01

    The study investigated employee engagement and performance of lecturers in Nigerian tertiary institutions. It employed descriptive and correlational research designs. Stratified random sampling was used to select three tertiary institutions in Nigeria, and a sample size of 314 lecturers was determined using the Taro Yamane formula. Questionnaires were…

  13. MID-ATLANTIC COASTAL STREAMS STUDY: STATISTICAL DESIGN FOR REGIONAL ASSESSMENT AND LANDSCAPE MODEL DEVELOPMENT

    EPA Science Inventory

    A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) as part of collaborative research between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal wei...

  14. Resource Utilisation and Curriculum Implementation in Community Colleges in Kenya

    ERIC Educational Resources Information Center

    Kigwilu, Peter Changilwa; Akala, Winston Jumba

    2017-01-01

    The study investigated how Catholic-sponsored community colleges in Nairobi utilise the existing physical facilities and teaching and learning resources for effective implementation of Artisan and Craft curricula. The study adopted a mixed methods research design. Proportional stratified random sampling was used to sample 172 students and 18…

  15. A Retrospective Look at Website Accessibility over Time

    ERIC Educational Resources Information Center

    Hackett, Stephanie; Parmanto, Bambang; Zeng, Xiaoming

    2005-01-01

    Websites were retrospectively analysed to study the effects that technological advances in web design have had on accessibility for persons with disabilities. A random sample of general websites and a convenience sample of US government websites were studied and compared for the years 1997-2002. Web accessibility barrier (WAB) and complexity…

  16. Women Secondary Principals in Texas 1998 and 2011: Movement toward Equity

    ERIC Educational Resources Information Center

    Marczynski, Jean C.; Gates, Gordon S.

    2013-01-01

    Purpose: The purpose of this paper is to analyze data gathered in 1998 and 2011 from representative samples of women secondary school principals in Texas to identify differences in personal, professional, leadership, and school characteristics. Design/methodology/approach: Two proportionate, random samples were drawn of women secondary principals…

  17. Cluster Randomized Test-Negative Design (CR-TND) Trials: A Novel and Efficient Method to Assess the Efficacy of Community Level Dengue Interventions.

    PubMed

    Anders, Katherine L; Cutcher, Zoe; Kleinschmidt, Immo; Donnelly, Christl A; Ferguson, Neil M; Indriani, Citra; O'Neill, Scott L; Jewell, Nicholas P; Simmons, Cameron P

    2018-05-07

    Cluster randomized trials are the gold standard for assessing efficacy of community-level interventions, such as vector control strategies against dengue. We describe a novel cluster randomized trial methodology with a test-negative design, which offers advantages over traditional approaches. It utilizes outcome-based sampling of patients presenting with a syndrome consistent with the disease of interest, who are subsequently classified as test-positive cases or test-negative controls on the basis of diagnostic testing. We use simulations of a cluster trial to demonstrate validity of efficacy estimates under the test-negative approach. This demonstrates that, provided study arms are balanced for both test-negative and test-positive illness at baseline and that other test-negative design assumptions are met, the efficacy estimates closely match true efficacy. We also briefly discuss analytical considerations for an odds ratio-based effect estimate arising from clustered data, and outline potential approaches to analysis. We conclude that application of the test-negative design to certain cluster randomized trials could increase their efficiency and ease of implementation.

  18. Some design issues of strata-matched non-randomized studies with survival outcomes.

    PubMed

    Mazumdar, Madhu; Tu, Dongsheng; Zhou, Xi Kathy

    2006-12-15

    Non-randomized studies for the evaluation of a medical intervention are useful for quantitative hypothesis generation before the initiation of a randomized trial and also when randomized clinical trials are difficult to conduct. A strata-matched non-randomized design is often utilized where subjects treated by a test intervention are matched to a fixed number of subjects treated by a standard intervention within covariate based strata. In this paper, we consider the issue of sample size calculation for this design. Based on the asymptotic formula for the power of a stratified log-rank test, we derive a formula to calculate the minimum number of subjects in the test intervention group that is required to detect a given relative risk between the test and standard interventions. When this minimum number of subjects in the test intervention group is available, an equation is also derived to find the multiple that determines the number of subjects in the standard intervention group within each stratum. The methodology developed is applied to two illustrative examples in gastric cancer and sarcoma.
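
    The unstratified version of this calculation rests on Schoenfeld's formula for the number of events required by a log-rank comparison; the sketch below implements that baseline formula only (the paper's stratified refinement and matching-multiple equation are not reproduced). The parameter names are illustrative.

        import math
        from scipy.stats import norm

        def required_events(log_hr, alpha=0.05, power=0.8, match_ratio=1.0):
            """Total events needed to detect exp(log_hr) with 1:match_ratio allocation."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            p1 = 1.0 / (1.0 + match_ratio)           # proportion on test intervention
            p2 = match_ratio / (1.0 + match_ratio)   # proportion on standard intervention
            return (z_a + z_b) ** 2 / (p1 * p2 * log_hr ** 2)

        print(round(required_events(math.log(1.5))))                  # 1:1 allocation
        print(round(required_events(math.log(1.5), match_ratio=3)))   # 1:3 matching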

  19. Application of random effects to the study of resource selection by animals

    USGS Publications Warehouse

    Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.

    2006-01-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence.2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability.3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed.4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects.5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection.6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  20. Application of random effects to the study of resource selection by animals.

    PubMed

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.

  1. Course Shopping in Urban Community Colleges: An Analysis of Student Drop and Add Activities

    ERIC Educational Resources Information Center

    Hagedorn, Linda Serra; Maxwell, William E.; Cypers, Scott; Moon, Hye Sun; Lester, Jaime

    2007-01-01

    This study examined the course shopping behaviors among a sample of approximately 5,000 community college students enrolled across nine campuses of a large urban district. The sample was purposely designed as an analytic, rather than a random, sample that sought to obtain adequate numbers of students in course areas that were of theoretical and of…

  2. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  3. Training in Japan: The Use of Instructional Systems Design.

    ERIC Educational Resources Information Center

    Taguchi, Mina; Keller, John M.

    This study investigated the kinds of training conducted in Japanese companies and the degree to which instructional systems design (ISD) is implemented. A random sample of 12 Japanese companies in the banking, automobile manufacturing, electrical machinery, wholesale stores, insurance and securities, and transportation industries were surveyed; a…

  4. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    PubMed

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association between antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe; therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period when the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling procedure was reproducible, with results comparable to the collected sample. However, the sampling procedure favoured sampling of large farms. Furthermore, both under-sampled and over-sampled areas were found using scan statistics. In conclusion, sampling conducted at abattoirs can provide a spatially representative sample; hence it is a possible cost-effective alternative to simple random sampling. However, it is important to assess the properties of the resulting sample so that any potential selection bias can be addressed when reporting the findings. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Social network recruitment for Yo Puedo: an innovative sexual health intervention in an underserved urban neighborhood—sample and design implications.

    PubMed

    Minnis, Alexandra M; vanDommelen-Gonzalez, Evan; Luecke, Ellen; Cheng, Helen; Dow, William; Bautista-Arredondo, Sergio; Padian, Nancy S

    2015-02-01

    Most existing evidence-based sexual health interventions focus on individual-level behavior, even though there is substantial evidence that highlights the influential role of social environments in shaping adolescents' behaviors and reproductive health outcomes. We developed Yo Puedo, a combined conditional cash transfer and life skills intervention for youth to promote educational attainment, job training, and reproductive health wellness that we then evaluated for feasibility among 162 youth aged 16-21 years in a predominantly Latino community in San Francisco, CA. The intervention targeted youth's social networks and involved recruitment and randomization of small social network clusters. In this paper we describe the design of the feasibility study and report participants' baseline characteristics. Furthermore, we examined the sample and design implications of recruiting social network clusters as the unit of randomization. Baseline data provide evidence that we successfully enrolled high risk youth using a social network recruitment approach in community and school-based settings. Nearly all participants (95%) were high risk for adverse educational and reproductive health outcomes based on multiple measures of low socioeconomic status (81%) and/or reported high risk behaviors (e.g., gang affiliation, past pregnancy, recent unprotected sex, frequent substance use; 62%). We achieved variability in the study sample through heterogeneity in recruitment of the index participants, whereas the individuals within the small social networks of close friends demonstrated substantial homogeneity across sociodemographic and risk profile characteristics. Social networks recruitment was feasible and yielded a sample of high risk youth willing to enroll in a randomized study to evaluate a novel sexual health intervention.

  6. Estimates of Intraclass Correlation Coefficients from Longitudinal Group-Randomized Trials of Adolescent HIV/STI/Pregnancy Prevention Programs

    ERIC Educational Resources Information Center

    Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.

    2015-01-01

    Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…

  7. Random sampling of elementary flux modes in large-scale metabolic networks.

    PubMed

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  8. Statistical design and analysis of environmental studies for plutonium and other transuranics at NAEG "safety-shot" sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Eberhardt, L.L.; Fowler, E.B.

    This paper is centered around the use of stratified random sampling for estimating the total amount (inventory) of ²³⁹⁻²⁴⁰Pu and uranium in surface soil at ten "safety-shot" sites on the Nevada Test Site (NTS) and Tonopah Test Range (TTR) that are currently being studied by the Nevada Applied Ecology Group (NAEG). The use of stratified random sampling has resulted in estimates of inventory at these desert study sites that have smaller standard errors than would have been the case had simple random sampling (no stratification) been used. Estimates of inventory are given for ²³⁵U, ²³⁸U, and ²³⁹⁻²⁴⁰Pu in soil at A Site of Area 11 on the NTS. Other results presented include average concentrations of one or more of these isotopes in soil and vegetation and in soil profile samples at depths to 25 cm. The regression relationship between soil and vegetation concentrations of ²³⁵U and ²³⁸U at adjacent sampling locations is also examined using three different models. The applicability of stratified random sampling to the estimation of concentration contours of ²³⁹⁻²⁴⁰Pu in surface soil using computer algorithms is also investigated. Estimates of such contours are obtained using several different methods. The planning of field sampling plans for estimating inventory and distribution is discussed. (auth)
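
    The classical estimator behind this kind of inventory calculation is easy to state: the stratified total is the sum of stratum means scaled by stratum sizes, with a finite-population-corrected variance. The sketch below is illustrative only; the stratum sizes and concentrations are invented, not NAEG data.

        import numpy as np

        # N[h]: number of sampling units in stratum h; samples[h]: measured
        # concentrations in the units actually sampled from stratum h.
        N = np.array([200, 500, 1000])
        samples = [np.array([8.1, 7.4, 9.0, 8.6]),
                   np.array([2.2, 1.9, 2.8, 2.5, 2.1]),
                   np.array([0.4, 0.6, 0.3, 0.5, 0.4, 0.7])]

        # Stratified total and its standard error (with finite population correction)
        total = sum(Nh * s.mean() for Nh, s in zip(N, samples))
        var = sum(Nh ** 2 * (1 - s.size / Nh) * s.var(ddof=1) / s.size
                  for Nh, s in zip(N, samples))
        print(f"inventory estimate = {total:.1f}, SE = {np.sqrt(var):.1f}")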

  9. The Neighbourhood Effects on Health and Well-being (NEHW) study.

    PubMed

    O'Campo, Patricia; Wheaton, Blair; Nisenbaum, Rosane; Glazier, Richard H; Dunn, James R; Chambers, Catharine

    2015-01-01

    Many cross-sectional studies of neighbourhood effects on health do not employ strong study design elements. The Neighbourhood Effects on Health and Well-being (NEHW) study, a random sample of 2412 English-speaking Toronto residents (age 25-64), utilises strong design features for sampling neighbourhoods and individuals, characterising neighbourhoods using a variety of data sources, measuring a wide range of health outcomes, and analysing cross-level interactions. We describe here methodological issues that shaped the design and analysis features of the NEHW study to ensure that, while a cross-sectional sample, it will advance the quality of evidence emerging from observational studies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. Latent spatial models and sampling design for landscape genetics

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  11. Design, objectives, execution and reporting of published open-label extension studies.

    PubMed

    Bowers, Megan; Pickering, Ruth M; Weatherall, Mark

    2012-04-01

    Open-label extension (OLE) studies following blinded randomized controlled trials (RCTs) of pharmaceuticals are increasingly being carried out, but they do not conform to regulatory standards, and questions surround the validity of their evidence. OLE studies are usually discussed as a homogeneous group, yet studies with substantial differences in design still meet the definition of an OLE. We describe published papers reporting OLE studies, focussing on stated objectives, design, conduct and reporting. A search of Embase and Medline databases for 1996 to July 2008 revealed 268 papers reporting OLE studies that met our eligibility criteria. A random sample of 50 was selected for detailed review. Over 80% of the studies had efficacy stated as an objective. The most common methods of allocation at the start of the OLE were for all RCT participants to switch to one active treatment or for only participants on the new drug to continue, but in three studies all participants were re-randomized at the start of the OLE. Eligibility criteria and other selection factors resulted in an average of 74% of participants in the preceding RCT(s) enrolling in the OLE, and only 57% completed it. Published OLE studies do not form a homogeneous group with respect to design or retention of participants, and thus the validity of evidence from an OLE should be judged on an individual basis. The term 'open label' suggests bias through lack of blinding, but slippage in relation to the sample randomized in the preceding RCT may be the more important threat to validity. © 2010 Blackwell Publishing Ltd.

  12. Examining Work and Family Conflict among Female Bankers in Accra Metropolis, Ghana

    ERIC Educational Resources Information Center

    Kissi-Abrokwah, Bernard; Andoh-Robertson, Theophilus; Tutu-Danquah, Cecilia; Agbesi, Catherine Selorm

    2015-01-01

    This study investigated the effects of, and solutions to, work-family conflict among female bankers in the Accra Metropolis. Using a triangulatory mixed-method design, a structured questionnaire was administered to a random sample of 300 female bankers, and a further 15 female bankers, selected through convenience sampling, were interviewed. The…

  13. The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models

    ERIC Educational Resources Information Center

    Schoeneberger, Jason A.

    2016-01-01

    The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…

  14. Perceptions of Preservice Teachers regarding the Integration of Information and Communication Technologies in Turkish Education Faculties

    ERIC Educational Resources Information Center

    Akbulut, Yavuz; Odabasi, H. Ferhan; Kuzu, Abdullah

    2011-01-01

    This study explored the views of pre-service teachers regarding the indicators of information and communication technologies (ICT) at Turkish education faculties. A cross-sectional survey design was implemented with graduating students enrolled in Turkish education faculties. A combination of stratified random sampling and systematic sampling was…

  15. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
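
    The core idea of importance sampling for failure probabilities can be sketched in a few lines: draw from a density concentrated near the failure region and reweight each draw by f(x)/h(x). The example below is a conceptual, non-adaptive sketch, not the paper's AIS scheme; the threshold and proposal are arbitrary choices.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        threshold = 3.5     # failure when a standard normal response exceeds this
        n = 20000

        x = rng.normal(threshold, 1.0, n)                         # proposal h, centred on the boundary
        w = stats.norm.pdf(x) / stats.norm.pdf(x, loc=threshold)  # weights f(x) / h(x)
        p_fail = np.mean((x > threshold) * w)

        print(f"IS estimate {p_fail:.2e} vs exact {stats.norm.sf(threshold):.2e}")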

  16. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices.

    PubMed

    Tuffaha, Haitham W; Reynolds, Heather; Gordon, Louisa G; Rickard, Claire M; Scuffham, Paul A

    2014-12-01

    Value of information analysis has been proposed as an alternative to the standard hypothesis testing approach, which is based on type I and type II errors, in determining sample sizes for randomized clinical trials. However, in addition to sample size calculation, value of information analysis can optimize other aspects of research design such as possible comparator arms and alternative follow-up times, by considering trial designs that maximize the expected net benefit of research, which is the difference between the expected cost of the trial and the expected value of additional information. To apply value of information methods to the results of a pilot study on catheter securement devices to determine the optimal design of a future larger clinical trial. An economic evaluation was performed using data from a multi-arm randomized controlled pilot study comparing the efficacy of four types of catheter securement devices: standard polyurethane, tissue adhesive, bordered polyurethane and sutureless securement device. Probabilistic Monte Carlo simulation was used to characterize uncertainty surrounding the study results and to calculate the expected value of additional information. To guide the optimal future trial design, the expected costs and benefits of the alternative trial designs were estimated and compared. Analysis of the value of further information indicated that a randomized controlled trial on catheter securement devices is potentially worthwhile. Among the possible designs for the future trial, a four-arm study with 220 patients/arm would provide the highest expected net benefit corresponding to 130% return-on-investment. The initially considered design of 388 patients/arm, based on hypothesis testing calculations, would provide lower net benefit with return-on-investment of 79%. Cost-effectiveness and value of information analyses were based on the data from a single pilot trial which might affect the accuracy of our uncertainty estimation. Another limitation was that different follow-up durations for the larger trial were not evaluated. The value of information approach allows efficient trial design by maximizing the expected net benefit of additional research. This approach should be considered early in the design of randomized clinical trials. © The Author(s) 2014.

  17. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
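
    The method-of-moments estimator at issue is Matheron's classical empirical variogram, which averages squared increments within distance bins. The sketch below is a minimal illustration on simulated, skewed stand-in data; it is not the study's simulation machinery, and the plot size, sample size and bins are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        coords = rng.uniform(0, 50, size=(150, 2))   # sampling locations on a 50 m plot
        values = rng.gamma(2.0, 1.0, size=150)       # skewed, non-Gaussian "throughfall"

        def empirical_variogram(coords, values, bins):
            """Matheron estimator: mean of 0.5 * (z_i - z_j)^2 per distance bin."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)   # each pair counted once
            d, sq = d[iu], sq[iu]
            return np.array([sq[(d >= lo) & (d < hi)].mean()
                             for lo, hi in zip(bins[:-1], bins[1:])])

        print(empirical_variogram(coords, values, np.linspace(0, 25, 6)))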

  18. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.

  19. Linking Teacher Competences to Organizational Citizenship Behaviour: The Role of Empowerment

    ERIC Educational Resources Information Center

    Kasekende, Francis; Munene, John C.; Otengei, Samson Omuudu; Ntayi, Joseph Mpeera

    2016-01-01

    Purpose: The purpose of this paper is to examine relationship between teacher competences and organizational citizenship behavior (OCB) with empowerment as a mediating factor. Design/methodology/approach: The study took a cross-sectional descriptive and analytical design. Using cluster and random sampling procedures, data were obtained from 383…

  20. The Power of the Test for Treatment Effects in Three-Level Block Randomized Designs

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2008-01-01

    Experiments that involve nested structures may assign treatment conditions either to subgroups (such as classrooms) or individuals within subgroups (such as students). The design of such experiments requires knowledge of the intraclass correlation structure to compute the sample sizes necessary to achieve adequate power to detect the treatment…

  1. Student Attitudes toward Bibliographic Instruction.

    ERIC Educational Resources Information Center

    Damko, Ellen E.

    This study was designed to determine what value, if any, college students place upon library use instruction. A survey conducted on a random sample of college and university students working at Cedar Point Amusement Park in Sandusky, Ohio, during the summer of 1990 was designed to determine the type and amount of library instruction each student…

  2. Designing with fiber-reinforced plastics (planar random composites)

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1982-01-01

    The use of composite mechanics to predict the hygrothermomechanical behavior of planar random composites (PRC) is reviewed and described. These composites are usually made from chopped fiber reinforced resins (thermoplastics or thermosets). The hygrothermomechanical behavior includes mechanical properties, physical properties, thermal properties, fracture toughness, creep and creep rupture. Properties are presented in graphical form with sample calculations to illustrate their use. Concepts such as directional reinforcement and strip hybrids are described. Typical data that can be used for preliminary design for various PRCs are included. Several resins and molding compounds used to make PRCs are described briefly. Pertinent references are cited that cover analysis and design methods, materials, data, fabrication procedures and applications.

  3. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  4. Item Randomized-Response Models for Measuring Noncompliance: Risk-Return Perceptions, Social Influences, and Self-Protective Responses

    ERIC Educational Resources Information Center

    Bockenholt, Ulf; Van Der Heijden, Peter G. M.

    2007-01-01

    Randomized response (RR) is a well-known method for measuring sensitive behavior. Yet this method is not often applied because: (i) of its lower efficiency and the resulting need for larger sample sizes which make applications of RR costly; (ii) despite its privacy-protection mechanism the RR design may not be followed by every respondent; and…

  5. Randomized Controlled Trial to Increase Physical Activity among Insufficiently Active Women Following Their Participation in a Mass Event

    ERIC Educational Resources Information Center

    Lane, Aoife; Murphy, Niamh; Bauman, Adrian; Chey, Tien

    2010-01-01

    Objective: To assess the impact of a community-based, low-contact intervention on the physical activity habits of insufficiently active women. Design: Randomized controlled trial. Participants: Inactive Irish women. Method: A population sample of women participating in a mass 10 km event were followed up at 2 and 6 months, and those who had…

  6. 78 FR 42079 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-15

    ... a three-year approval to continue the ERSG project. The ongoing evaluation employs a quasi-experimental/non-randomized design in which a convenience sample of participants in schools receiving universal...

  7. Income Transfers and Maternal Health: Evidence from a National Randomized Social Cash Transfer Program in Zambia.

    PubMed

    Handa, Sudhanshu; Peterman, Amber; Seidenfeld, David; Tembo, Gelson

    2016-02-01

    There is promising recent evidence that poverty-targeted social cash transfers have potential to improve maternal health outcomes; however, questions remain surrounding design features responsible for impacts. In addition, virtually no evidence exists from the African region. This study explores the impact of Zambia's Child Grant Program on a range of maternal health utilization outcomes using a randomized design and difference-in-differences multivariate regression from data collected over 24 months from 2010 to 2012. Results indicate that while there are no measurable program impacts among the main sample, there are heterogeneous impacts on skilled attendance at birth among a sample of women residing in households having better access to maternal health services. The latter result is particularly interesting because of the overall low level of health care availability in program areas suggesting that dedicated program design or matching supply-side interventions may be necessary to leverage unconditional cash transfers in similar settings to impact maternal health. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Income transfers and maternal health: Evidence from a national randomized social cash transfer program in Zambia

    PubMed Central

    Handa, Sudhanshu; Peterman, Amber; Seidenfeld, David; Tembo, Gelson

    2017-01-01

    There is promising recent evidence that poverty-targeted social cash transfers have potential to improve maternal health outcomes; however, questions remain surrounding design features responsible for impacts. In addition, virtually no evidence exists from the African region. This study explores the impact of Zambia’s Child Grant Program on a range of maternal health utilization outcomes using a randomized design and difference-in-differences multivariate regression from data collected over 24 months from 2010 to 2012. Results indicate that while there are no measurable program impacts among the main sample, there are heterogeneous impacts on skilled attendance at birth among a sample of women residing in households having better access to maternal health services. The latter result is particularly interesting because of the overall low level of healthcare availability in program areas, suggesting that dedicated program design or matching supply-side interventions may be necessary to leverage unconditional cash transfers in similar settings to impact maternal health. PMID:25581062
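
    In its simplest form, the difference-in-differences estimator used here reduces to the coefficient on a treatment-by-period interaction. The sketch below shows that regression on simulated data; the variable names, effect sizes and linear-probability setup are hypothetical stand-ins, not the Zambian program data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 2000
        df = pd.DataFrame({
            "treated": rng.integers(0, 2, n),   # randomized to the cash transfer
            "post": rng.integers(0, 2, n),      # 24-month follow-up wave
        })
        # Outcome: skilled attendance at birth, with a true impact of +8 points
        p = 0.40 + 0.05 * df.post + 0.08 * df.treated * df.post
        df["skilled_birth"] = (rng.random(n) < p).astype(float)

        m = smf.ols("skilled_birth ~ treated * post", data=df).fit()
        print(m.params["treated:post"])         # difference-in-differences estimate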

  9. The Danish National Health Survey 2010. Study design and respondent characteristics.

    PubMed

    Christensen, Anne Illemann; Ekholm, Ola; Glümer, Charlotte; Andreasen, Anne Helms; Hvidberg, Michael Falk; Kristensen, Peter Lund; Larsen, Finn Breinholt; Ortiz, Britta; Juel, Knud

    2012-06-01

    In 2010 the five Danish regions and the National Institute of Public Health at the University of Southern Denmark conducted a national representative health survey among the adult population in Denmark. This paper describes the study design and the sample and study population as well as the content of the questionnaire. The survey was based on five regional stratified random samples and one national random sample. The samples were mutually exclusive. A total of 298,550 individuals (16 years or older) were invited to participate. Information was collected using a mixed mode approach (paper and web questionnaires). A questionnaire with a minimum of 52 core questions was used in all six subsamples. Calibrated weights were computed in order to take account of the complex survey design and reduce non-response bias. In all, 177,639 individuals completed the questionnaire (59.5%). The response rate varied from 52.3% in the Capital Region of Denmark sample to 65.5% in the North Denmark Region sample. The response rate was particularly low among young men, unmarried people and among individuals with a different ethnic background than Danish. The survey was a result of extensive national cooperation across sectors, which makes it unique in its field of application, e.g. health surveillance, planning and prioritizing public health initiatives and research. However, the low response rate in some subgroups of the study population can pose problems in generalizing data, and efforts to increase the response rate will be important in the forthcoming surveys.

  10. Using Bayesian Adaptive Trial Designs for Comparative Effectiveness Research: A Virtual Trial Execution.

    PubMed

    Luce, Bryan R; Connor, Jason T; Broglio, Kristine R; Mullins, C Daniel; Ishak, K Jack; Saunders, Elijah; Davis, Barry R

    2016-09-20

    Bayesian and adaptive clinical trial designs offer the potential for more efficient processes that result in lower sample sizes and shorter trial durations than traditional designs. To explore the use and potential benefits of Bayesian adaptive clinical trial designs in comparative effectiveness research. Virtual execution of ALLHAT (Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial) as if it had been done according to a Bayesian adaptive trial design. Comparative effectiveness trial of antihypertensive medications. Patient data sampled from the more than 42 000 patients enrolled in ALLHAT with publicly available data. Number of patients randomly assigned between groups, trial duration, observed numbers of events, and overall trial results and conclusions. The Bayesian adaptive approach and original design yielded similar overall trial conclusions. The Bayesian adaptive trial randomly assigned more patients to the better-performing group and would probably have ended slightly earlier. This virtual trial execution required limited resampling of ALLHAT patients for inclusion in RE-ADAPT (REsearch in ADAptive methods for Pragmatic Trials). Involvement of a data monitoring committee and other trial logistics were not considered. In a comparative effectiveness research trial, Bayesian adaptive trial designs are a feasible approach and potentially generate earlier results and allocate more patients to better-performing groups. National Heart, Lung, and Blood Institute.
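
    One standard way to obtain the "allocate more patients to the better-performing group" behaviour is Thompson-sampling-style response-adaptive randomization with Beta-Binomial updating. The sketch below illustrates that generic idea only; it is not the RE-ADAPT algorithm, and the event rates and patient count are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        true_rates = [0.30, 0.22]      # event (bad outcome) rates; lower is better
        events = np.zeros(2)
        n_arm = np.zeros(2)

        for patient in range(500):
            # One posterior draw per arm: Beta(1 + events, 1 + non-events)
            draws = [rng.beta(1 + events[a], 1 + n_arm[a] - events[a]) for a in (0, 1)]
            arm = int(np.argmin(draws))          # favour the arm with the lower drawn rate
            n_arm[arm] += 1
            events[arm] += rng.random() < true_rates[arm]

        print("allocation:", n_arm, "observed event rates:", events / n_arm)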

  11. Demonstration of the Attributes of Multi-increment Sampling and Proper Sample Processing Protocols for the Characterization of Metals on DoD Facilities

    DTIC Science & Technology

    2013-06-01

    [Abstract not available in this record. The indexed fragments are report front matter: site geology (lenses of unconsolidated sand and rounded river gravel overlain by as much as 5 m of silt; gravel mostly quartz and metamorphic rock) and a list of figures, including an example of multi-increment sampling using a systematic-random sampling design for collecting two separate samples, the small-arms firing range berms at Fort Wainwright, and the locations of berms sampled using ISM and grab sampling.]

  12. Analysis of Clinical Cohort Data Using Nested Case-control and Case-cohort Sampling Designs. A Powerful and Economical Tool.

    PubMed

    Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M

    2015-01-01

    Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox' proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort and the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons, for whom covariate information is required. The formula for the power of a nested case-control design and the power of a case-cohort design is directly connected to the power of a cohort study using the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients using a sampling design instead of analyzing the full cohort.
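
    The nested case-control design itself is mechanically simple: for each case, a fixed number of controls is drawn at random from the subjects still at risk at the case's event time, and covariates are measured only for the sampled subjects. The sketch below illustrates that sampling step on simulated follow-up data; the variable names and sizes are arbitrary.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 1000
        time = rng.exponential(10.0, n)      # follow-up times
        event = rng.random(n) < 0.1          # ~10% experience the event

        def nested_case_control(time, event, m=2):
            """For each case, draw m controls from its risk set (still under follow-up)."""
            n = time.size
            sets = []
            for i in np.flatnonzero(event):
                at_risk = np.flatnonzero((time >= time[i]) & (np.arange(n) != i))
                controls = rng.choice(at_risk, size=min(m, at_risk.size), replace=False)
                sets.append((i, controls))
            return sets

        sampled = nested_case_control(time, event)
        needed = {int(i) for i, _ in sampled} | {int(j) for _, c in sampled for j in c}
        print(f"{len(sampled)} risk sets; covariates needed for {len(needed)} subjects")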

  13. Seven common mistakes in population genetics and how to avoid them.

    PubMed

    Meirmans, Patrick G

    2015-07-01

    As the data resulting from modern genotyping tools are astoundingly complex, genotyping studies require great care in the sampling design, genotyping, data analysis and interpretation. Such care is necessary because, with data sets containing thousands of loci, small biases can easily become strongly significant patterns. Such biases may already be present in routine tasks that are present in almost every genotyping study. Here, I discuss seven common mistakes that are frequently encountered in the genotyping literature: (i) giving more attention to genotyping than to sampling, (ii) failing to perform or report experimental randomization in the laboratory, (iii) equating geopolitical borders with biological borders, (iv) testing the significance of clustering output, (v) misinterpreting Mantel's r statistic, (vi) interpreting only a single value of k and (vii) forgetting that only a small portion of the genome will be associated with climate. For each of these issues, I give some suggestions on how to avoid the mistake. Overall, I argue that genotyping studies would benefit from establishing a more rigorous experimental design, involving proper sampling design, randomization and a clearer distinction between a priori hypotheses and exploratory analyses. © 2015 John Wiley & Sons Ltd.

  14. Enhancing local health department disaster response capacity with rapid community needs assessments: validation of a computerized program for binary attribute cluster sampling.

    PubMed

    Groenewold, Matthew R

    2006-01-01

    Local health departments are among the first agencies to respond to disasters or other mass emergencies. However, they often lack the ability to handle large-scale events. Plans including locally developed and deployed tools may enhance local response. Simplified cluster sampling methods can be useful in assessing community needs after a sudden-onset, short duration event. Using an adaptation of the methodology used by the World Health Organization Expanded Programme on Immunization (EPI), a Microsoft Access-based application for two-stage cluster sampling of residential addresses in Louisville/Jefferson County Metro, Kentucky was developed. The sampling frame was derived from geographically referenced data on residential addresses and political districts available through the Louisville/Jefferson County Information Consortium (LOJIC). The program randomly selected 30 clusters, defined as election precincts, from within the area of interest, and then, randomly selected 10 residential addresses from each cluster. The program, called the Rapid Assessment Tools Package (RATP), was tested in terms of accuracy and precision using data on a dichotomous characteristic of residential addresses available from the local tax assessor database. A series of 30 samples were produced and analyzed with respect to their precision and accuracy in estimating the prevalence of the study attribute. Point estimates with 95% confidence intervals were calculated by determining the proportion of the study attribute values in each of the samples and compared with the population proportion. To estimate the design effect, corresponding simple random samples of 300 addresses were taken after each of the 30 cluster samples. The sample proportion fell within +/-10 absolute percentage points of the true proportion in 80% of the samples. In 93.3% of the samples, the point estimate fell within +/-12.5%, and 96.7% fell within +/-15%. All of the point estimates fell within +/-20% of the true proportion. Estimates of the design effect ranged from 0.926 to 1.436 (mean = 1.157, median = 1.170) for the 30 samples. Although prospective evaluation of its performance in field trials or a real emergency is required to confirm its utility, this study suggests that the RATP, a locally designed and deployed tool, may provide population-based estimates of community needs or the extent of event-related consequences that are precise enough to serve as the basis for the initial post-event decisions regarding relief efforts.
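
    The EPI-style two-stage scheme the RATP automates (30 clusters, then 10 addresses per cluster) and the resulting design effect can be mimicked in a few lines. The sketch below runs that scheme on a simulated address frame; the numbers of precincts, addresses and the prevalences are invented, not Louisville data.

        import numpy as np

        rng = np.random.default_rng(5)
        n_clusters, m = 200, 400                    # precincts, addresses per precinct
        cluster_p = rng.beta(2, 8, n_clusters)      # attribute prevalence varies by precinct
        frame = rng.random((n_clusters, m)) < cluster_p[:, None]

        chosen = rng.choice(n_clusters, size=30, replace=False)
        sample = np.array([rng.choice(frame[c], size=10, replace=False) for c in chosen])

        p_hat = sample.mean()
        var_cluster = sample.mean(axis=1).var(ddof=1) / 30   # clusters as sampling units
        var_srs = p_hat * (1 - p_hat) / sample.size          # same n, simple random sample
        print(f"p = {p_hat:.3f}, design effect ~ {var_cluster / var_srs:.2f}")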

  15. Binomial leap methods for simulating stochastic chemical kinetics.

    PubMed

    Tian, Tianhai; Burrage, Kevin

    2004-12-01

    This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches. (c) 2004 American Institute of Physics.
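
    The key property described above is easy to see in code: a binomial draw for the number of firings is bounded by the molecules available, whereas a Poisson draw is not. The sketch below applies one binomial leap step repeatedly to a single decay channel; it is a minimal illustration, not the paper's multi-channel sampling technique.

        import numpy as np

        rng = np.random.default_rng(2)

        def binomial_leap_step(n_mol, rate_const, tau):
            """One leap for the channel S -> 0 with propensity a = rate_const * S."""
            if n_mol == 0:
                return 0
            a = rate_const * n_mol             # propensity of the channel
            p = min(1.0, a * tau / n_mol)      # per-molecule firing probability
            k = rng.binomial(n_mol, p)         # bounded by n_mol: no negative counts
            return n_mol - k

        state, t, tau = 1000, 0.0, 0.05
        while t < 2.0:
            state = binomial_leap_step(state, rate_const=1.0, tau=tau)
            t += tau
        print("molecules remaining:", state)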

  16. A pilot cluster randomized controlled trial of structured goal-setting following stroke.

    PubMed

    Taylor, William J; Brown, Melanie; William, Levack; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark

    2012-04-01

    To determine the feasibility, the cluster design effect, and the variance and minimal clinically important difference of the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial in inpatient rehabilitation facilities. Participants were people admitted to inpatient rehabilitation following stroke who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. The intervention was structured goal elicitation using the Canadian Occupational Performance Measure. Outcomes were quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36, and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and high variance of the SEIQOL-DW (SD 19.6) in relation to the minimally important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.
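
    For context, the "impractically large" conclusion follows directly from the usual design-effect inflation. The sketch below plugs the pilot's own numbers into the standard two-arm formula with the design effect 1 + (m - 1)·ICC; the cluster size of 10 and the 80% power target are assumptions for illustration.

        from math import ceil
        from statistics import NormalDist

        def cluster_rct_n_per_arm(delta, sd, icc, m, alpha=0.05, power=0.80):
            """n per arm for an individually randomized two-arm trial,
            inflated by the design effect 1 + (m - 1) * ICC."""
            z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
            n_individual = 2 * (z * sd / delta) ** 2
            return ceil(n_individual * (1 + (m - 1) * icc))

        # SD 19.6, minimally important difference 2.1, ICC 0.40 as reported:
        print(cluster_rct_n_per_arm(delta=2.1, sd=19.6, icc=0.40, m=10))  # ~6300 per arm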

  17. Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa

    2018-01-01

    A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines the advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, thereby purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
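
    A toy rendering of the randomized-column-subspace step only: sample pixel columns at random, build a low-rank background basis from the sketch, and score every pixel by its energy outside that subspace. The RPCA purification and structured row projections of RSRPCA are omitted, and all sizes below are invented.

        import numpy as np

        def rcs_anomaly_scores(X, n_cols=200, rank=5, seed=0):
            """X is bands x pixels. Sketch the background from randomly
            sampled columns, then score pixels by the norm of their
            projection onto the complement of the background subspace."""
            rng = np.random.default_rng(seed)
            cols = rng.choice(X.shape[1], size=n_cols, replace=False)
            U, _, _ = np.linalg.svd(X[:, cols], full_matrices=False)
            B = U[:, :rank]                       # approximate background basis
            resid = X - B @ (B.T @ X)             # energy outside the subspace
            return np.linalg.norm(resid, axis=0)  # one anomaly score per pixel

        X = np.random.default_rng(1).normal(size=(50, 10_000))  # synthetic cube
        print(rcs_anomaly_scores(X).shape)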

  18. Social network recruitment for Yo Puedo - an innovative sexual health intervention in an underserved urban neighborhood: sample and design implications

    PubMed Central

    Minnis, Alexandra M.; vanDommelen-Gonzalez, Evan; Luecke, Ellen; Cheng, Helen; Dow, William; Bautista-Arredondo, Sergio; Padian, Nancy S.

    2016-01-01

    Most existing evidence-based sexual health interventions focus on individual-level behavior, even though there is substantial evidence that highlights the influential role of social environments in shaping adolescents’ behaviors and reproductive health outcomes. We developed Yo Puedo, a combined conditional cash transfer (CCT) and life skills intervention for youth to promote educational attainment, job training, and reproductive health wellness, which we then evaluated for feasibility among 162 youth aged 16–21 years in a predominantly Latino community in San Francisco, CA. The intervention targeted youth’s social networks and involved recruitment and randomization of small social network clusters. In this paper we describe the design of the feasibility study and report participants’ baseline characteristics. Furthermore, we examined the sample and design implications of recruiting social network clusters as the unit of randomization. Baseline data provide evidence that we successfully enrolled high-risk youth using a social network recruitment approach in community and school-based settings. Nearly all participants (95%) were at high risk for adverse educational and reproductive health outcomes based on multiple measures of low socioeconomic status (81%) and/or reported high-risk behaviors (e.g., gang affiliation, past pregnancy, recent unprotected sex, frequent substance use) (62%). We achieved variability in the study sample through heterogeneity in recruitment of the index participants, whereas the individuals within the small social networks of close friends demonstrated substantial homogeneity across sociodemographic and risk profile characteristics. Social network recruitment was feasible and yielded a sample of high-risk youth willing to enroll in a randomized study to evaluate a novel sexual health intervention. PMID:25358834

  19. Does sampling using random digit dialling really cost more than sampling from telephone directories: Debunking the myths

    PubMed Central

    Yang, Baohui; Eyeson-Annan, Margo

    2006-01-01

    Background Computer assisted telephone interviewing (CATI) is widely used for health surveys. The advantages of CATI over face-to-face interviewing are timeliness and cost reduction to achieve the same sample size and geographical coverage. Two major CATI sampling procedures are used: sampling directly from the electronic white pages (EWP) telephone directory and list-assisted random digit dialling (LA-RDD) sampling. EWP sampling covers telephone numbers of households listed in the printed white pages. LA-RDD sampling has better coverage of households than EWP sampling but is considered to be more expensive because interviewers dial more out-of-scope numbers. Methods This study compared an EWP sample and a LA-RDD sample from the New South Wales Population Health Survey in 2003 on demographic profiles, health estimates, coefficients of variation in weights, design effects on estimates, and cost effectiveness, on the basis of achieving the same level of precision of estimates. Results The LA-RDD sample better represented the population than the EWP sample, with a coefficient of variation of weights of 1.03 for LA-RDD compared with 1.21 for EWP, and average design effects of 2.00 for LA-RDD compared with 2.38 for EWP. Also, a LA-RDD sample can save up to 14.2% in cost compared to an EWP sample to achieve the same precision for health estimates. Conclusion A LA-RDD sample better represents the population, which potentially leads to reduced bias in health estimates, and, rather than costing more than EWP, actually costs less. PMID:16504117
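
    The reported design effects line up roughly with Kish's approximation deff ≈ 1 + cv², which can be computed directly from the survey weights. The check below is a back-of-envelope comparison using the paper's CVs, not necessarily the estimator the authors used.

        def kish_design_effect(weights):
            """Kish's approximation deff = 1 + cv^2, where cv is the
            coefficient of variation of the survey weights."""
            n = len(weights)
            mean = sum(weights) / n
            var = sum((w - mean) ** 2 for w in weights) / n
            return 1 + var / mean ** 2

        # Plugging in the reported CVs of weights directly:
        for label, cv in [("LA-RDD", 1.03), ("EWP", 1.21)]:
            print(label, 1 + cv ** 2)  # 2.06 and 2.46 vs reported 2.00 and 2.38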

  20. A Bayesian Hierarchical Model for Large-Scale Educational Surveys: An Application to the National Assessment of Educational Progress. Research Report. ETS RR-04-38

    ERIC Educational Resources Information Center

    Johnson, Matthew S.; Jenkins, Frank

    2005-01-01

    Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…

  1. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    PubMed

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
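
    Under the paper's simplifying assumptions (100% specificity, conditional independence), and further assuming both tests are applied to the same subjects (the stratified second stage is ignored in this sketch), the moment equations solve in closed form:

        def mom_prevalence(n, n1, n2, n12):
            """With specificity 1 for both tests and conditional independence:
            E[n1/n] = s1*pi, E[n2/n] = s2*pi, E[n12/n] = s1*s2*pi,
            which solve for the prevalence pi and sensitivities s1, s2."""
            p1, p2, p12 = n1 / n, n2 / n, n12 / n
            prevalence = p1 * p2 / p12
            sens1, sens2 = p12 / p2, p12 / p1
            return prevalence, sens1, sens2

        # Illustrative counts: 1000 subjects, 80 positive on test 1,
        # 60 on test 2, 48 on both -> pi = 0.10, s1 = 0.8, s2 = 0.6.
        print(mom_prevalence(n=1000, n1=80, n2=60, n12=48))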

  2. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step rule for updating the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
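
    The importance-sampling half of such algorithms can be sketched as follows: sample from a standard normal shifted to a design-point guess and reweight by the density ratio. The limit state, shift, and sample size below are illustrative, and the paper's two-step design-point updating rule is not reproduced.

        import numpy as np

        def importance_sampling_pf(g, center, n=10_000, seed=0):
            """Estimate P(g(X) <= 0) for standard-normal X by sampling from a
            normal shifted to `center` and reweighting each sample by the
            density ratio phi(x) / phi(x - center)."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal((n, len(center))) + center
            log_w = -0.5 * np.sum(x**2, axis=1) + 0.5 * np.sum((x - center) ** 2, axis=1)
            return np.mean((g(x) <= 0) * np.exp(log_w))

        # Toy limit state: failure when x1 + x2 > 5, so the exact probability
        # is Phi(-5/sqrt(2)) ~ 2e-4; centering near the design point recovers it.
        g = lambda x: 5.0 - x[:, 0] - x[:, 1]
        print(importance_sampling_pf(g, center=np.array([2.5, 2.5])))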

  3. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  4. Predictor sort sampling and one-sided confidence bounds on quantiles

    Treesearch

    Steve Verrill; Victoria L. Herian; David W. Green

    2002-01-01

    Predictor sort experiments attempt to make use of the correlation between a predictor that can be measured prior to the start of an experiment and the response variable that we are investigating. Properly designed and analyzed, they can reduce necessary sample sizes, increase statistical power, and reduce the lengths of confidence intervals. However, if the non-random...

  5. Health Outcomes in Adolescence: Associations with Family, Friends and School Engagement

    ERIC Educational Resources Information Center

    Carter, Melissa; McGee, Rob; Taylor, Barry; Williams, Sheila

    2007-01-01

    Aim: To examine the associations between connectedness to family and friends, and school engagement, and selected health compromising and health promoting behaviours in a sample of New Zealand adolescents. Methods: A web-based survey was designed and administered to a random sample of 652 Year 11 students aged 16 years from all Dunedin (NZ) high…

  6. Dimensions of Oppositional Defiant Disorder in 3-Year-Old Preschoolers

    ERIC Educational Resources Information Center

    Ezpeleta, Lourdes; Granero, Roser; de la Osa, Nuria; Penelo, Eva; Domenech, Josep M.

    2012-01-01

    Background: To test the factor structure of oppositional defiant disorder (ODD) symptoms and to study the relationships between the proposed dimensions and external variables in a community sample of preschool children. Method: A sample of 1,341 3-year-old preschoolers was randomly selected and screened for a double-phase design. In total, 622…

  7. Technology Instructional Package Mediated Instruction and Senior Secondary School Students' Academic Performance in Biology Concepts

    ERIC Educational Resources Information Center

    Yaki, Akawo Angwal; Babagana, Mohammed

    2016-01-01

    The paper examined the effects of a Technological Instructional Package (TIP) on secondary school students' performance in biology. The study adopted a pre-test, post-test experimental control group design. The sample size of the study was 80 students from Minna metropolis, Niger state, Nigeria; the samples were randomly assigned into treatment…

  8. Composition, biomass and structure of mangroves within the Zambezi River Delta

    Treesearch

    Carl C. Trettin; Christina E. Stringer; Stan Zarnoch

    2015-01-01

    We used a stratified random sampling design to inventory the mangrove vegetation within the Zambezi River Delta, Mozambique, to provide a basis for estimating biomass pools. We used canopy height, derived from remote sensing data, to stratify the inventory area, and then applied a spatial decision support system to objectively allocate sample plots among five...

  9. Use of Nutritional Information in Canada: National Trends between 2004 and 2008

    ERIC Educational Resources Information Center

    Goodman, Samantha; Hammond, David; Pillo-Blocka, Francy; Glanville, Theresa; Jenkins, Richard

    2011-01-01

    Objective: To examine longitudinal trends in use of nutrition information among Canadians. Design: Population-based telephone and Internet surveys. Setting and Participants: Representative samples of Canadian adults recruited with random-digit dialing sampling in 2004 (n = 2,405) and 2006 (n = 2,014) and an online commercial panel in 2008 (n =…

  10. On grey levels in random CAPTCHA generation

    NASA Astrophysics Data System (ADS)

    Newton, Fraser; Kouritzin, Michael A.

    2011-06-01

    A CAPTCHA is an automatically generated test designed to distinguish between humans and computer programs; specifically, CAPTCHAs are designed to be easy for humans but difficult for computer programs to pass, in order to prevent the abuse of resources by automated bots. They are commonly seen guarding webmail registration forms and online auction sites, and preventing brute-force attacks on passwords. In the following, we address the question: How does adding a grey level to random CAPTCHA generation affect the utility of the CAPTCHA? We treat the problem of generating the random CAPTCHA as one of random field simulation: an initial state of background noise is evolved over time using Gibbs sampling and an efficient algorithm for generating correlated random variables. This approach has already been found to yield highly readable yet difficult-to-crack CAPTCHAs. We detail how the requisite parameters for introducing grey levels are estimated and how we generate the random CAPTCHA. The resulting CAPTCHA is evaluated in terms of human readability as well as its resistance to automated attacks in the forms of character segmentation and optical character recognition.
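
    The Gibbs-sampling step generalizes naturally from binary to grey-level fields. The sketch below samples a Potts-like field with a handful of grey levels under an assumed neighbor-agreement potential; it is a generic illustration, not the model or parameters estimated in the paper.

        import numpy as np

        def gibbs_grey_field(shape=(64, 64), levels=4, beta=1.2, sweeps=20, seed=0):
            """Gibbs sampling of a Potts-like field: each site is resampled
            from its conditional distribution given its four neighbors, which
            favors locally constant grey values (toroidal boundary)."""
            rng = np.random.default_rng(seed)
            f = rng.integers(levels, size=shape)
            rows, cols = shape
            for _ in range(sweeps):
                for i in range(rows):
                    for j in range(cols):
                        nbrs = [f[(i - 1) % rows, j], f[(i + 1) % rows, j],
                                f[i, (j - 1) % cols], f[i, (j + 1) % cols]]
                        agree = np.array([sum(n == g for n in nbrs) for g in range(levels)])
                        p = np.exp(beta * agree)
                        f[i, j] = rng.choice(levels, p=p / p.sum())
            return f

        print(gibbs_grey_field().mean())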

  11. Handwashing with soap or alcoholic solutions? A randomized clinical trial of its effectiveness.

    PubMed

    Zaragoza, M; Sallés, M; Gomez, J; Bayas, J M; Trilla, A

    1999-06-01

    The effectiveness of an alcoholic solution compared with the standard hygienic handwashing procedure during regular work in clinical wards and intensive care units of a large public university hospital in Barcelona was assessed. A prospective, randomized clinical trial with crossover design, paired data, and blind evaluation was done. Eligible health care workers (HCWs) included permanent and temporary HCWs of wards and intensive care units. From each category, a random sample of persons was selected. HCWs were randomly assigned to regular handwashing (liquid soap and water) or handwashing with the alcoholic solution by using a crossover design. The number of colony-forming units on agar plates from hand prints in 3 different samples was counted. A total of 47 HCWs were included. The average reduction in the number of colony-forming units from samples before handwashing to samples after handwashing was 49.6% for soap and water and 88.2% for the alcoholic solution. When both methods were compared, the average number of colony-forming units recovered after the procedure showed a statistically significant difference in favor of the alcoholic solution (P <.001). The alcoholic solution was well tolerated by HCWs. The overall acceptance rate was classified as "good" by 72% of HCWs after 2 weeks' use. Of all HCWs included, 9.3% stated that the use of the alcoholic solution worsened minor pre-existing skin conditions. Although the regular use of hygienic soap-and-water handwashing procedures is the gold standard, the use of alcoholic solutions is effective and safe and deserves more attention, especially in situations in which the handwashing compliance rate is hampered by architectural problems (lack of sinks) or nursing work overload.

  12. Monte Carlo Sampling in Fractal Landscapes

    NASA Astrophysics Data System (ADS)

    Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.

    2013-05-01

    We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.
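
    For readers unfamiliar with the baseline the authors generalize, a bare-bones Wang-Landau loop looks like the following; the adaptive step length that makes the method work on fractal landscapes is precisely what this generic sketch omits.

        import numpy as np

        def wang_landau(energy_of, n_states, n_bins, sweeps=20, seed=0):
            """Bare-bones Wang-Landau with a fixed f-reduction schedule: a
            random walk over states is accepted with min(1, g[E]/g[E']),
            which flattens visits across energy bins; log g then estimates
            the (unnormalized) density of states."""
            rng = np.random.default_rng(seed)
            log_g = np.zeros(n_bins)
            log_f = 1.0
            state = rng.integers(n_states)
            e = energy_of(state)
            for _ in range(sweeps):
                for _ in range(10 * n_states):
                    new = rng.integers(n_states)        # naive uniform proposal
                    e_new = energy_of(new)
                    if np.log(rng.random()) < log_g[e] - log_g[e_new]:
                        state, e = new, e_new
                    log_g[e] += log_f
                log_f /= 2.0                            # standard f-reduction
            return log_g - log_g.min()

        # Toy check: 1000 states whose energy bin is index // 100, so the
        # true density of states is flat across the 10 bins.
        print(wang_landau(lambda s: s // 100, n_states=1000, n_bins=10))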

  13. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.

  14. Quality of different in-clinic test systems for feline immunodeficiency virus and feline leukaemia virus infection.

    PubMed

    Hartmann, Katrin; Griessmayr, Pascale; Schulz, Bianka; Greene, Craig E; Vidyashankar, Anand N; Jarrett, Os; Egberink, Herman F

    2007-12-01

    Many new diagnostic in-house tests for identification of feline immunodeficiency virus (FIV) and feline leukaemia virus (FeLV) infection have been licensed for use in veterinary practice, and the question of the relative merits of these kits has prompted comparative studies. This study was designed to define the strengths and weaknesses of seven FIV and eight FeLV tests that are commercially available. In this study, 536 serum samples from randomly selected cats were tested. Those samples reacting FIV-positive in at least one of the tests were confirmed by Western blot, and those reacting FeLV-positive were confirmed by virus isolation. In addition, a random selection of samples testing negative in all test systems was re-tested by Western blot (100 samples) and by virus isolation (81 samples). Specificity, sensitivity, positive and negative predictive values of each test and the quality of the results were compared.

  15. An overview of the Columbia Habitat Monitoring Program's (CHaMP) spatial-temporal design framework

    EPA Science Inventory

    We briefly review the concept of a master sample applied to stream networks in which a randomized set of stream sites is selected across a broad region to serve as a list of sites from which a subset of sites is selected to achieve multiple objectives of specific designs. The Col...

  16. Nursing Home Quality, Cost, Staffing, and Staff Mix

    ERIC Educational Resources Information Center

    Rantz, Marilyn J.; Hicks, Lanis; Grando, Victoria; Petroski, Gregory F.; Madsen, Richard W.; Mehr, David R.; Conn, Vicki; Zwygart-Staffacher, Mary; Scott, Jill; Flesner, Marcia; Bostick, Jane; Porter, Rose; Maas, Meridean

    2004-01-01

    Purpose: The purpose of this study was to describe the processes of care, organizational attributes, cost of care, staffing level, and staff mix in a sample of Missouri homes with good, average, and poor resident outcomes. Design and Methods: A three-group exploratory study design was used, with 92 nursing homes randomly selected from all nursing…

  17. Effect of Fresh Fruit Availability at Worksites on the Fruit and Vegetable Consumption of Low-Wage Employees

    ERIC Educational Resources Information Center

    Backman, Desiree; Gonzaga, Gian; Sugerman, Sharon; Francis, Dona; Cook, Sara

    2011-01-01

    Objective: To examine the impact of fresh fruit availability at worksites on the fruit and vegetable consumption and related psychosocial determinants of low-wage employees. Design: A prospective, randomized block experimental design. Setting: Seven apparel manufacturing and 2 food processing worksites. Participants: A convenience sample of 391…

  18. Effect of Ethnochemistry Practices on Secondary School Students' Attitude towards Chemistry

    ERIC Educational Resources Information Center

    Singh, Indra Sen; Chibuye, Bitwell

    2016-01-01

    The main purpose of the study was to find out the effect of ethnochemistry practices on secondary school students' attitude towards Chemistry. The design of the study was pre-test post-test control group quasiexperimental design. Two grade 11 intact classes were assigned into experimental and control groups randomly. The total sample size…

  19. School Types, Facilities and Academic Performance of Students in Senior Secondary Schools in Ondo State, Nigeria

    ERIC Educational Resources Information Center

    Alimi, Olatunji Sabitu; Ehinola, Gabriel Babatunde; Alabi, Festus Oluwole

    2012-01-01

    The study investigated the influence of school types and facilities on students' academic performance in Ondo State. It was designed to find out whether facilities and students' academic performance are related in private and public secondary schools respectively. Descriptive survey design was used. Proportionate random sampling technique was used…

  20. Internet-Based Interventions Have Potential to Affect Short-Term Mediators and Indicators of Dietary Behavior of Young Adults

    ERIC Educational Resources Information Center

    Park, Amanda; Nitzke, Susan; Kritsch, Karen; Kattelmann, Kendra; White, Adrienne; Boeckner, Linda; Lohse, Barbara; Hoerr, Sharon; Greene, Geoffrey; Zhang, Zhumin

    2008-01-01

    Objective: Evaluate a theory-based, Internet-delivered nutrition education module. Design: Randomized, treatment-control design with pre-post intervention assessments. Setting and Participants: Convenience sample of 160 young adults (aged 18-24) recruited by community educators in 4 states. Study completers (n = 96) included a mix of…

  1. Officer Career Development: Analytic Strategy Recommendations

    DTIC Science & Technology

    1989-07-01

    model is perhaps better suited to experimental designs where investigators determine discrete values for each Xi and then randomly sample subjects into... [the remainder of this record's indexed snippet consists of garbled fragments of the report's reference list]

  2. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
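
    The iterative most-dissimilar-site idea can be approximated with a greedy max-min distance rule over standardized environmental factors. The sketch below is that stand-in, not the MaxEnt modeling itself, and the candidate-site data are simulated.

        import numpy as np

        def greedy_dissimilar_sites(env, k=8):
            """Repeatedly pick the candidate whose minimum standardized
            Euclidean distance to the already-chosen sites is largest.
            `env` holds one row of environmental covariates per candidate."""
            z = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize factors
            chosen = [0]                                     # arbitrary seed site
            for _ in range(k - 1):
                d = np.min(np.linalg.norm(z[:, None, :] - z[None, chosen, :], axis=2), axis=1)
                chosen.append(int(np.argmax(d)))             # most dissimilar candidate
            return chosen

        # 500 candidate sites x 4 factors (temperature, precipitation,
        # elevation, vegetation class), all simulated here.
        env = np.random.default_rng(0).normal(size=(500, 4))
        print(greedy_dissimilar_sites(env))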

  3. Design and implementation of a dental caries prevention trial in remote Canadian Aboriginal communities

    PubMed Central

    2010-01-01

    Background The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Methods/design Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. Discussion In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. Trial registration This trial is registered as ISRCTN41467632. PMID:20465831

  4. NEKTON-HABITAT ASSOCIATIONS IN A PACIFIC NORTHWEST (USA) ESTUARY

    EPA Science Inventory

    Nekton−habitat associations were determined in Yaquina Bay, Oregon, United States, using a stratified-by-habitat, random, estuary-wide sampling design. Three habitats (intertidal eelgrass [Zostera marina], mud shrimp [Upogebia pugettensis], and ghost shrimp [Neotrypaea californie...

  5. Measuring Clinical Decision Support Influence on Evidence-Based Nursing Practice.

    PubMed

    Cortez, Susan; Dietrich, Mary S; Wells, Nancy

    2016-07-01

    To measure the effect of clinical decision support (CDS) on oncology nurse evidence-based practice (EBP). Longitudinal cluster-randomized design conducted in four distinctly separate oncology clinics associated with an academic medical center. The study sample comprised randomly selected data elements from the nursing documentation software: patient-reported symptoms and the associated nurse interventions. The total sample was 600 observations, derived from baseline, posteducation, and postintervention samples of 200 each (100 in the intervention group and 100 in the control group for each sample). The cluster design was used to support randomization of the study intervention at the clinic level rather than the individual participant level to reduce possible diffusion of the study intervention. An elongated data collection cycle (11 weeks) controlled for temporary increases in nurse EBP related to the education or CDS intervention. The dependent variable was the nurse evidence-based documentation rate, calculated from the nurse-documented interventions; the independent variable was the CDS added to the nursing documentation software. The average EBP rate at baseline for the control and intervention groups was 27%. After education, the average EBP rate increased to 37% and then decreased to 26% in the postintervention sample. Mixed-model linear statistical analysis revealed no significant interaction of group by sample; the CDS intervention did not result in an increase in nurse EBP. EBP education increased nurse EBP documentation rates significantly but only temporarily. Nurses may have used evidence in practice but may not have documented their interventions. More research is needed to understand the complex relationship between CDS, nursing practice, and nursing EBP intervention documentation. CDS may have a different effect on nurse EBP, physician EBP, and other medical professional EBP.

  6. Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows

    Treesearch

    Thomas B. Lynch; David Hamlin; Mark J. Ducey

    2016-01-01

    Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...

  7. Variable density randomized stack of spirals (VDR-SoS) for compressive sensing MRI.

    PubMed

    Valvano, Giuseppe; Martini, Nicola; Landini, Luigi; Santarelli, Maria Filomena

    2016-07-01

    To develop a 3D sampling strategy based on a stack of variable density spirals for compressive sensing MRI. A random sampling pattern was obtained by rotating each spiral by a random angle and by delaying the gradient waveforms of the different interleaves by a few time steps. A three-dimensional (3D) variable sampling density was obtained by designing different variable density spirals for each slice encoding. The proposed approach was tested with phantom simulations up to a five-fold undersampling factor. Fully sampled 3D datasets of a human knee and of a human brain were obtained from a healthy volunteer. The proposed approach was tested with off-line reconstructions of the knee dataset up to a four-fold acceleration and compared with other noncoherent trajectories. The proposed approach outperformed the standard stack of spirals for various undersampling factors. The level of coherence and the reconstruction quality of the proposed approach were similar to those of other trajectories that, however, require 3D gridding for the reconstruction. The variable density randomized stack of spirals (VDR-SoS) is an easily implementable trajectory that could represent a valid sampling strategy for 3D compressive sensing MRI. It guarantees low levels of coherence without requiring 3D gridding. Magn Reson Med 76:59-69, 2016. © 2015 Wiley Periodicals, Inc.

  8. Rationale and design of the HOME trial: A pragmatic randomized controlled trial of home-based human papillomavirus (HPV) self-sampling for increasing cervical cancer screening uptake and effectiveness in a U.S. healthcare system.

    PubMed

    Winer, Rachel L; Tiro, Jasmin A; Miglioretti, Diana L; Thayer, Chris; Beatty, Tara; Lin, John; Gao, Hongyuan; Kimbel, Kilian; Buist, Diana S M

    2018-01-01

    Women who delay or do not attend Papanicolaou (Pap) screening are at increased risk for cervical cancer. Trials in countries with organized screening programs have demonstrated that mailing high-risk (hr) human papillomavirus (HPV) self-sampling kits to under-screened women increases participation, but U.S. data are lacking. HOME is a pragmatic randomized controlled trial set within a U.S. integrated healthcare delivery system to compare two programmatic approaches for increasing cervical cancer screening uptake and effectiveness in under-screened women (≥3.4 years since last Pap) aged 30-64 years: 1) usual care (annual patient reminders and ad hoc outreach by clinics) and 2) usual care plus mailed hrHPV self-screening kits. Over 2.5 years, eligible women were identified through electronic medical record (EMR) data and randomized 1:1 to the intervention or control arm. Women in the intervention arm were mailed kits with pre-paid envelopes to return samples to the central clinical laboratory for hrHPV testing. Results were documented in the EMR to notify women's primary care providers of appropriate follow-up. Primary outcomes are detection and treatment of cervical neoplasia. Secondary outcomes are cervical cancer screening uptake, abnormal screening results, and women's experiences and attitudes towards hrHPV self-sampling and follow-up of hrHPV-positive results (measured through surveys and interviews). The trial was designed to evaluate whether a programmatic strategy incorporating hrHPV self-sampling is effective in promoting adherence to the complete screening process (including follow-up of abnormal screening results and treatment). The objective of this report is to describe the rationale and design of this pragmatic trial. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. MAP: an iterative experimental design methodology for the optimization of catalytic search space structure modeling.

    PubMed

    Baumes, Laurent A

    2006-01-01

    One of the main problems in high-throughput research for materials remains the design of experiments. At early stages of discovery programs, purely exploratory methodologies coupled with fast screening tools should be employed. This should lead to opportunities to find unexpected catalytic results and to identify the "groups" of catalyst outputs, providing well-defined boundaries for future optimizations. However, very few recent papers deal with strategies that guide exploratory studies. Mostly, traditional designs, homogeneous coverings, or simple random samplings are exploited. Typical catalytic output distributions exhibit unbalanced datasets on which efficient learning is hard to carry out, and interesting but rare classes usually go unrecognized. Here, a new iterative algorithm for characterizing the structure of the search space is suggested, working independently of the learning process. It enhances recognition rates by transferring catalysts to be screened from "performance-stable" zones of the space to "unsteady" ones, which require more experiments to be well modeled. Evaluating new algorithms through benchmarks is essential because there is no prior evidence of their efficiency. The method is detailed and thoroughly tested with mathematical functions exhibiting different levels of complexity. The strategy is not only evaluated empirically; the effect of sampling on subsequent machine learning performance is also quantified. The minimum sample size required by the algorithm to be statistically discriminated from simple random sampling is also investigated.

  10. Challenges to Successful Total Quality Management Implementation in Public Secondary Schools: A Case Study of Kohat District, Pakistan

    ERIC Educational Resources Information Center

    Suleman, Qaiser; Gul, Rizwana

    2015-01-01

    The current study explores the challenges faced by public secondary schools in successful implementation of total quality management (TQM) in Kohat District. A sample of 25 heads and 75 secondary school teachers selected from 25 public secondary schools through a simple random sampling technique was used. A descriptive research design was used and a…

  11. The National Center on Indigenous Hawaiian Behavioral Health Study of Prevalence of Psychiatric Disorders in Native Hawaiian Adolescents

    ERIC Educational Resources Information Center

    Andrade, Naleen N.; Hishinuma, Earl S.; McDermott, John F., Jr.; Johnson, Ronald C.; Goebert, Deborah A.; Makini, George K., Jr.; Nahulu, Linda B.; Yuen, Noelle Y. C.; McArdle, John J.; Bell, Cathy K.; Carlton, Barry S.; Miyamoto, Robin H.; Nishimura, Stephanie T.; Else, Iwalani R. N.; Guerrero, Anthony P. S.; Darmal, Arsalan; Yates, Alayne; Waldron, Jane A.

    2006-01-01

    Objectives: The prevalence rates of disorders among a community-based sample of Hawaiian youths were determined and compared to previously published epidemiological studies. Method: Using a two-phase design, 7,317 adolescents were surveyed (60% participation rate), from which 619 were selected in a modified random sample during the 1992-1993 to…

  12. How Professionalized Is College Teaching? Norms and the Ideal of Service. ASHE Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Braxton, John M.; Bayer, Alan E.

    This study examined the behavioral expectations and norms for college and university faculty, particularly whether they varied with the level of commitment to teaching at different institutions and in different disciplines. A cluster sampling design was used to select a random sample of the population of faculty in biology, history,…

  13. How Much Videos Win over Audios in Listening Instruction for EFL Learners

    ERIC Educational Resources Information Center

    Yasin, Burhanuddin; Mustafa, Faisal; Permatasari, Rizki

    2017-01-01

    This study aims at comparing the benefits of using videos instead of audios for improving students' listening skills. This experimental study used a pre-test and post-test control group design. The sample, selected by cluster random sampling, comprised 32 second-year high school students in each group. The instruments used were…

  14. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    PubMed Central

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits only from the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the differing behavior of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
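
    A simplified, nonparametric cousin of the inverse-probability idea behind these corrections (the actual stochastic oversampling and parametric bagging algorithms live in the authors' R package sambia): resample the biased training set with weights proportional to 1/p, where p is each row's selection probability.

        import numpy as np

        def ip_oversample(X, y, p_select, n_out, seed=0):
            """Draw n_out rows with replacement, with probability proportional
            to the inverse of each row's selection probability, so the
            resample resembles the source population."""
            rng = np.random.default_rng(seed)
            w = 1.0 / np.asarray(p_select)
            idx = rng.choice(len(y), size=n_out, replace=True, p=w / w.sum())
            return X[idx], y[idx]

        # Toy case-control data: cases (y=1) selected with probability 1.0,
        # controls (y=0) with probability 0.1, so controls get 10x the weight.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 5))
        y = np.r_[np.ones(150), np.zeros(150)]
        p_select = np.where(y == 1, 1.0, 0.1)
        Xr, yr = ip_oversample(X, y, p_select, n_out=1000)
        print(yr.mean())   # close to the implied population case fraction ~0.09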

  15. A comparison of adaptive sampling designs and binary spatial models: A simulation study using a census of Bromus inermis

    USGS Publications Warehouse

    Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa

    2013-01-01

    Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.

  16. Research Designs for Intervention Research with Small Samples II: Stepped Wedge and Interrupted Time-Series Designs.

    PubMed

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, and Brown, Prevention Science, 2015) in this special section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small-sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples.

  17. Research Designs for Intervention Research with Small Samples II: Stepped Wedge and Interrupted Time-Series Designs

    PubMed Central

    Ting Fok, Carlotta Ching; Henry, David; Allen, James

    2015-01-01

    The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs, and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, and Brown, 2015) in this Special Section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small-sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples. PMID:26017633

  18. Methodological Rigor in Preclinical Cardiovascular Studies

    PubMed Central

    Ramirez, F. Daniel; Motazedian, Pouya; Jung, Richard G.; Di Santo, Pietro; MacDonald, Zachary D.; Moreland, Robert; Simard, Trevor; Clancy, Aisling A.; Russo, Juan J.; Welch, Vivian A.; Wells, George A.

    2017-01-01

    Rationale: Methodological sources of bias and suboptimal reporting contribute to irreproducibility in preclinical science and may negatively affect research translation. Randomization, blinding, sample size estimation, and considering sex as a biological variable are deemed crucial study design elements to maximize the quality and predictive value of preclinical experiments. Objective: To examine the prevalence and temporal patterns of recommended study design element implementation in preclinical cardiovascular research. Methods and Results: All articles published over a 10-year period in 5 leading cardiovascular journals were reviewed. Reports of in vivo experiments in nonhuman mammals describing pathophysiology, genetics, or therapeutic interventions relevant to specific cardiovascular disorders were identified. Data on study design and animal model use were collected. Citations at 60 months were additionally examined as a surrogate measure of research impact in a prespecified subset of studies, stratified by individual and cumulative study design elements. Of 28 636 articles screened, 3396 met inclusion criteria. Randomization was reported in 21.8%, blinding in 32.7%, and sample size estimation in 2.3%. Temporal and disease-specific analyses show that the implementation of these study design elements has overall not appreciably increased over the past decade, except in preclinical stroke research, which has uniquely demonstrated significant improvements in methodological rigor. In a subset of 1681 preclinical studies, randomization, blinding, sample size estimation, and inclusion of both sexes were not associated with increased citations at 60 months. Conclusions: Methodological shortcomings are prevalent in preclinical cardiovascular research, have not substantially improved over the past 10 years, and may be overlooked when basing subsequent studies. Resultant risks of bias and threats to study validity have the potential to hinder progress in cardiovascular medicine as preclinical research often precedes and informs clinical trials. Stroke research quality has uniquely improved in recent years, warranting a closer examination for interventions to model in other cardiovascular fields. PMID:28373349

  19. Methodological Rigor in Preclinical Cardiovascular Studies: Targets to Enhance Reproducibility and Promote Research Translation.

    PubMed

    Ramirez, F Daniel; Motazedian, Pouya; Jung, Richard G; Di Santo, Pietro; MacDonald, Zachary D; Moreland, Robert; Simard, Trevor; Clancy, Aisling A; Russo, Juan J; Welch, Vivian A; Wells, George A; Hibbert, Benjamin

    2017-06-09

    Methodological sources of bias and suboptimal reporting contribute to irreproducibility in preclinical science and may negatively affect research translation. Randomization, blinding, sample size estimation, and considering sex as a biological variable are deemed crucial study design elements to maximize the quality and predictive value of preclinical experiments. To examine the prevalence and temporal patterns of recommended study design element implementation in preclinical cardiovascular research. All articles published over a 10-year period in 5 leading cardiovascular journals were reviewed. Reports of in vivo experiments in nonhuman mammals describing pathophysiology, genetics, or therapeutic interventions relevant to specific cardiovascular disorders were identified. Data on study design and animal model use were collected. Citations at 60 months were additionally examined as a surrogate measure of research impact in a prespecified subset of studies, stratified by individual and cumulative study design elements. Of 28 636 articles screened, 3396 met inclusion criteria. Randomization was reported in 21.8%, blinding in 32.7%, and sample size estimation in 2.3%. Temporal and disease-specific analyses show that the implementation of these study design elements has overall not appreciably increased over the past decade, except in preclinical stroke research, which has uniquely demonstrated significant improvements in methodological rigor. In a subset of 1681 preclinical studies, randomization, blinding, sample size estimation, and inclusion of both sexes were not associated with increased citations at 60 months. Methodological shortcomings are prevalent in preclinical cardiovascular research, have not substantially improved over the past 10 years, and may be overlooked when basing subsequent studies. Resultant risks of bias and threats to study validity have the potential to hinder progress in cardiovascular medicine as preclinical research often precedes and informs clinical trials. Stroke research quality has uniquely improved in recent years, warranting a closer examination for interventions to model in other cardiovascular fields. © 2017 The Authors.

  20. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
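
    For the special case of a simple random sample of partition units (ignoring the two-stage weighting of the general estimators), the four accuracy summaries reduce to the familiar formulas sketched below on simulated proportions.

        import numpy as np

        def composition_accuracy(map_prop, ref_prop):
            """Per-class accuracy of land-cover composition for n sampled
            units: MD, MAD, RMSE of map-minus-reference deviations, and the
            correlation between map and reference proportions."""
            d = np.asarray(map_prop) - np.asarray(ref_prop)
            return {
                "MD": d.mean(),
                "MAD": np.abs(d).mean(),
                "RMSE": np.sqrt((d ** 2).mean()),
                "CORR": np.corrcoef(map_prop, ref_prop)[0, 1],
            }

        rng = np.random.default_rng(0)
        ref = rng.uniform(0, 0.3, size=100)                   # reference proportions
        mp = np.clip(ref + rng.normal(0, 0.05, 100), 0, 1)    # map proportions with error
        print(composition_accuracy(mp, ref))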

  1. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
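
    A back-of-envelope version of the estimator and its uncertainty: N = M / P with a delta-method variance, where the variance of P is inflated by an assumed design effect for the survey. All numbers below are illustrative, not from the Harare study.

        from math import sqrt
        from statistics import NormalDist

        def multiplier_estimate(M, p_hat, n, deff=2.0, alpha=0.05):
            """Treating M as fixed, var(N_hat) ~ (M / P^2)^2 * var(P_hat),
            with var(P_hat) = deff * P * (1 - P) / n for an RDS-style survey."""
            N = M / p_hat
            var_p = deff * p_hat * (1 - p_hat) / n
            se_N = (M / p_hat ** 2) * sqrt(var_p)
            z = NormalDist().inv_cdf(1 - alpha / 2)
            return N, (N - z * se_N, N + z * se_N)

        # Hypothetical inputs: 500 unique objects distributed, 25% reported
        # receipt among 400 respondents, assumed design effect of 2.
        print(multiplier_estimate(M=500, p_hat=0.25, n=400))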

  2. SAS procedures for designing and analyzing sample surveys

    USGS Publications Warehouse

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

    Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).

  3. A Comparison of Seventh Grade Thai Students' Reading Comprehension and Motivation to Read English through Applied Instruction Based on the Genre-Based Approach and the Teacher's Manual

    ERIC Educational Resources Information Center

    Sawangsamutchai, Yutthasak; Rattanavich, Saowalak

    2016-01-01

    The objective of this research is to compare the English reading comprehension and motivation to read of seventh grade Thai students taught with applied instruction through the genre-based approach and teachers' manual. A randomized pre-test post-test control group design was used through the cluster random sampling technique. The data were…

  4. An Empirical Comparison of Methods for Equating with Randomly Equivalent Groups of 50 to 400 Test Takers. Research Report. ETS RR-10-05

    ERIC Educational Resources Information Center

    Livingston, Samuel A.; Kim, Sooyeon

    2010-01-01

    A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…

  5. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree from a classification-performance point of view. In general, multiclass datasets with an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample size, and it is therefore well suited to testing our resampling-based training strategy. The dataset contains 452 samples across fourteen types of arrhythmia, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to assess the efficiency of the proposed training strategy. The resulting accuracy of the classifier is 90.0%, which is a high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography, and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the simple random sampling strategy in training the RF ensemble classification algorithm. Copyright © 2011 Elsevier Ltd. All rights reserved.
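
    A minimal sketch of this two-part pipeline, assuming scikit-learn and a synthetic surrogate for the arrhythmia data (the real dataset is not bundled here); the univariate correlation ranking below only approximates correlation-based feature selection, and all sizes and names are illustrative:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.utils import resample

      # synthetic surrogate for the arrhythmia data: multiclass and unbalanced
      X, y = make_classification(n_samples=452, n_features=50, n_informative=12,
                                 n_classes=5, n_clusters_per_class=1,
                                 weights=[0.5, 0.25, 0.15, 0.07, 0.03],
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      # (i) crude stand-in for correlation-based feature selection: rank features
      # by absolute correlation with the class label and keep the top 15
      corr = np.abs([np.corrcoef(X_tr[:, j], y_tr)[0, 1] for j in range(50)])
      keep = np.argsort(corr)[-15:]

      # (ii) simple random resampling: draw an equal-size sample per class so
      # the forest is not dominated by the large classes
      parts = [resample(X_tr[y_tr == c][:, keep], y_tr[y_tr == c], n_samples=60,
                        random_state=0) for c in np.unique(y_tr)]
      X_bal = np.vstack([p[0] for p in parts])
      y_bal = np.concatenate([p[1] for p in parts])

      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      rf.fit(X_bal, y_bal)
      print("held-out accuracy:", rf.score(X_te[:, keep], y_te))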

  6. Utility-based designs for randomized comparative trials with categorical outcomes

    PubMed Central

    Murray, Thomas A.; Thall, Peter F.; Yuan, Ying

    2016-01-01

    A general utility-based testing methodology for design and conduct of randomized comparative clinical trials with categorical outcomes is presented. Numerical utilities of all elementary events are elicited to quantify their desirabilities. These numerical values are used to map the categorical outcome probability vector of each treatment to a mean utility, which is used as a one-dimensional criterion for constructing comparative tests. Bayesian tests are presented, including fixed sample and group sequential procedures, assuming Dirichlet-multinomial models for the priors and likelihoods. Guidelines are provided for establishing priors, eliciting utilities, and specifying hypotheses. Efficient posterior computation is discussed, and algorithms are provided for jointly calibrating test cutoffs and sample size to control overall type I error and achieve specified power. Asymptotic approximations for the power curve are used to initialize the algorithms. The methodology is applied to re-design a completed trial that compared two chemotherapy regimens for chronic lymphocytic leukemia, in which an ordinal efficacy outcome was dichotomized and toxicity was ignored to construct the trial’s design. The Bayesian tests also are illustrated by several types of categorical outcomes arising in common clinical settings. Freely available computer software for implementation is provided. PMID:27189672
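
    The core mapping from a categorical outcome probability vector to a one-dimensional mean utility is easy to sketch. The fragment below, with invented utilities, counts, and a flat Dirichlet prior, estimates the posterior probability that one arm has the higher mean utility; the published design additionally calibrates test cutoffs and sample size to control type I error and achieve power:

      import numpy as np

      rng = np.random.default_rng(1)
      utilities = np.array([100, 60, 30, 0])    # elicited desirabilities (invented)

      # Dirichlet-multinomial: flat Dir(1,1,1,1) prior plus outcome counts per arm
      counts_A = np.array([20, 30, 25, 15])     # hypothetical trial data
      counts_B = np.array([30, 32, 18, 10])
      post_A = rng.dirichlet(1 + counts_A, size=100_000)
      post_B = rng.dirichlet(1 + counts_B, size=100_000)

      # map each posterior outcome-probability vector to a mean utility,
      # then compare the two arms on that one-dimensional scale
      mu_A = post_A @ utilities
      mu_B = post_B @ utilities
      print("Pr(mean utility of B > A | data) =", (mu_B > mu_A).mean())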

  7. Experimental Design in Clinical 'Omics Biomarker Discovery.

    PubMed

    Forshed, Jenny

    2017-11-03

    This tutorial highlights some issues in the experimental design of clinical 'omics biomarker discovery: how to avoid bias and obtain quantities from biochemical analyses that are as close to the true values as possible, and how to select samples to improve the chance of answering the clinical question at issue. This includes the importance of defining the clinical aim and end point, knowing the variability in the results, randomization of samples, sample size, statistical power, and how to avoid confounding factors by including clinical data in the sample selection; that is, how to avoid unpleasant surprises at the point of statistical analysis. The aim of this Tutorial is to help translational clinical and preclinical biomarker candidate research and to improve the validity and potential of future biomarker candidate findings.

  8. Model-based quantification of image quality

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Miller, Keith W.; Park, Stephen K.

    1989-01-01

    In 1982, Park and Schowengerdt published an end-to-end analysis of a digital imaging system quantifying three principal degradation components: (1) image blur - blurring caused by the acquisition system, (2) aliasing - caused by insufficient sampling, and (3) reconstruction blur - blurring caused by imperfect interpolative reconstruction. This analysis, which measures degradation as the square of the radiometric error, includes the sample-scene phase as an explicit random parameter and characterizes the image degradation caused by imperfect acquisition and reconstruction together with the effects of undersampling and random sample-scene phases. In a recent paper, Mitchell and Netravali displayed the visual effects of the above-mentioned degradations and presented a subjective analysis of their relative importance in determining image quality. The primary aim of the research is to use the analysis of Park and Schowengerdt to correlate their mathematical criteria for measuring image degradations with subjective visual criteria. Insight gained from this research can be exploited in the end-to-end design of optical systems, so that system parameters (transfer functions of the acquisition and display systems) can be designed relative to each other to obtain the best possible results using quantitative measurements.

  9. Design of a Phase III cluster randomized trial to assess the efficacy and safety of a malaria transmission blocking vaccine.

    PubMed

    Delrieu, Isabelle; Leboulleux, Didier; Ivinson, Karen; Gessner, Bradford D

    2015-03-24

    Vaccines interrupting Plasmodium falciparum malaria transmission targeting sexual, sporogonic, or mosquito-stage antigens (SSM-VIMT) are currently under development to reduce malaria transmission. An international group of malaria experts was established to evaluate the feasibility and optimal design of a Phase III cluster randomized trial (CRT) that could support regulatory review and approval of an SSM-VIMT. The consensus design is a CRT with a sentinel population randomly selected from defined inner and buffer zones in each cluster, a cluster size sufficient to assess true vaccine efficacy in the inner zone, and ongoing assessment of vaccine impact stratified by distance of residence from the cluster edge. Trials should be conducted first in areas of moderate transmission, where SSM-VIMT impact should be greatest. Sample size estimates suggest that such a trial is feasible and within the range of previously supported trials of malaria interventions, although substantial implementation issues exist. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. The Perils of Ignoring Design Effects in Experimental Studies: Lessons from a Mammography Screening Trial

    PubMed Central

    Glenn, Beth A.; Bastani, Roshan; Maxwell, Annette E.

    2013-01-01

    Objective Threats to external validity including pretest sensitization and the interaction of selection and an intervention are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomized trial designed to promote mammography use in a high risk sample of women. Design During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilizing two different designs (pretest-posttest control group design; posttest only control group design). Results Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. Conclusion These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity. PMID:23289517

  11. MicroRNA array normalization: an evaluation using a randomized dataset as the benchmark.

    PubMed

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays.
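
    The blocked randomization step that distinguishes the benchmark dataset is straightforward to emulate. Below is a hypothetical sketch (sample labels, block size, and helper name are all invented, not the authors' code) that assigns arrays to tumor samples in balanced blocks so that array-processing order cannot be confounded with tumor group:

      import random

      def blocked_assignment(group1, group2, arrays, seed=7):
          """Hypothetical sketch of a blocked randomization design: assign
          arrays to samples in balanced blocks of 4 (2 per tumor group),
          randomizing order within each block. Assumes equal-sized groups."""
          rng = random.Random(seed)
          g1, g2 = group1[:], group2[:]
          rng.shuffle(g1)
          rng.shuffle(g2)
          order = []
          while g1 or g2:
              block = g1[:2] + g2[:2]            # each block balanced by group
              g1, g2 = g1[2:], g2[2:]
              rng.shuffle(block)                 # randomize order within block
              order.extend(block)
          return dict(zip(order, arrays))        # sample -> array (run order)

      print(blocked_assignment([f"T{i}" for i in range(8)],
                               [f"N{i}" for i in range(8)],
                               [f"array{i:02d}" for i in range(16)]))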

  12. MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark

    PubMed Central

    Qin, Li-Xuan; Zhou, Qin

    2014-01-01

    MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456

  13. Application and testing of a procedure to evaluate transferability of habitat suitability criteria

    USGS Publications Warehouse

    Thomas, Jeff A.; Bovee, Ken D.

    1993-01-01

    A procedure designed to test the transferability of habitat suitability criteria was evaluated in the Cache la Poudre River, Colorado. Habitat suitability criteria were developed for active adult and juvenile rainbow trout in the South Platte River, Colorado. These criteria were tested by comparing microhabitat use predicted from the criteria with observed microhabitat use by adult rainbow trout in the Cache la Poudre River. A one-sided χ² test, using counts of occupied and unoccupied cells in each suitability classification, was used to test for non-random selection for optimum habitat use over usable habitat and for suitable over unsuitable habitat. Criteria for adult rainbow trout were judged to be transferable to the Cache la Poudre River, but juvenile criteria (applied to adults) were not transferable. Random subsampling of occupied and unoccupied cells was conducted to determine the effect of sample size on the reliability of the test procedure. The incidence of type I and type II errors increased rapidly as the sample size was reduced below 55 occupied and 200 unoccupied cells. Recommended modifications to the procedure included the adoption of a systematic or randomized sampling design and direct measurement of microhabitat variables. With these modifications, the procedure is economical, simple and reliable. Use of the procedure as a quality assurance device in routine applications of the instream flow incremental methodology was encouraged.

  14. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  15. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  16. Investigation of Hall Effect Thruster Channel Wall Erosion Mechanisms

    DTIC Science & Technology

    2016-08-02

    [Fragments from figure captions: pre- and post-test height and laser images of fused silica sample SA6 (loaded), 20x, center of exposed surface. On all the pre-roughened samples, a cell-pattern developed from the random... stress on the surface features developed during plasma erosion. The experiment is also designed specifically to test the SRH. A test fixture is...]

  17. Design of Phase II Non-inferiority Trials.

    PubMed

    Jung, Sin-Ho

    2017-09-01

    With the development of inexpensive treatment regimens and less invasive surgical procedures, we are increasingly confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than a similar standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, due to the lack of phase II non-inferiority trial design methods, we do not have an opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. First, we review a design method for non-inferiority phase III trials. Subsequently, we propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples. Each of the proposed design methods is shown to require a reasonable sample size for non-inferiority phase II trials. The three non-inferiority phase II trial designs are used under different settings but require similar sample sizes that are typical for phase II trials.
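
    The "roughly four times" factor follows from the usual normal-approximation sample size formula: if the non-inferiority margin is set to half the effect a superiority trial would target, the per-arm sample size quadruples. A back-of-the-envelope sketch with illustrative numbers (a textbook formula, not the paper's specific designs):

      from scipy.stats import norm

      def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.8):
          """One-sided normal-approximation sample size per arm for a
          two-arm comparison of means (textbook sketch)."""
          z = norm.ppf(1 - alpha) + norm.ppf(power)
          return 2 * (sigma * z / delta) ** 2

      effect = 0.5       # superiority trial: detect a 0.5*sigma benefit
      margin = 0.25      # non-inferiority margin set to half that effect
      print(n_per_arm(effect))   # ~ 50 per arm
      print(n_per_arm(margin))   # ~ 198 per arm, roughly four times larger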

  18. Effects of a Worksite Tobacco Control Intervention in India: The Mumbai Worksite Tobacco Control Study, a Cluster Randomized Trial

    PubMed Central

    Sorensen, Glorian; Pednekar, Mangesh; Cordeira, Laura Shulman; Pawar, Pratibha; Nagler, Eve; Stoddard, Anne M.; Kim, Hae-Young; Gupta, Prakash C.

    2016-01-01

    Objectives We assessed a worksite intervention designed to promote tobacco control among manufacturing workers in Greater Mumbai, India. Methods We used a cluster-randomized design to test an integrated health promotion/health protection intervention, which addressed changes at the management and worker levels. Between July 2012 and July 2013, we recruited 20 worksites on a rolling basis and randomly assigned them to intervention or delayed-intervention control conditions. The follow-up survey was conducted between December 2013 and November 2014. Results The difference in 30-day quit rates between intervention and control conditions was statistically significant for production workers (OR=2.25, P=0.03), although not for the overall sample (OR=1.70; P=0.12). The intervention resulted in a doubling of the 6-month cessation rates among workers in the intervention worksites compared to those in the control, for production workers (OR=2.29; P=0.07) and for the overall sample (OR=1.81; P=0.13), but the difference did not reach statistical significance. Conclusions These findings demonstrate the potential impact of a tobacco control intervention that combined tobacco control and health protection programming within Indian manufacturing worksites. PMID:26883793

  19. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    Current practice for seeking genomically favorable patients in randomized controlled clinical trials is to use genomic convenience samples. To discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small samples. To articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance, and to highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then as a general rule of thumb a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% difference is of concern.
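
    The imbalance probabilities behind these rules of thumb can be checked with a short Monte Carlo simulation. The sketch below (hypothetical prevalence and function name) estimates the chance that the marker prevalence observed in two randomized arms differs by at least a given amount:

      import numpy as np

      def prob_imbalance(prev, n_per_arm, diff=0.20, reps=100_000, seed=3):
          """Monte Carlo estimate of the chance that the observed marker
          prevalence differs between two randomized arms by >= diff."""
          rng = np.random.default_rng(seed)
          p1 = rng.binomial(n_per_arm, prev, reps) / n_per_arm
          p2 = rng.binomial(n_per_arm, prev, reps) / n_per_arm
          return np.mean(np.abs(p1 - p2) >= diff)

      # hypothetical 30% marker prevalence: imbalance risk shrinks with arm size
      for n in (25, 50, 100):
          print(n, prob_imbalance(0.30, n))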

  20. Design and patient characteristics of ESHOL study, a Catalonian prospective randomized study.

    PubMed

    Maduell, Francisco; Moreso, Francesc; Pons, Mercedes; Ramos, Rosa; Mora-Macià, Josep; Foraster, Andreu; Soler, Jordi; Galceran, Josep M; Martinez-Castelao, Alberto

    2011-01-01

    Retrospective studies showed that online hemodiafiltration (OL-HDF) is associated with a reduction in mortality risk over standard hemodialysis (HD) in patients with end-stage renal disease. Until now, no information has been available from prospective randomized clinical trials. A prospective, randomized, multicenter, open study was designed to be conducted in HD units in Catalonia (Spain). The aim of the study is to compare 3-year survival among prevalent end-stage renal disease patients randomized either to OL-HDF or to continue on standard HD. The minimum sample size was calculated according to Catalonian mortality of patients on dialysis, assuming a risk reduction associated with OL-HDF of 35% (1-sided p<0.05 and a statistical power of 0.8) and a rate of dropout due to renal transplantation or loss to follow-up of 30%. From May 2007 to September 2008, 906 patients were included and randomized to OL-HDF (n=456) or standard HD (n=450). Demographic and analytical data at the time of randomization did not differ between the two groups. Patients will be followed for a 3-year period. The present study will contribute to evaluating the survival benefit of OL-HDF over standard HD.

  1. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.

  2. A Demonstration Sample for Poetry Education: Poem under the Light of "Poetics of the Open Work"

    ERIC Educational Resources Information Center

    Afacan, Aydin

    2016-01-01

    The aim of this study is to provide a demonstration sample for the high school stage in the light of "Poetics of the Open Work," which is considered a step towards comprehending the qualified poem. The study was built on a single-group pretest-posttest design. Independent variables are applied to a randomly selected group to…

  3. Factors Influencing Teachers' Competence in Developing Resilience in Vulnerable Children in Primary Schools in Uasin Gishu County, Kenya

    ERIC Educational Resources Information Center

    Silyvier, Tsindoli; Nyandusi, Charles

    2015-01-01

    The purpose of the study was to assess the effect of teacher characteristics on their competence in developing resilience in vulnerable primary school children. A descriptive survey research design was used. This study was based on resiliency theory as proposed by Krovetz (1998). Simple random sampling was used to select a sample size of 108…

  4. Dietary Habits and Nutritional Status in Mentally Retarded Children and Adolescents: A Study from North Western India

    ERIC Educational Resources Information Center

    Mathur, Manju; Bhargava, Rachna; Benipal, Ramandeep; Luthra, Neena; Basu, Sabita; Kaur, Jasbinder; Chavan, B. S.

    2007-01-01

    Objective: To compare the dietary habits and nutritional status of mentally retarded (MR) and normal (NG) subjects and to examine the relationship between the dietary habits and nutritional status and the level of mental retardation in the MR group. Method: A case control design was utilized: 117 MR (random sampling) and 100 NG (quota sampling)…

  5. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  6. How informative are open-label studies for youth with bipolar disorder? A meta-analysis comparing open-label versus randomized, placebo-controlled clinical trials.

    PubMed

    Biederman, Joseph; Petty, Carter R; Woodworth, K Yvonne; Lomedico, Alexandra; O'Connor, Katherine B; Wozniak, Janet; Faraone, Stephen V

    2012-03-01

    To examine the informativeness of open-label trials toward predicting results in subsequent randomized, placebo-controlled clinical trials of psychopharmacologic treatments for pediatric bipolar disorder. We searched journal articles through PubMed at the National Library of Medicine using bipolar disorder, mania, pharmacotherapy, treatment and clinical trial as keywords. This search was supplemented with scientific presentations at national and international scientific meetings and submitted manuscripts from our group. Selection criteria included (1) enrollment of children diagnosed with DSM-IV bipolar disorder; (2) prospective assessment of at least 3 weeks; (3) monotherapy of a pharmacologic treatment for bipolar disorder; (4) use of a randomized placebo-controlled design or an open-label design for the same therapeutic compound; and (5) repeated use of the Young Mania Rating Scale (YMRS) as an outcome. The following information and data were extracted from 14 studies: study design, name of medication, class of medication, dose of medication, sample size, age, sex, trial length, and YMRS mean and standard deviation baseline and follow-up scores. For both study designs, the pooled effect size was statistically significant (open-label studies, z = 8.88, P < .001; randomized placebo-controlled studies, z = 13.75, P < .001), indicating a reduction in the YMRS from baseline to endpoint in both study designs. In a meta-analysis regression, study design was not a significant predictor of mean change in the YMRS. We found similarities in the treatment effects between open-label and randomized placebo-controlled studies in youth with bipolar disorder indicating that open-label studies are useful predictors of the potential safety and efficacy of a given compound in the treatment of pediatric bipolar disorder. © Copyright 2012 Physicians Postgraduate Press, Inc.

  7. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure of feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has tracking performance that compares favorably with that of a much more complicated digital aided tracker. The parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  8. Experimental toxicology: Issues of statistics, experimental design, and replication.

    PubMed

    Briner, Wayne; Kirwan, Jeral

    2017-01-01

    The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis-driven experiments with adequate sample sizes, randomization, and blinded data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Implications of sampling design and sample size for national carbon accounting systems.

    PubMed

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample-based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, and 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, and population variability, the percent standard error relative to total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
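
    The trade-off driving the comparison can be sketched with the standard regression-estimator variance: auxiliary remote-sensing data with correlation rho to the field attribute cuts the field-only variance by a factor of (1 - rho²), at the price of imagery costs. All unit costs and population parameters below are made up for illustration; this is not the paper's cost model:

      import numpy as np

      def rse_and_cost(S, mean, n_field, rho=0.0, cost_field=500.0, cost_rs=0.0):
          """Percent standard error and total cost for n_field plots, optionally
          improved by a regression estimator using remote-sensing data with
          correlation rho to the field attribute. Unit costs are made up."""
          var = (S ** 2 / n_field) * (1 - rho ** 2)  # regression-estimator variance
          return 100 * np.sqrt(var) / mean, n_field * cost_field + cost_rs

      # carbon stock with mean 120 t/ha and between-plot sd 60 t/ha (illustrative)
      print(rse_and_cost(60, 120, 200))                           # field-only SRS
      print(rse_and_cost(60, 120, 120, rho=0.8, cost_rs=30_000))  # fewer plots + imagery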

  10. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    PubMed

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
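
    For reference, under plain simple random sampling the nonparametric AUC estimator reduces to the Mann-Whitney pairwise-comparison statistic, as sketched below with toy data; the paper's TDS estimators additionally reweight using the estimated test-result distribution:

      import numpy as np

      def empirical_auc(scores_diseased, scores_healthy):
          """Nonparametric (Mann-Whitney) AUC: the fraction of diseased/healthy
          pairs the biomarker ranks correctly, counting ties as one half."""
          d = np.asarray(scores_diseased)[:, None]
          h = np.asarray(scores_healthy)[None, :]
          return (d > h).mean() + 0.5 * (d == h).mean()

      rng = np.random.default_rng(0)     # toy biomarker: diseased shifted upward
      print(empirical_auc(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 300)))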

  11. Experimental Design Considerations for Establishing an Off-Road, Habitat-Specific Bird Monitoring Program Using Point Counts

    Treesearch

    JoAnn M. Hanowski; Gerald J. Niemi

    1995-01-01

    We established bird monitoring programs in two regions of Minnesota: the Chippewa National Forest and the Superior National Forest. The experimental design defined forest cover types as strata in which samples of forest stands were randomly selected. Subsamples (3 point counts) were placed in each stand to maximize field effort and to assess within-stand and between-...

  12. Evaluating Effect of Students' Academic Achievement on Identified Difficult Concepts in Senior Secondary School Biology in Delta State

    ERIC Educational Resources Information Center

    Agboghoroma, Tim E.; Oyovwi, E. O.

    2015-01-01

    This study evaluated the effect of students' academic achievement on identified difficult concepts or topics in Senior Secondary School Biology in Delta State, Nigeria. The study was quasi-experimental, using a 2×2 factorial non-randomized pretest-posttest control group design. The sample was drawn from intact classes from four…

  13. Evaluation of Child Health Matters: A Web-Based Tutorial to Enhance School Nurses' Communications with Families about Weight-Related Health

    ERIC Educational Resources Information Center

    Steele, Ric G.; Wu, Yelena P.; Cushing, Christopher C.; Jensen, Chad D.

    2013-01-01

    The goal of the current study was to assess the efficacy and acceptability of a web-based tutorial (Child Health Matters, CHM) designed to improve school nurses' communications with families about pediatric weight-related health issues. Using a randomized wait-list control design, a nationally representative sample of school nurses was assigned to…

  14. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
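
    A bare-bones simulation of the two-stage version conveys the fixed-sample-size property: the total is (K+1)·n1 + 2·n2 whatever the data look like. Everything below (arm effects, sizes, and the naive final z-test that ignores the interim selection) is illustrative rather than the authors' procedure:

      import numpy as np

      def drop_the_losers(n1, n2, true_effects, sigma=1.0, reps=20_000, seed=11):
          """Simulate a two-stage drop-the-losers trial: K experimental arms plus
          control with n1 patients each at stage 1; the best arm and control get
          n2 more each at stage 2. The final z-test here is naive -- it ignores
          the selection at the interim -- and is for illustration only."""
          rng = np.random.default_rng(seed)
          wins = 0
          for _ in range(reps):
              arms = [rng.normal(mu, sigma, n1) for mu in true_effects]
              ctrl = rng.normal(0.0, sigma, n1)
              best = int(np.argmax([a.mean() for a in arms]))
              x = np.concatenate([arms[best],
                                  rng.normal(true_effects[best], sigma, n2)])
              c = np.concatenate([ctrl, rng.normal(0.0, sigma, n2)])
              z = (x.mean() - c.mean()) / (sigma * np.sqrt(2.0 / (n1 + n2)))
              wins += z > 1.96
          return wins / reps

      # total sample size is fixed at (K+1)*n1 + 2*n2 regardless of the data
      print(drop_the_losers(n1=40, n2=60, true_effects=[0.0, 0.2, 0.4]))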

  15. Methods for a multicenter randomized trial for mixed urinary incontinence: rationale and patient-centeredness of the ESTEEM trial.

    PubMed

    Sung, Vivian W; Borello-France, Diane; Dunivan, Gena; Gantz, Marie; Lukacz, Emily S; Moalli, Pamela; Newman, Diane K; Richter, Holly E; Ridgeway, Beri; Smith, Ariana L; Weidner, Alison C; Meikle, Susan

    2016-10-01

    Mixed urinary incontinence (MUI) can be a challenging condition to manage. We describe the protocol design and rationale for the Effects of Surgical Treatment Enhanced with Exercise for Mixed Urinary Incontinence (ESTEEM) trial, designed to compare a combined conservative and surgical treatment approach versus surgery alone for improving patient-centered MUI outcomes at 12 months. ESTEEM is a multisite, prospective, randomized trial of female participants with MUI randomized to a standardized perioperative behavioral/pelvic floor exercise intervention plus midurethral sling versus midurethral sling alone. We describe our methods and four challenges encountered during the design phase: defining the study population, selecting relevant patient-centered outcomes, determining sample size estimates using a patient-reported outcome measure, and designing an analysis plan that accommodates MUI failure rates. A central theme in the design was patient centeredness, which guided many key decisions. Our primary outcome is patient-reported MUI symptoms measured using the Urogenital Distress Inventory (UDI) score at 12 months. Secondary outcomes include quality of life, sexual function, cost-effectiveness, time to failure, and need for additional treatment. The final study design was implemented in November 2013 across eight clinical sites in the Pelvic Floor Disorders Network. As of 27 February 2016, 433 total/472 targeted participants had been randomized. We describe the ESTEEM protocol and our methods for reaching consensus for methodological challenges in designing a trial for MUI by maintaining the patient perspective at the core of key decisions. This trial will provide information that can directly impact patient care and clinical decision making.

  16. Implications of clinical trial design on sample size requirements.

    PubMed

    Leon, Andrew C

    2008-07-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
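
    Both problems can be made concrete with the standard two-arm sample size formula: measurement unreliability attenuates the observed standardized effect by roughly the square root of the reliability, and alpha-splitting across multiple outcomes raises the critical value. A back-of-the-envelope sketch with illustrative numbers (textbook formulas, not calculations from the article):

      import math
      from scipy.stats import norm

      def n_per_arm(d, alpha=0.05, power=0.8):
          """Two-arm normal-approximation sample size per arm (sketch)."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return math.ceil(2 * (z / d) ** 2)

      d = 0.5                            # true standardized treatment effect
      for rel in (1.0, 0.8, 0.6):        # outcome reliability
          d_obs = d * math.sqrt(rel)     # unreliability attenuates the effect
          print(f"reliability {rel}: n = {n_per_arm(d_obs)} per arm")

      # three co-primary outcomes with Bonferroni alpha-splitting also inflate n
      print("3 outcomes:", n_per_arm(d, alpha=0.05 / 3), "per arm")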

  17. Assessing the Generalizability of Randomized Trial Results to Target Populations

    PubMed Central

    Stuart, Elizabeth A.; Bradshaw, Catherine P.; Leaf, Philip J.

    2014-01-01

    Recent years have seen increasing interest in and attention to evidence-based practices, where the “evidence” generally comes from well-conducted randomized trials. However, while those trials yield accurate estimates of the effect of the intervention for the participants in the trial (known as “internal validity”), they do not always yield relevant information about the effects in a particular target population (known as “external validity”). This may be due to a lack of specification of a target population when designing the trial, difficulties recruiting a sample that is representative of a pre-specified target population, or to interest in considering a target population somewhat different from the population directly targeted by the trial. This paper first provides an overview of existing design and analysis methods for assessing and enhancing the ability of a randomized trial to estimate treatment effects in a target population. It then provides a case study using one particular method, which weights the subjects in a randomized trial to match the population on a set of observed characteristics. The case study uses data from a randomized trial of School-wide Positive Behavioral Interventions and Supports (PBIS); our interest is in generalizing the results to the state of Maryland. In the case of PBIS, after weighting, estimated effects in the target population were similar to those observed in the randomized trial. The paper illustrates that statistical methods can be used to assess and enhance the external validity of randomized trials, making the results more applicable to policy and clinical questions. However, there are also many open research questions; future research should focus on questions of treatment effect heterogeneity and further developing these methods for enhancing external validity. Researchers should think carefully about the external validity of randomized trials and be cautious about extrapolating results to specific populations unless they are confident of the similarity between the trial sample and that target population. PMID:25307417

  18. Assessing the generalizability of randomized trial results to target populations.

    PubMed

    Stuart, Elizabeth A; Bradshaw, Catherine P; Leaf, Philip J

    2015-04-01

    Recent years have seen increasing interest in and attention to evidence-based practices, where the "evidence" generally comes from well-conducted randomized trials. However, while those trials yield accurate estimates of the effect of the intervention for the participants in the trial (known as "internal validity"), they do not always yield relevant information about the effects in a particular target population (known as "external validity"). This may be due to a lack of specification of a target population when designing the trial, difficulties recruiting a sample that is representative of a prespecified target population, or to interest in considering a target population somewhat different from the population directly targeted by the trial. This paper first provides an overview of existing design and analysis methods for assessing and enhancing the ability of a randomized trial to estimate treatment effects in a target population. It then provides a case study using one particular method, which weights the subjects in a randomized trial to match the population on a set of observed characteristics. The case study uses data from a randomized trial of school-wide positive behavioral interventions and supports (PBIS); our interest is in generalizing the results to the state of Maryland. In the case of PBIS, after weighting, estimated effects in the target population were similar to those observed in the randomized trial. The paper illustrates that statistical methods can be used to assess and enhance the external validity of randomized trials, making the results more applicable to policy and clinical questions. However, there are also many open research questions; future research should focus on questions of treatment effect heterogeneity and further developing these methods for enhancing external validity. Researchers should think carefully about the external validity of randomized trials and be cautious about extrapolating results to specific populations unless they are confident of the similarity between the trial sample and that target population.
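
    One common implementation of the weighting method used in the case study above is inverse-odds-of-selection weighting: model trial membership given covariates, then weight each trial subject by the odds of not being in the trial. The sketch below uses synthetic data and scikit-learn; the variable names and the simple logistic model are assumptions, not the authors' exact specification:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(4)
      # toy covariates for trial participants (S=1) and the target population
      # (S=0), standing in for, e.g., school characteristics
      X_trial = rng.normal(0.5, 1.0, size=(300, 3))
      X_pop = rng.normal(0.0, 1.0, size=(3000, 3))

      X = np.vstack([X_trial, X_pop])
      S = np.r_[np.ones(300), np.zeros(3000)]
      e = LogisticRegression().fit(X, S).predict_proba(X_trial)[:, 1]

      # weight trial subjects by the odds of NOT being in the trial so the
      # weighted trial sample matches the target population on X
      w = (1 - e) / e
      print("weighted trial means:", np.average(X_trial, axis=0, weights=w))
      print("target population means:", X_pop.mean(axis=0))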

  19. PRELIMINARY REPORT ON NATIONWIDE STUDY OF DRINKING WATER AND CARDIOVASCULAR DISEASES

    EPA Science Inventory

    This study was designed to further investigate the association(s) of cardiovascular diseases and drinking water constituents. A sample of 4200 adults was randomly selected from 35 geographic areas to represent the civilian noninstitutionalized population of the contiguous United...

  20. Sample size requirements for the design of reliability studies: precision consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies that have not previously been discussed in the literature.
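
    To make the interval-estimation target concrete: for a balanced one-way random-effects design, the ICC and its exact F-based confidence interval can be computed as below. This is the standard textbook construction, not the article's sample size algorithm (which searches for the smallest design meeting expected-width and assurance criteria); the simulated data are illustrative:

      import numpy as np
      from scipy.stats import f

      def icc_oneway_ci(data, alpha=0.05):
          """ICC from a balanced one-way random-effects ANOVA (k groups of n)
          with the exact F-based confidence interval."""
          data = np.asarray(data)                  # shape (k, n)
          k, n = data.shape
          msb = n * ((data.mean(axis=1) - data.mean()) ** 2).sum() / (k - 1)
          msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
          F = msb / msw
          icc = (F - 1) / (F + n - 1)
          fu = f.ppf(1 - alpha / 2, k - 1, k * (n - 1))
          fl = f.ppf(alpha / 2, k - 1, k * (n - 1))
          lo = (F / fu - 1) / (F / fu + n - 1)
          hi = (F / fl - 1) / (F / fl + n - 1)
          return icc, (lo, hi)

      rng = np.random.default_rng(5)               # simulated data, true ICC = 0.5
      ratings = rng.normal(0, 1, (30, 1)) + rng.normal(0, 1, (30, 4))
      print(icc_oneway_ci(ratings))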

  1. Optimizing adaptive design for Phase 2 dose-finding trials incorporating long-term success and financial considerations: A case study for neuropathic pain.

    PubMed

    Gao, Jingjing; Nangia, Narinder; Jia, Jia; Bolognese, James; Bhattacharyya, Jaydeep; Patel, Nitin

    2017-06-01

    In this paper, we propose an adaptive randomization design for Phase 2 dose-finding trials to optimize Net Present Value (NPV) for an experimental drug. We replace the traditional fixed sample size design (Patel, et al., 2012) with this new design to see whether the NPV from the original paper can be improved. Comparison of the proposed design with the previous design is made via simulations using a hypothetical example based on a Diabetic Neuropathic Pain Study. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Quantifying the impact of time-varying baseline risk adjustment in the self-controlled risk interval design.

    PubMed

    Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei

    2015-12-01

    The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Improving the evidence base in palliative care to inform practice and policy: thinking outside the box.

    PubMed

    Aoun, Samar M; Nekolaichuk, Cheryl

    2014-12-01

    The adoption of evidence-based hierarchies and research methods from other disciplines may not completely translate to complex palliative care settings. The heterogeneity of the palliative care population, complexity of clinical presentations, and fluctuating health states present significant research challenges. The aim of this narrative review was to explore the debate about the use of current evidence-based approaches for conducting research, such as randomized controlled trials and other study designs, in palliative care, and more specifically to (1) describe key myths about palliative care research; (2) highlight substantive challenges of conducting palliative care research, using case illustrations; and (3) propose specific strategies to address some of these challenges. Myths about research in palliative care revolve around evidence hierarchies, sample heterogeneity, random assignment, participant burden, and measurement issues. Challenges arise because of the complex physical, psychological, existential, and spiritual problems faced by patients, families, and service providers. These challenges can be organized according to six general domains: patient, system/organization, context/setting, study design, research team, and ethics. A number of approaches for dealing with challenges in conducting research fall into five separate domains: study design, sampling, conceptual, statistical, and measures and outcomes. Although randomized controlled trials have their place whenever possible, alternative designs may offer more feasible research protocols that can be successfully implemented in palliative care. Therefore, this article highlights "outside the box" approaches that would benefit both clinicians and researchers in the palliative care field. Ultimately, the selection of research designs is dependent on a clearly articulated research question, which drives the research process. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  4. Validation of optical codes based on 3D nanostructures

    NASA Astrophysics Data System (ADS)

    Carnicer, Artur; Javidi, Bahram

    2017-05-01

    Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, information cannot be accessed by simple visual inspection. Phase masks can easily be implemented in practice by attaching cellophane tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask plus a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest neighbors and random forest machine learning algorithms.
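
    The two modeling ingredients named in the abstract, angular-spectrum propagation and a diffuser treated as a random phase mask plus an amplitude blur, can be sketched as follows. This is a hedged illustration, not the authors' code; the geometry (256-pixel grid, 10 µm pitch, 633 nm, millimetre-scale distances) and the blur width are purely illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def angular_spectrum(field, wavelength, dx, z):
            # Propagate a complex field a distance z via the angular spectrum of plane waves.
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.exp(1j * kz * z) * (arg > 0)          # evanescent components dropped
            return np.fft.ifft2(np.fft.fft2(field) * H)

        rng = np.random.default_rng(0)
        n, dx, lam = 256, 10e-6, 633e-9                  # illustrative geometry
        plaintext = np.ones((n, n))                      # toy plain-text amplitude
        diffuser = np.exp(2j * np.pi * rng.random((n, n)))   # random phase mask

        field = angular_spectrum(plaintext * diffuser, lam, dx, z=1e-3)   # through glass
        field = gaussian_filter(np.abs(field), 2.0) * np.exp(1j * np.angle(field))  # amplitude blur
        intensity = np.abs(angular_spectrum(field, lam, dx, z=4e-3)) ** 2  # speckle-like output

    Features of the resulting intensity pattern are the kind of input one would feed to the k-nearest neighbors or random forest classifiers mentioned above.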

  5. A post hoc evaluation of a sample size re-estimation in the Secondary Prevention of Small Subcortical Strokes study.

    PubMed

    McClure, Leslie A; Szychowski, Jeff M; Benavente, Oscar; Hart, Robert G; Coffey, Christopher S

    2016-10-01

    The use of adaptive designs has been increasing in randomized clinical trials. Sample size re-estimation is a type of adaptation in which nuisance parameters are estimated at an interim point in the trial and the sample size re-computed based on these estimates. The Secondary Prevention of Small Subcortical Strokes study was a randomized clinical trial assessing the impact of single- versus dual-antiplatelet therapy and control of systolic blood pressure to a higher (130-149 mmHg) versus lower (<130 mmHg) target on recurrent stroke risk in a two-by-two factorial design. A sample size re-estimation was performed during the Secondary Prevention of Small Subcortical Strokes study, resulting in an increase from the planned sample size of 2500 to 3020, and we sought to determine the impact of the sample size re-estimation on the study results. We assessed the results of the primary efficacy and safety analyses with the full 3020 patients and compared them to the results that would have been observed had randomization ended with 2500 patients. The primary efficacy outcome considered was recurrent stroke, and the primary safety outcomes were major bleeds and death. We computed incidence rates for the efficacy and safety outcomes and used Cox proportional hazards models to examine the hazard ratios for each of the two treatment interventions (i.e. the antiplatelet and blood pressure interventions). In the antiplatelet intervention, the hazard ratio was not materially modified by increasing the sample size, nor did the conclusions regarding the efficacy of mono- versus dual-therapy change: there was no difference in the effect of dual- versus monotherapy on the risk of recurrent stroke (n = 3020 HR (95% confidence interval): 0.92 (0.72, 1.2), p = 0.48; n = 2500 HR (95% confidence interval): 1.0 (0.78, 1.3), p = 0.85). With respect to the blood pressure intervention, increasing the sample size resulted in less certainty in the results, as the hazard ratio for the higher versus lower systolic blood pressure target approached, but did not achieve, statistical significance with the larger sample (n = 3020 HR (95% confidence interval): 0.81 (0.63, 1.0), p = 0.089; n = 2500 HR (95% confidence interval): 0.89 (0.68, 1.17), p = 0.40). The results from the safety analyses were similar with 3020 and 2500 patients for both study interventions. Other trial-related factors, such as contracts, finances, and study management, were impacted as well. Adaptive designs can have benefits in randomized clinical trials, but do not always result in significant findings. The impact of adaptive designs should be measured in terms of both trial results and practical issues related to trial management. More post hoc analyses of study adaptations will lead to better understanding of the balance between the benefits and the costs. © The Author(s) 2016.

  6. Computerized stratified random site-selection approaches for design of a ground-water-quality sampling network

    USGS Publications Warehouse

    Scott, J.C.

    1990-01-01

    Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
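
    The category-wise selection logic the report describes reduces to a short routine: tag each candidate site with the spatial characteristic of the areal subset it falls in, then draw randomly within each category. A minimal sketch in Python rather than the original GIS subroutines; the site identifiers and categories below are hypothetical.

        import random

        def stratified_site_selection(sites, n_per_category, seed=0):
            # sites: list of (site_id, category) pairs, each site tagged with the
            # spatial characteristic (e.g., land use) of the areal subset it falls in.
            # n_per_category: {category: number of sites to draw from that stratum}.
            rng = random.Random(seed)
            selected = []
            for category, n in n_per_category.items():
                pool = [sid for sid, cat in sites if cat == category]
                selected.extend(rng.sample(pool, n))   # random draw within one stratum
            return selected

        wells = [("W1", "urban"), ("W2", "urban"), ("W3", "cropland"),
                 ("W4", "cropland"), ("W5", "cropland"), ("W6", "rangeland")]
        print(stratified_site_selection(wells, {"urban": 1, "cropland": 2, "rangeland": 1}))

    The same routine covers both population types in the report: a fixed list of existing wells, or an equally spaced grid of potential sites generated beforehand.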

  7. Pseudo-Random Sequence Modifications for Ion Mobility Orthogonal Time of Flight Mass Spectrometry

    PubMed Central

    Clowers, Brian H.; Belov, Mikhail E.; Prior, David C.; Danielson, William F.; Ibrahim, Yehia; Smith, Richard D.

    2008-01-01

    Due to the inherently low duty cycle of ion mobility spectrometry (IMS) experiments that sample from continuous ion sources, a range of experimental advances have been developed to maximize ion utilization efficiency. The use of ion trapping mechanisms prior to the ion mobility drift tube has demonstrated significant gains over discrete sampling from continuous sources; however, these technologies have traditionally relied upon signal averaging to attain analytically relevant signal-to-noise ratios (SNR). Multiplexed (MP) techniques based upon the Hadamard transform offer an alternative experimental approach by which ion utilization efficiency can be elevated to ~50%. Recently, our research group demonstrated a unique multiplexed ion mobility time-of-flight (MP-IMS-TOF) approach that incorporates ion trapping and can extend ion utilization efficiency beyond 50%. However, the spectral reconstruction of the multiplexed signal using this experimental approach requires the use of sample-specific weighing designs. Though general weighing designs have been shown to significantly enhance ion utilization efficiency using this MP technique, such weighing designs cannot be applied to all samples. By modifying both the ion funnel trap and the pseudo-random sequence (PRS) used for the MP experiment, we have eliminated the need for complex weighing matrices. For both simple and complex mixtures, SNR enhancements of up to 13 were routinely observed as compared to the SA-IMS-TOF experiment. In addition, this new class of PRS provides a twofold enhancement in ion throughput compared to the traditional HT-IMS experiment. PMID:18311942
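
    The paper's modified PRS is specific to its instrument, but the generic Hadamard-style multiplexing it builds on can be sketched: derive a maximal-length pseudo-random sequence from a linear feedback shift register, use its cyclic shifts as the weighing matrix, and invert that matrix to reconstruct the spectrum. The taps, sizes, and toy spectrum below are illustrative assumptions, not values from the paper.

        import numpy as np

        def m_sequence(nbits=7, taps=(7, 6)):
            # Maximal-length pseudo-random binary sequence from a Fibonacci LFSR
            # (x^7 + x^6 + 1 is primitive, so the period is 2**7 - 1 = 127).
            state = [1] * nbits
            out = []
            for _ in range(2 ** nbits - 1):
                out.append(state[-1])
                feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
                state = [feedback] + state[:-1]
            return np.array(out)

        prs = m_sequence()
        n = len(prs)                                        # 127 gate states per cycle
        S = np.array([np.roll(prs, -i) for i in range(n)])  # circulant simplex (S) matrix

        x = np.zeros(n)
        x[[20, 45, 90]] = [5.0, 2.0, 8.0]                   # sparse toy mobility spectrum
        rng = np.random.default_rng(0)
        y = S @ x + rng.normal(0, 0.05, n)                  # multiplexed signal, ~50% duty cycle
        x_hat = np.linalg.solve(S, y)                       # spectral reconstruction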

  8. Two-sample binary phase 2 trials with low type I error and low sample size

    PubMed Central

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A.

    2017-01-01

    We address the design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the Experimental arm, E, sufficiently exceeds C, the number among Controls. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
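
    A single-stage version of the combined rule is easy to evaluate exactly by enumerating the two independent binomial arms. The sketch below computes the rejection probability under any pair of success rates, giving type I error at the null and power at the alternative; the values of n, m, and r are illustrative, not the paper's published designs.

        import numpy as np
        from scipy.stats import binom

        def reject_prob(p_e, p_c, n_e, n_c, m, r):
            # P(E >= m and E - C > r) with E ~ Bin(n_e, p_e), C ~ Bin(n_c, p_c):
            # a single-stage version of the combined one-sample / two-sample rule.
            pmf_e = binom.pmf(np.arange(n_e + 1), n_e, p_e)
            cdf_c = binom.cdf(np.arange(n_c + 1), n_c, p_c)
            prob = 0.0
            for e in range(m, n_e + 1):
                c_max = min(e - r - 1, n_c)        # E - C > r  <=>  C <= e - r - 1
                if c_max >= 0:
                    prob += pmf_e[e] * cdf_c[c_max]
            return prob

        # 2:1 randomization (40 experimental vs 20 control); thresholds m, r illustrative.
        print(reject_prob(0.20, 0.20, 40, 20, m=14, r=4))   # type I error at the null
        print(reject_prob(0.40, 0.20, 40, 20, m=14, r=4))   # power at the alternative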

  9. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhehui; Iaroshenko, O.; Li, S.

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.

  10. A randomized clinical trial aimed at preventing poor psychosocial and glycemic outcomes in teens with type 1 diabetes (T1D)

    PubMed Central

    Weissberg-Benchell, Jill; Rausch, Joseph; Iturralde, Esti; Jedraszko, Aneta; Hood, Korey

    2016-01-01

    Adolescents with type 1 diabetes have an increased risk for a variety of emotional and behavioral challenges as well as negative diabetes outcomes. This study was designed to compare the effectiveness of a depression-prevention, resilience promotion program with an advanced diabetes education program. Each program consisted of 9 group-based sessions. There were 264 adolescents enrolled in this multi-site randomized clinical trial. The primary outcomes were depressive symptoms and glycemic control; secondary outcomes included resilience skills, diabetes management and adherence, and diabetes-specific distress. The goal of the present paper is to describe the study design, the intervention, and the baseline characteristics of the sample. Preliminary data suggests that enrollment, randomization and retention were successful. Longitudinal follow-up and examination of mechanisms of action as they relate to psychosocial and glycemic outcomes will be explored in the future. PMID:27267154

  11. Improving health outcomes for youth living with the human immunodeficiency virus: a multisite randomized trial of a motivational intervention targeting multiple risk behaviors.

    PubMed

    Naar-King, Sylvie; Parsons, Jeffrey T; Murphy, Debra A; Chen, Xinguang; Harris, D Robert; Belzer, Marvin E

    2009-12-01

    To determine if Healthy Choices, a motivational interviewing intervention targeting multiple risk behaviors, improved human immunodeficiency virus (HIV) viral load. A randomized, 2-group repeated measures design with analysis of data from baseline and 6- and 9-month follow-up collected from 2005 to 2007. Five US adolescent medicine HIV clinics. A convenience sample with at least 1 of 3 risk behaviors (nonadherence to HIV medications, substance abuse, and unprotected sex) was enrolled. The sample was aged 16 to 24 years and primarily African American. Of the 205 enrolled, 19 did not complete baseline data collections, for a final sample size of 186. Young people living with HIV were randomized to the intervention plus specialty care (n = 94) or specialty care alone (n = 92). The 3- and 6-month follow-up rates, respectively, were 86% and 82% for the intervention group and 81% and 73% for controls. Intervention: Healthy Choices was a 4-session individual clinic-based motivational interviewing intervention delivered during a 10-week period. Motivational interviewing is a method of communication designed to elicit and reinforce intrinsic motivation for change. Outcome measure: plasma viral load. Youth randomized to Healthy Choices showed a significant decline in viral load at 6 months postintervention compared with youth in the control condition (beta = -0.36, t = -2.15, P = .03), with those prescribed antiretroviral medications showing the lowest viral loads. Differences were no longer significant at 9 months. A motivational interviewing intervention targeting multiple risk behaviors resulted in short-term improvements in viral load for youth living with HIV. Trial Registration clinicaltrials.gov Identifier: NCT00103532.

  12. Randomized controlled trials and neuro-oncology: should alternative designs be considered?

    PubMed

    Mansouri, Alireza; Shin, Samuel; Cooper, Benjamin; Srivastava, Archita; Bhandari, Mohit; Kondziolka, Douglas

    2015-09-01

    Deficiencies in the design and reporting of randomized controlled trials (RCTs) hinder interpretability and critical appraisal. The reporting quality of recent RCTs in neuro-oncology was analyzed to assess adequacy of design and reporting. The MEDLINE and EMBASE databases were searched to identify non-surgical RCTs (years 2005-2014, inclusive). The CONSORT and Jadad scales were used to assess the quality of design/reporting. Studies published in 2005-2010 were compared as a cohort against studies published in 2011-2014, in terms of general characteristics and reporting quality. A PRECIS-based scale was used to designate studies on the pragmatic-explanatory continuum. Spearman's test was used to assess correlations. Regression analysis was used to assess associations. Overall, 68 RCTs were identified. Studies were often chemotherapy-based (n = 41 studies), focusing upon high-grade gliomas (46 %) and metastases (41 %) as the top pathologies. Multi-center trials (71 %) were frequent. The overall median CONSORT and Jadad scores were 34.5 (maximum 44) and 2 (maximum 5), respectively; these scores were similar in radiation and chemotherapy-based trials. Major areas of deficiency pertained to allocation concealment, implementation of methods, and blinding, whereby less than 20 % of articles fulfilled all criteria. Description of intervention, random sequence generation, and the details regarding recruitment were also deficient; less than 50 % of studies fulfilled all criteria. Description of sample size calculations and blinding improved in later published cohorts. Journal impact factor was significantly associated with higher quality (p = 0.04). Large academic consortia, multi-center designs, ITT analysis, collaboration with biostatisticians, larger sample sizes, and studies with pragmatic objectives were more likely to achieve positive primary outcomes on univariate analysis; none of these variables were significant on multivariate analysis. Deficiencies in the quality of design/reporting of RCTs in neuro-oncology persist. Quality improvement is necessary. Alternative trial designs should be considered.

  13. The Level of High-Order Thinking and Its Relation to Quality of Life among Students at Ajloun University College

    ERIC Educational Resources Information Center

    Al Rabadi, Wail Minwer; Salem, Rifqa Khleif

    2018-01-01

    The study was designed to identify the effect of high-order thinking on the quality of life among Ajloun University students. The study used the associative method. The randomly selected sample consisted of 147 students from Ajloun University College. The study used two tools; the two measures were applied to the sample of the current study after…

  14. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    PubMed

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.

  15. TOWARDS USING STABLE SPERMATOZOAL RNAS FOR PROGNOSTIC ASSESSMENT OF MALE FACTOR FERTILITY

    EPA Science Inventory

    Objective: To establish the stability of spermatozoal RNAs as a means to validate their use as a male fertility marker. Design: Semen samples were randomly selected for 1 of 3 cryopreservation treatments. Setting: An academic research environment. Patient(s): Men aged...

  16. Randomized Controlled Trials in Music Therapy: Guidelines for Design and Implementation.

    PubMed

    Bradt, Joke

    2012-01-01

    Evidence from randomized controlled trials (RCTs) plays a powerful role in today's healthcare industry. At the same time, it is important that multiple types of evidence contribute to music therapy's knowledge base and that the dialogue of clinical effectiveness in music therapy is not dominated by the biomedical hierarchical model of evidence-based practice. Whether or not one agrees with the hierarchical model of evidence in the current healthcare climate, RCTs can contribute important knowledge to our field. Therefore, it is important that music therapists are prepared to design trials that meet current methodological standards and, equally important, are able to respond appropriately to those design aspects that may not be feasible in music therapy research. This article aims to provide practical guidelines to music therapy researchers for the design and implementation of RCTs, as well as to enable music therapists to be well-informed consumers of RCT evidence. It reviews key design aspects of RCTs and discusses how to best implement these standards in music therapy trials. A systematic presentation of basic randomization methods, allocation concealment strategies, issues related to blinding in music therapy trials and strategies for implementation, the use of treatment manuals, types of control groups, outcome selection, and sample size computation is provided. Despite the challenges of meeting all key design demands typical of an RCT, it is possible to design rigorous music therapy RCTs that accurately estimate music therapy treatment benefits.
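
    As one concrete instance of the basic randomization methods surveyed above, a permuted-block allocation schedule can be generated in a few lines. This sketch assumes 1:1 allocation with a fixed block size of 4; the arm labels and seed are illustrative.

        import random

        def permuted_block_schedule(n, block_size=4, arms=("music therapy", "control"), seed=42):
            # Allocation schedule with balance enforced within every block of 4 (1:1 here).
            rng = random.Random(seed)
            per_arm = block_size // len(arms)
            schedule = []
            while len(schedule) < n:
                block = [arm for arm in arms for _ in range(per_arm)]
                rng.shuffle(block)               # random order within the block
                schedule.extend(block)
            return schedule[:n]

        print(permuted_block_schedule(10))

    In practice the schedule would be held by a party independent of recruitment, which is the allocation-concealment point the article stresses.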

  17. Methodological reporting of randomized trials in five leading Chinese nursing journals.

    PubMed

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting complying with CONSORT and explore associated trial level variables in the Chinese nursing care field. In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomized methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34 ± 0.97 (Mean ± SD). No RCT reported descriptions and changes in "trial design," changes in "outcomes" and "implementation," or descriptions of the similarity of interventions for "blinding." Poor reporting was found in detailing the "settings of participants" (13.1%), "type of randomization sequence generation" (1.8%), calculation methods of "sample size" (0.4%), explanation of any interim analyses and stopping guidelines for "sample size" (0.3%), "allocation concealment mechanism" (0.3%), additional analyses in "statistical methods" (2.1%), and targeted subjects and methods of "blinding" (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of "participants," "interventions," and definitions of the "outcomes" and "statistical methods." The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods.

  18. A design methodology for nonlinear systems containing parameter uncertainty

    NASA Technical Reports Server (NTRS)

    Young, G. E.; Auslander, D. M.

    1983-01-01

    In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values which maximize the probability of those performance indices which simultaneously satisfy design criteria in spite of the uncertainty due to k nonadjustable parameters.
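
    The methodology can be sketched as two nested loops: an outer adaptive random search over the j adjustable parameters and an inner Monte Carlo sample over the k nonadjustable ones. The toy performance index, spec limit, and contraction factor below are assumptions for illustration, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)
        SPEC = 0.5                                    # design criterion: J < SPEC

        def performance(x, k):
            # Toy performance index: x holds the adjustable parameters,
            # k the uncertain nonadjustable parameter (stand-in for a real model).
            return (x[0] * k - 2.0) ** 2 + x[1] ** 2

        def success_probability(x, n_mc=2000):
            # Parameter-space sampling over the nonadjustable parameter.
            k = rng.normal(1.0, 0.2, n_mc)
            return np.mean(performance(x, k) < SPEC)

        def adaptive_random_search(bounds, iters=300):
            lo, hi = np.array(bounds).T
            center, width = (lo + hi) / 2, hi - lo
            best_x, best_p = center, success_probability(center)
            for _ in range(iters):
                x = np.clip(center + (rng.random(len(lo)) - 0.5) * width, lo, hi)
                p = success_probability(x)
                if p > best_p:
                    best_x, best_p = x, p
                    center, width = x, width * 0.9    # contract around the incumbent
            return best_x, best_p

        print(adaptive_random_search([(0.5, 4.0), (-1.0, 1.0)]))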

  19. Benchmarking protein classification algorithms via supervised cross-validation.

    PubMed

    Kertész-Farkas, Attila; Dhir, Somdutta; Sonego, Paolo; Pacurar, Mircea; Netoteia, Sergiu; Nijveen, Harm; Kuzniar, Arnold; Leunissen, Jack A M; Kocsor, András; Pongor, Sándor

    2008-04-24

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold, leave-one-out, etc.) may not give reliable estimates on how an algorithm will generalize to novel, distantly related subtypes of the known protein classes. Supervised cross-validation, i.e., selection of test and train sets according to the known subtypes within a database has been successfully used earlier in conjunction with the SCOP database. Our goal was to extend this principle to other databases and to design standardized benchmark datasets for protein classification. Hierarchical classification trees of protein categories provide a simple and general framework for designing supervised cross-validation strategies for protein classification. Benchmark datasets can be designed at various levels of the concept hierarchy using a simple graph-theoretic distance. A combination of supervised and random sampling was selected to construct reduced size model datasets, suitable for algorithm comparison. Over 3000 new classification tasks were added to our recently established protein classification benchmark collection that currently includes protein sequence (including protein domains and entire proteins), protein structure and reading frame DNA sequence data. We carried out an extensive evaluation based on various machine-learning algorithms such as nearest neighbor, support vector machines, artificial neural networks, random forests and logistic regression, used in conjunction with comparison algorithms, BLAST, Smith-Waterman, Needleman-Wunsch, as well as 3D comparison methods DALI and PRIDE. The resulting datasets provide lower, and in our opinion more realistic estimates of the classifier performance than do random cross-validation schemes. A combination of supervised and random sampling was used to construct model datasets, suitable for algorithm comparison.
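
    The core of supervised cross-validation, holding out whole known subtypes rather than random folds, can be sketched briefly; the protein and family labels below are placeholders, not benchmark data.

        from collections import defaultdict

        def leave_subtype_out_splits(items, subtypes):
            # Supervised cross-validation: hold out one whole known subtype at a
            # time, so the test set is only distantly related to the training set
            # (mimics generalization to novel subtypes of known classes).
            by_subtype = defaultdict(list)
            for item, sub in zip(items, subtypes):
                by_subtype[sub].append(item)
            for held_out in sorted(by_subtype):
                train = [x for s, xs in by_subtype.items() if s != held_out for x in xs]
                yield train, by_subtype[held_out]

        proteins = ["p1", "p2", "p3", "p4", "p5"]
        families = ["globin", "globin", "kinase", "kinase", "kinase"]
        for train, test in leave_subtype_out_splits(proteins, families):
            print(train, "->", test)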

  20. External validity of randomized controlled trials in older adults, a systematic review.

    PubMed

    van Deudekom, Floor J; Postmus, Iris; van der Ham, Danielle J; Pothof, Alexander B; Broekhuizen, Karen; Blauw, Gerard J; Mooijaart, Simon P

    2017-01-01

    To critically assess the external validity of randomized controlled trials (RCTs), it is important to know which older adults have been enrolled in the trials. The aim of this systematic review is to study what proportion of trials specifically designed for older patients report on somatic status, physical and mental functioning, social environment and frailty in the patient characteristics. PubMed was searched for articles published in 2012 and only RCTs were included. Articles were further excluded if not conducted with humans or if only secondary analyses were reported. A random sample of 10% was drawn. The current review analyzed this random sample and further selected trials when the reported mean age was ≥ 60 years. We extracted geriatric assessments from the population descriptives or the inclusion and exclusion criteria. In total, 1396 trials were analyzed and 300 trials included. The median of the reported mean age was 66 (IQR 63-70) and the median percentage of men in the trials was 60 (IQR 45-72). In 34% of the RCTs specifically designed for older patients, somatic status, physical and mental functioning, social environment or frailty were reported in the population descriptives or the inclusion and exclusion criteria. Physical and mental functioning were reported most frequently (22% and 14%). When selecting RCTs with a mean age of 70 or 80 years, the reporting of geriatric assessments in the patient characteristics was 46% and 85% respectively, but these trials represent only 5% and 1% of the total. Somatic status, physical and mental functioning, social environment and frailty are underreported even in RCTs specifically designed for older patients published in 2012. Therefore, it is unclear to clinicians to which older patients the results can be applied. We recommend systematically and transparently reporting these relevant characteristics of older participants included in RCTs.

  1. Treatment of Middle East Respiratory Syndrome with a combination of lopinavir-ritonavir and interferon-β1b (MIRACLE trial): study protocol for a randomized controlled trial.

    PubMed

    Arabi, Yaseen M; Alothman, Adel; Balkhy, Hanan H; Al-Dawood, Abdulaziz; AlJohani, Sameera; Al Harbi, Shmeylan; Kojan, Suleiman; Al Jeraisy, Majed; Deeb, Ahmad M; Assiri, Abdullah M; Al-Hameed, Fahad; AlSaedi, Asim; Mandourah, Yasser; Almekhlafi, Ghaleb A; Sherbeeni, Nisreen Murad; Elzein, Fatehi Elnour; Memon, Javed; Taha, Yusri; Almotairi, Abdullah; Maghrabi, Khalid A; Qushmaq, Ismael; Al Bshabshe, Ali; Kharaba, Ayman; Shalhoub, Sarah; Jose, Jesna; Fowler, Robert A; Hayden, Frederick G; Hussein, Mohamed A

    2018-01-30

    More than 5 years have passed since the first case of Middle East Respiratory Syndrome coronavirus infection (MERS-CoV) was recorded, yet no specific treatment has been investigated in randomized clinical trials. Results from in vitro and animal studies suggest that a combination of lopinavir/ritonavir and interferon-β1b (IFN-β1b) may be effective against MERS-CoV. The aim of this study is to investigate the efficacy of treatment with a combination of lopinavir/ritonavir and recombinant IFN-β1b provided with standard supportive care, compared to treatment with placebo provided with standard supportive care, in patients with laboratory-confirmed MERS requiring hospital admission. The protocol is prepared in accordance with the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guidelines. Hospitalized adult patients with laboratory-confirmed MERS will be enrolled in this recursive, two-stage, group sequential, multicenter, placebo-controlled, double-blind randomized controlled trial. The trial is initially designed to include 2 two-stage components. The first two-stage component is designed to adjust sample size and determine futility stopping, but not efficacy stopping. The second two-stage component is designed to determine efficacy stopping and possibly readjustment of sample size. The primary outcome is 90-day mortality. This will be the first randomized controlled trial of a potential treatment for MERS. The study is sponsored by King Abdullah International Medical Research Center, Riyadh, Saudi Arabia. Enrollment for this study began in November 2016, and thirteen patients had been enrolled as of 24 January 2018. ClinicalTrials.gov ID: NCT02845843. Registered on 27 July 2016.

  2. Sample size determination for GEE analyses of stepped wedge cluster randomized trials.

    PubMed

    Li, Fan; Turner, Elizabeth L; Preisser, John S

    2018-06-19

    In stepped wedge cluster randomized trials, intact clusters of individuals switch from control to intervention from a randomly assigned period onwards. Such trials are becoming increasingly popular in health services research. When a closed cohort is recruited from each cluster for longitudinal follow-up, proper sample size calculation should account for three distinct types of intraclass correlations: the within-period, the inter-period, and the within-individual correlations. Setting the latter two correlation parameters to be equal accommodates cross-sectional designs. We propose sample size procedures for continuous and binary responses within the framework of generalized estimating equations that employ a block exchangeable within-cluster correlation structure defined from the distinct correlation types. For continuous responses, we show that the intraclass correlations affect power only through two eigenvalues of the correlation matrix. We demonstrate that analytical power agrees well with simulated power for as few as eight clusters, when data are analyzed using bias-corrected estimating equations for the correlation parameters concurrently with a bias-corrected sandwich variance estimator. © 2018, The International Biometric Society.
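
    The block exchangeable structure is easy to build and inspect numerically. The sketch below is illustrative, not the authors' code: it assembles the within-cluster correlation matrix from the three intraclass correlations for a closed cohort of N individuals followed over T periods, and the eigenvalue printout shows the small set of distinct eigenvalues through which, per the abstract, the correlations affect power for continuous responses.

        import numpy as np

        def block_exchangeable(T, N, a0, a1, a2):
            # Within-cluster correlation for a closed cohort: a0 = within-period,
            # a1 = inter-period (different individuals), a2 = within-individual.
            # Index layout: position t*N + i for individual i in period t.
            R = np.full((T * N, T * N), a1)
            for t in range(T):                         # same period, different individuals
                R[t * N:(t + 1) * N, t * N:(t + 1) * N] = a0
            for i in range(N):                         # same individual, different periods
                idx = np.arange(T) * N + i
                R[np.ix_(idx, idx)] = a2
            np.fill_diagonal(R, 1.0)
            return R

        R = block_exchangeable(T=4, N=10, a0=0.05, a1=0.02, a2=0.40)
        print(np.unique(np.round(np.linalg.eigvalsh(R), 6)))  # few distinct eigenvalues

    Setting a2 equal to a1 collapses the structure to the cross-sectional case mentioned in the abstract.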

  3. Are quantitative trait-dependent sampling designs cost-effective for analysis of rare and common variants?

    PubMed

    Yilmaz, Yildiz E; Bull, Shelley B

    2011-11-29

    Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods which thus generalizes to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and worthwhile reduction of sequencing costs.
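
    The inverse-probability-weighting idea generalizes readily beyond the workshop data. In the hedged sketch below, simulated genotypes and traits stand in for the real cohort: individuals with extreme phenotypes are selected with higher probability (everyone retains a nonzero chance), and a weighted least-squares fit recovers the variant effect for the source population.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 5000
        g = rng.binomial(2, 0.3, N)                   # genotype dosage at one variant
        q = 0.05 * g + rng.normal(0, 1, N)            # quantitative trait, small true effect
        p_sel = np.where(np.abs(q - q.mean()) > q.std(), 0.9, 0.3)   # extremes favoured,
        sel = rng.random(N) < p_sel                   # but everyone is selectable
        w = 1.0 / p_sel[sel]                          # inverse probability weights
        X = np.column_stack([np.ones(sel.sum()), g[sel]])
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * q[sel]))  # weighted LS
        print(beta[1])                                # close to 0.05, the population effect

    Proper standard errors for such designs come from the survey methods the abstract cites, which this fragment does not attempt.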

  4. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
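
    A minimal version of the simulation approach for a two-arm cluster-randomized design might look as follows. The authors provide R and Stata code; this Python sketch with a cluster-level t test is an illustrative stand-in, with all parameter values invented.

        import numpy as np
        from scipy.stats import ttest_ind

        def simulated_power(n_clusters, m, icc, effect, alpha=0.05, reps=2000, seed=0):
            # Monte Carlo power for a two-arm cluster-randomized design:
            # simulate cluster means, analyze with a cluster-level t test.
            rng = np.random.default_rng(seed)
            half = n_clusters // 2
            sd_b, sd_w = np.sqrt(icc), np.sqrt(1 - icc)    # total variance fixed at 1
            hits = 0
            for _ in range(reps):
                control = rng.normal(0, sd_b, half) + rng.normal(0, sd_w / np.sqrt(m), half)
                treated = effect + rng.normal(0, sd_b, half) + rng.normal(0, sd_w / np.sqrt(m), half)
                hits += ttest_ind(treated, control).pvalue < alpha
            return hits / reps

        print(simulated_power(n_clusters=30, m=20, icc=0.05, effect=0.3))

    Extending the data-generating step (unequal cluster sizes, attrition, covariates) is exactly the flexibility the article describes.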

  5. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447

  6. How Important Are 'Entry Effects' in Financial Incentive Programs for Welfare Recipients? Experimental Evidence from the Self-Sufficiency Project. SRDC Working Papers.

    ERIC Educational Resources Information Center

    Card, David; Robins, Philip K.; Lin, Winston

    The Self-Sufficiency Project (SSP) entry effect experiment was designed to measure the effect of the future availability of an earnings supplement on the behavior of newly enrolled income assistance (IA) recipients. It used a classical randomized design. From a sample of 3,315 single parents who recently started a new period of IA, one-half were…

  7. The Effects of a Simulation Game on Learning of Geographic Information at the Fifth Grade Level. Final Report.

    ERIC Educational Resources Information Center

    Keach, Everett T., Jr.; Pierfy, David A.

    The research in this report was conducted to assess the cognitive impact of a simulation game designed to teach selected geographic data about wind and ocean currents to fifth graders. A two-group, post-test research design was used. A random procedure was used to assign 185 students to two treatment groups. The sample was divided by sex, ranked…

  8. School-Based Service-Learning for Promoting Citizenship in Young People: A Systematic Review

    DTIC Science & Technology

    2005-09-06

    A nonequivalent pre- and post-test design with a control group was utilized, but participants were not randomized to groups. The sample ... other methodology. She notes the limitations of the research chosen for the review (i.e., most studies lack a control group, do not track effects over ... experimental and control groups; pre- and post-test design; surveys; “service-learning”; intervention groups: service-learning.

  9. Image gathering and processing - Information and fidelity

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Halyo, N.; Samms, R. W.; Stacy, K.

    1985-01-01

    In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
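
    The optimal (Wiener) restoration at the heart of the formulation can be sketched in its classical frequency-domain form. The fragment below omits the paper's aliasing term, which requires the full sampling model, and uses an assumed Gaussian OTF and noise level purely for illustration.

        import numpy as np

        def wiener_restore(degraded, otf, snr):
            # Classical Wiener restoration: W = H* / (|H|^2 + 1/SNR),
            # applied in the spatial-frequency domain (aliasing term omitted).
            W = np.conj(otf) / (np.abs(otf) ** 2 + 1.0 / snr)
            return np.real(np.fft.ifft2(np.fft.fft2(degraded) * W))

        # Toy example: Gaussian blur plus white noise on a random scene.
        rng = np.random.default_rng(0)
        n = 128
        fx = np.fft.fftfreq(n)
        FX, FY = np.meshgrid(fx, fx)
        otf = np.exp(-(FX ** 2 + FY ** 2) / (2 * 0.15 ** 2))   # assumed Gaussian OTF
        scene = rng.normal(size=(n, n))
        blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))
        degraded = blurred + rng.normal(0, 0.05, (n, n))
        restored = wiener_restore(degraded, otf, snr=100.0)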

  10. Characteristics of men with substance use disorder consequent to illicit drug use: comparison of a random sample and volunteers.

    PubMed

    Reynolds, Maureen D; Tarter, Ralph E; Kirisci, Levent

    2004-09-06

    Men qualifying for substance use disorder (SUD) consequent to consumption of an illicit drug were compared according to recruitment method. It was hypothesized that volunteers would be more self-disclosing and exhibit more severe disturbances compared to randomly recruited subjects. Personal, demographic, family, social, substance use, psychiatric, and SUD characteristics of volunteers (N = 146) were compared to randomly recruited (N = 102) subjects. Volunteers had lower socioeconomic status, were more likely to be African American, and had lower IQ than randomly recruited subjects. Volunteers also evidenced greater social and family maladjustment and more frequently had received treatment for substance abuse. In addition, lower social desirability response bias was observed in the volunteers. SUD was not more severe in the volunteers; however, they reported a higher lifetime rate of opiate, diet, depressant, and analgesic drug use. Volunteers and randomly recruited subjects qualifying for SUD consequent to illicit drug use are similar in SUD severity but differ in terms of severity of psychosocial disturbance and history of drug involvement. The factors discriminating volunteers and randomly recruited subjects are well known to impact on outcome, hence they need to be considered in research design, especially when selecting a sampling strategy in treatment research.

  11. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new procedure of stratified sampling is proposed in order to establish an accurate estimate of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
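
    The estimator implied by the grid-with-circles scheme can be sketched directly: count mites inside a circle centred on every node of a regular grid, then scale by the sampled area fraction. Board dimensions, grid size, and circle radius below are illustrative, and the published procedure's error analysis is omitted.

        import numpy as np

        def grid_circle_estimate(xy, board_w, board_h, nx, ny, radius):
            # Stratified scheme: one counting circle centred on each node of a
            # regular nx-by-ny grid; scale the total count by the area fraction.
            cx = (np.arange(nx) + 0.5) * board_w / nx
            cy = (np.arange(ny) + 0.5) * board_h / ny
            counted = sum(np.sum((xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2 <= radius ** 2)
                          for x0 in cx for y0 in cy)
            sampled_area = nx * ny * np.pi * radius ** 2
            return counted * (board_w * board_h) / sampled_area

        rng = np.random.default_rng(3)
        mites = rng.random((1200, 2)) * [50.0, 40.0]      # toy board, 50 cm x 40 cm
        print(grid_circle_estimate(mites, 50.0, 40.0, nx=8, ny=6, radius=1.5))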

  12. Exact tests using two correlated binomial variables in contemporary cancer clinical trials.

    PubMed

    Yu, Jihnhee; Kepner, James L; Iyer, Renuka

    2009-12-01

    New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.

  13. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components in urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
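
    A minimal sketch of the cost-benefit trade-off: a binomial sample-size formula for a target confidence-interval half-width on overall map accuracy, paired with a linear cost model. All unit costs below are placeholders, not values from the study.

        import math

        def srs_sample_size(expected_accuracy, half_width, confidence=0.95):
            # Binomial sample size for estimating overall accuracy to within
            # +/- half_width at the given confidence level.
            z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
            p = expected_accuracy
            return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

        def total_cost(n, travel_per_site=12.0, field_per_site=8.0,
                       lab_per_site=5.0, fixed=500.0):
            # Transportation + field collection + laboratory analysis, plus overhead.
            return fixed + n * (travel_per_site + field_per_site + lab_per_site)

        n = srs_sample_size(0.85, 0.05)        # about 196 sites
        print(n, total_cost(n))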

  14. Who Recommends Long-Term Care Matters

    ERIC Educational Resources Information Center

    Kane, Robert L.; Bershadsky, Boris; Bershadsky, Julie

    2006-01-01

    Purpose: Making good consumer decisions requires having good information. This study compared long-term-care recommendations among various types of health professionals. Design and Methods: We gave randomly varied scenarios to a convenience national sample of 211 professionals from varying disciplines and work locations. For each scenario, we…

  15. Determinants of Effective Communication among Undergraduate Students

    ERIC Educational Resources Information Center

    Anvari, Roya; Atiyaye, Dauda Mohammed

    2014-01-01

    This study aims to investigate the relationship between effective communication and transferring information. In the present correlational study, a cross-sectional research design was employed, and data were collected using a questionnaire-based survey. 46 students were chosen based on random sampling and questionnaires were distributed among…

  16. [Kriging estimation and its simulated sampling of Chilo suppressalis population density].

    PubMed

    Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue

    2004-07-01

    In order to draw up a rational sampling plan for the larval population of Chilo suppressalis, an original population and its two derivative populations, a random population and a sequence population, were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling method that integrates Kriging interpolation with a random origin position. For the original population, whose distribution was aggregative with a dependence range of 115 cm (6.9 units) in the line direction, gap-range-random sampling in the line direction was more precise than random sampling. Distinguishing the population pattern correctly is the key to obtaining better precision. Gap-range-random sampling and random sampling are suited to aggregated populations and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sampling scheme, referred to as the Kriging sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the estimation precision of the Kriging sample for both the partial sample and the population was better than that of the investigation sample. As the aggregation intensity of the population increased, the Kriging sample was more effective than the investigation sample in both partial and overall estimation, given an appropriate sampling gap chosen according to the dependence range.

  17. The effect of atomoxetine on random and directed exploration in humans.

    PubMed

    Warren, Christopher M; Wilson, Robert C; van der Wee, Nic J; Giltay, Eric J; van Noorden, Martijn S; Cohen, Jonathan D; Nieuwenhuis, Sander

    2017-01-01

    The adaptive regulation of the trade-off between pursuing a known reward (exploitation) and sampling lesser-known options in search of something better (exploration) is critical for optimal performance. Theory and recent empirical work suggest that humans use at least two strategies for solving this dilemma: a directed strategy in which choices are explicitly biased toward information seeking, and a random strategy in which decision noise leads to exploration by chance. Here we examined the hypothesis that random exploration is governed by the neuromodulatory locus coeruleus-norepinephrine system. We administered atomoxetine, a norepinephrine transporter blocker that increases extracellular levels of norepinephrine throughout the cortex, to 22 healthy human participants in a double-blind crossover design. We examined the effect of treatment on performance in a gambling task designed to produce distinct measures of directed exploration and random exploration. In line with our hypothesis we found an effect of atomoxetine on random, but not directed exploration. However, contrary to expectation, atomoxetine reduced rather than increased random exploration. We offer three potential explanations of our findings, involving the non-linear relationship between tonic NE and cognitive performance, the interaction of atomoxetine with other neuromodulators, and the possibility that atomoxetine affected phasic norepinephrine activity more so than tonic norepinephrine activity.

  18. Best (but oft-forgotten) practices: the design, analysis, and interpretation of Mendelian randomization studies

    PubMed Central

    Bowden, Jack; Relton, Caroline; Davey Smith, George

    2016-01-01

    Mendelian randomization (MR) is an increasingly important tool for appraising causality in observational epidemiology. The technique exploits the principle that genotypes are not generally susceptible to reverse causation bias and confounding, reflecting their fixed nature and Mendel’s first and second laws of inheritance. The approach is, however, subject to important limitations and assumptions that, if unaddressed or compounded by poor study design, can lead to erroneous conclusions. Nevertheless, the advent of 2-sample approaches (in which exposure and outcome are measured in separate samples) and the increasing availability of open-access data from large consortia of genome-wide association studies and population biobanks mean that the approach is likely to become routine practice in evidence synthesis and causal inference research. In this article we provide an overview of the design, analysis, and interpretation of MR studies, with a special emphasis on assumptions and limitations. We also consider different analytic strategies for strengthening causal inference. Although impossible to prove causality with any single approach, MR is a highly cost-effective strategy for prioritizing intervention targets for disease prevention and for strengthening the evidence base for public health policy. PMID:26961927

  19. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.

  20. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common task in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
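
    The weighting idea can be sketched compactly: each observation's log-likelihood contribution is multiplied by the inverse of its selection probability (a Horvitz-Thompson-style weight) before maximizing. The sketch below fits a normal distribution to hypothetical log-concentrations; the data, probabilities, and distribution choice are illustrative assumptions, not the authors' exact framework.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        x = rng.normal(1.0, 0.5, 200)       # hypothetical log-concentrations
        pi = rng.uniform(0.1, 1.0, 200)     # unequal inclusion probabilities
        w = 1.0 / pi                        # design weights

        def neg_weighted_loglik(theta):
            mu, log_sigma = theta
            sigma = np.exp(log_sigma)       # keep sigma positive
            return -np.sum(w * norm.logpdf(x, mu, sigma))

        res = minimize(neg_weighted_loglik, x0=[0.0, 0.0])
        print("weighted MLE:", res.x[0], np.exp(res.x[1]))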

  1. Landsat image and sample design for water reservoirs (Rapel dam Central Chile).

    PubMed

    Lavanderos, L; Pozo, M E; Pattillo, C; Miranda, H

    1990-01-01

    Spatial heterogeneity of the Rapel reservoir surface waters is analyzed through Landsat images. The image digital counts are used with the aim of developing an a priori quantitative sample design. Natural horizontal stratification of the Rapel Reservoir (Central Chile) is produced mainly by suspended solids. The spatial heterogeneity conditions of the reservoir for the Spring 86-Summer 87 period were determined by qualitative analysis and image processing of the MSS Landsat, bands 1 and 3. The space-time variations of the different observed strata were obtained with multitemporal image analysis. A random stratified sample design (r.s.s.d.) was developed, based on the statistical analysis of the digital counts. Strata population size, as well as the average, variance and sampling size of the digital counts, were obtained by the r.s.s.d. method. The stratification determined by analysis of satellite images was later correlated with ground data. Though the stratification of the reservoir is constant over time, the shape and size of the strata vary.
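
    The abstract does not spell out its allocation rule, but a standard way to turn per-stratum digital-count statistics (sizes and variances) into sampling sizes is Neyman allocation, where n_h is proportional to N_h * S_h. A sketch with made-up stratum summaries:

        import numpy as np

        N_h = np.array([12000, 8000, 5000])  # stratum sizes (pixel counts), assumed
        S_h = np.array([4.2, 7.5, 10.1])     # stratum standard deviations, assumed
        n = 300                              # total sample size to allocate

        # Neyman (optimal) allocation for stratified random sampling.
        n_h = np.round(n * N_h * S_h / np.sum(N_h * S_h)).astype(int)
        print(n_h)                           # samples per stratum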

  2. Uncertainty in monitoring E. coli concentrations in streams and stormwater runoff

    NASA Astrophysics Data System (ADS)

    Harmel, R. D.; Hathaway, J. M.; Wagner, K. L.; Wolfe, J. E.; Karthikeyan, R.; Francesconi, W.; McCarthy, D. T.

    2016-03-01

    Microbial contamination of surface waters, a substantial public health concern throughout the world, is typically identified by fecal indicator bacteria such as Escherichia coli. Thus, monitoring E. coli concentrations is critical to evaluate current conditions, determine restoration effectiveness, and inform model development and calibration. An often overlooked component of these monitoring and modeling activities is understanding the inherent random and systematic uncertainty present in measured data. In this research, a review and subsequent analysis was performed to identify, document, and analyze measurement uncertainty of E. coli data collected in stream flow and stormwater runoff as individual discrete samples or throughout a single runoff event. Data on the uncertainty contributed by sample collection, sample preservation/storage, and laboratory analysis in measured E. coli concentrations were compiled and analyzed, and differences in sampling method and data quality scenarios were compared. The analysis showed that: (1) manual integrated sampling produced the lowest random and systematic uncertainty in individual samples, but automated sampling typically produced the lowest uncertainty when sampling throughout runoff events; (2) sample collection procedures often contributed the highest amount of uncertainty, although laboratory analysis introduced substantial random uncertainty and preservation/storage introduced substantial systematic uncertainty under some scenarios; and (3) the uncertainty in measured E. coli concentrations was greater than that of sediment and nutrients, but the difference was not as great as may be assumed. This comprehensive analysis of uncertainty in E. coli concentrations measured in streamflow and runoff should provide valuable insight for designing E. coli monitoring projects, reducing uncertainty in quality assurance efforts, regulatory and policy decision making, and fate and transport modeling.

  3. Intraherd correlation coefficients and design effects for bovine viral diarrhoea, infectious bovine rhinotracheitis, leptospirosis and neosporosis in cow-calf system herds in North-eastern Mexico.

    PubMed

    Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J

    2010-09-01

    Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases could be of interest in sample size calculation and to provide the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and the D values greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates as under a simple random sampling design. The report of ICC and D values is of great help in planning and designing two-stage sampling studies. 2010 Elsevier B.V. All rights reserved.
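
    The design effect used in such calculations follows from D = 1 + (m - 1) * ICC, with m the mean cluster size. The sketch below roughly reproduces the BVD figure from the herd and animal counts in the abstract; the SRS baseline of 384 is an illustrative assumption.

        def design_effect(icc, mean_cluster_size):
            """Design effect for cluster sampling: D = 1 + (m - 1) * ICC."""
            return 1.0 + (mean_cluster_size - 1.0) * icc

        m = 813 / 48                        # ~16.9 animals per herd
        d_bvd = design_effect(0.31, m)      # ~5.9, close to the reported 5.91
        n_srs = 384                         # e.g., SRS size for p=0.5, +/-5%, 95% CI
        print(d_bvd, round(n_srs * d_bvd))  # inflated sample size under clustering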

  4. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    PubMed Central

    2006-01-01

    Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
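
    The closing recommendation translates into a two-line calculation: compute the simple-random-sampling size for the desired precision and double it (an assumed design effect of 2 for respondent-driven sampling). The prevalence and margin below are hypothetical.

        import math
        from scipy.stats import norm

        def srs_sample_size(p, margin, conf=0.95):
            """Sample size to estimate a proportion under simple random sampling."""
            z = norm.ppf(1 - (1 - conf) / 2)
            return math.ceil(z**2 * p * (1 - p) / margin**2)

        n_srs = srs_sample_size(p=0.20, margin=0.05)  # 246
        n_rds = 2 * n_srs                             # 492, per the paper's advice
        print(n_srs, n_rds)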

  5. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    PubMed

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for a representative and efficient sampling design were evaluated through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we evaluated the distribution estimates by repeating stratified random sampling simulations 1,000 times. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  6. Designing group sequential randomized clinical trials with time to event end points using a R function.

    PubMed

    Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew

    2012-10-01

    The design of clinical trials with a time-to-event endpoint is a major and difficult task: it is necessary first to compute the required number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions are implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on to the package gsDesign, which in one run calculates the number of events and required sample size, as well as boundaries and corresponding p-values, for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
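
    The two steps described (events first, then patients) typically rest on the Schoenfeld approximation for the log-rank test. The sketch below is not plansurvct.func; it only shows, under assumed inputs, the fixed-sample calculation that group sequential survival designs build on.

        import math
        from scipy.stats import norm

        def schoenfeld_events(hr, alpha=0.05, power=0.80):
            """Events for a two-sided log-rank test, 1:1 allocation (Schoenfeld)."""
            z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return math.ceil(4 * (z_a + z_b) ** 2 / math.log(hr) ** 2)

        d = schoenfeld_events(hr=0.70)  # ~247 events
        n = math.ceil(d / 0.60)         # patients, if ~60% are expected to event
        print(d, n)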

  7. Learning Bayesian Networks from Correlated Data

    NASA Astrophysics Data System (ADS)

    Bae, Harold; Monti, Stefano; Montano, Monty; Steinberg, Martin H.; Perls, Thomas T.; Sebastiani, Paola

    2016-05-01

    Bayesian networks are probabilistic models that represent complex distributions in a modular way and have become very popular in many fields. There are many methods to build Bayesian networks from a random sample of independent and identically distributed observations. However, many observational studies are designed using some form of clustered sampling that introduces correlations between observations within the same cluster and ignoring this correlation typically inflates the rate of false positive associations. We describe a novel parameterization of Bayesian networks that uses random effects to model the correlation within sample units and can be used for structure and parameter learning from correlated data without inflating the Type I error rate. We compare different learning metrics using simulations and illustrate the method in two real examples: an analysis of genetic and non-genetic factors associated with human longevity from a family-based study, and an example of risk factors for complications of sickle cell anemia from a longitudinal study with repeated measures.

  8. What is covered by "cancer rehabilitation" in PubMed? A review of randomized controlled trials 1990-2011.

    PubMed

    Gudbergsson, Sævar Berg; Dahl, Alv A; Loge, Jon Håvard; Thorsen, Lene; Oldervoll, Line M; Grov, Ellen K

    2015-02-01

    This focused review examines randomized controlled studies included by the term "cancer rehabilitation" in PubMed. The research questions concern the type of interventions performed and their methodological quality. Using the Medical Subject Headings (MeSH) terms: neoplasm AND rehabilitation, all articles with randomized controlled studies that included adult cancer patients, written in English, were extracted from PubMed. Papers covering physical exercise, psychiatric/psychological treatment or social support only were excluded as they had been reviewed recently. Abstracts and papers were assessed by 3 pairs of reviewers, and descriptive information was extracted systematically. Methodological quality was rated on a 10-item index scale, and the cut-off for acceptable quality was set at ≥ 8. A total of 132 (19%) of the 683 identified papers met the eligibility criteria and were assessed in detail. The papers were grouped into 5 thematic categories: 44 physical; 15 art and expressive; 47 psycho-educative; 21 emotionally supportive; and 5 others. Good quality of design was observed in 32 studies, 18 of them uni-dimensional and 14 multi-dimensional. Published randomized controlled studies on cancer rehabilitation are heterogeneous in terms of content and samples, and are mostly characterized by suboptimal design quality. Future studies should be more specific and well-designed with sufficient statistical strength.

  9. Science Laboratory Environment and Academic Performance

    ERIC Educational Resources Information Center

    Aladejana, Francisca; Aderibigbe, Oluyemisi

    2007-01-01

    The study determined how students assess the various components of their science laboratory environment. It also identified how the laboratory environment affects students' learning outcomes. The modified ex-post facto design was used. A sample of 328 randomly selected students was taken from a population of all Senior Secondary School chemistry…

  10. 48 CFR 13.303-6 - Review procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Review procedures. (a) The contracting officer placing orders under a BPA, or the designated representative of the contracting officer, shall review a sufficient random sample of the BPA files at least... into the BPA shall— (1) Ensure that each BPA is reviewed at least annually and, if necessary, updated...

  11. 48 CFR 13.303-6 - Review procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Review procedures. (a) The contracting officer placing orders under a BPA, or the designated representative of the contracting officer, shall review a sufficient random sample of the BPA files at least... into the BPA shall— (1) Ensure that each BPA is reviewed at least annually and, if necessary, updated...

  12. 48 CFR 13.303-6 - Review procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Review procedures. (a) The contracting officer placing orders under a BPA, or the designated representative of the contracting officer, shall review a sufficient random sample of the BPA files at least... into the BPA shall— (1) Ensure that each BPA is reviewed at least annually and, if necessary, updated...

  13. 48 CFR 13.303-6 - Review procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Review procedures. (a) The contracting officer placing orders under a BPA, or the designated representative of the contracting officer, shall review a sufficient random sample of the BPA files at least... into the BPA shall— (1) Ensure that each BPA is reviewed at least annually and, if necessary, updated...

  14. 48 CFR 13.303-6 - Review procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Review procedures. (a) The contracting officer placing orders under a BPA, or the designated representative of the contracting officer, shall review a sufficient random sample of the BPA files at least... into the BPA shall— (1) Ensure that each BPA is reviewed at least annually and, if necessary, updated...

  15. Psycholinguistic Behaviors of Black, Disadvantaged Rural Children.

    ERIC Educational Resources Information Center

    Cowles, Milly; Daniel, Kathryn Barchard

    This study was designed to compare the psycholinguistic abilities of a randomly selected sample of 32 kindergarten children and 32 first grade children (with no kindergarten experience) and to analyze any discrepancy existing between psycholinguistic age (PLA) and chronological age (CA). Each of the kindergarten classrooms from which children were…

  16. Perfectionism Moderates Stereotype Threat Effects on STEM Majors' Academic Performance

    ERIC Educational Resources Information Center

    Rice, Kenneth G.; Lopez, Frederick G.; Richardson, Clarissa M. E.; Stinson, Jennifer M.

    2013-01-01

    Using a randomized, between-subjects experimental design, we tested hypotheses that self-critical perfectionism would moderate the effects of subtle stereotype threat (ST) for women and students in underrepresented racial/ethnic groups who are pursuing traditional degrees in science, technology, engineering, or math (STEM). A diverse sample of…

  17. APPLICATION OF A MULTIPURPOSE UNEQUAL-PROBABILITY STREAM SURVEY IN THE MID-ATLANTIC COASTAL PLAIN

    EPA Science Inventory

    A stratified random sample with unequal-probability selection was used to design a multipurpose survey of headwater streams in the Mid-Atlantic Coastal Plain. Objectives for data from the survey include unbiased estimates of regional stream conditions, and adequate coverage of un...

  18. Relationship of Study Habits with Mathematics Achievement

    ERIC Educational Resources Information Center

    Odiri, Onoshakpokaiye E.

    2015-01-01

    The study examined the relationship of study habits of students and their achievement in mathematics. The method used for the study was correlation design. A sample of 500 students were randomly selected from 25 public secondary schools in Delta Central Senatorial District, Delta State, Nigeria. Questionnaires were drawn to gather data on…

  19. EXTENT, PROPERTIES, AND LANDSCAPE SETTING OF GEOGRAPHICALLY ISOLATED WETLANDS IN URBAN SOUTHERN NEW ENGLAND WATERSHEDS

    EPA Science Inventory

    We assessed the extent and characteristics of geographically isolated wetlands (i.e., wetlands completely surrounded by upland) in a series of drainage basins in the urban northeast U.S. We employed a random sampling design that stratifies study sites according to their degree o...

  20. Assessing Principals' Quality Assurance Strategies in Osun State Secondary Schools, Nigeria

    ERIC Educational Resources Information Center

    Fasasi, Yunus Adebunmi; Oyeniran, Saheed

    2014-01-01

    This paper examined principals' quality assurance strategies in secondary schools in Osun State, Nigeria. The study adopted a descriptive survey research design. Stratified random sampling technique was used to select 10 male and 10 female principals, and 190 male and190 female teachers. "Secondary School Principal Quality Assurance…

  1. Implications of Birth Order for Motivational and Achievement-Related Characteristics of Adults Enrolled in Non-Traditional Instruction

    ERIC Educational Resources Information Center

    Farley, Frank; And Others

    1974-01-01

    The present study was designed to investigate the relationship of birth order to achievement motivation and achievement-related variables employing a random sample of students enrolled in the courses offered through the United States Armed Forces Institute (USAFI) in 1970. (Author)

  2. Quantifying Urban Watershed Stressor Gradients and Evaluating How Different Land Cover Datasets Affect Stream Management

    EPA Science Inventory

    We used a gradient (divided into impervious cover categories), spatially-balanced, random design (1) to sample streams along an impervious cover gradient in a large coastal watershed, (2) to characterize relationships between water chemistry and land cover, and (3) to document di...

  3. Southern Schools: An Evaluation of the Effects of the Emergency School Assistance Program and of School Desegregation. Volume I.

    ERIC Educational Resources Information Center

    Crain, Robert L., Ed.

    This evaluation sampled 150 pairs of schools (50 pairs of high schools and 100 pairs of elementary schools) eligible for ESAP funds, using a coin flip to randomly designate one school from each pair as a control school receiving no ESAP funds. The first volume of the report comprises four chapters and seven appendices.…

  4. Data on Enacted Curriculum Study: Summary of Findings Experimental Design Study of Effectiveness of DEC Professional Development Model in Urban Middle Schools

    ERIC Educational Resources Information Center

    Blank, Rolf K.

    2004-01-01

    The purpose of the three-year CCSSO study was to design, implement, and test the effectiveness of the Data on Enacted Curriculum (DEC) model for improving math and science instruction. The model was tested by measuring its effects with a randomly selected sample of "treatment" schools at the middle grades level as compared to a control group of…

  5. Two-sample binary phase 2 trials with low type I error and low sample size.

    PubMed

    Litwin, Samuel; Basickes, Stanley; Ross, Eric A

    2017-04-30

    We address design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E − C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
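
    The operating characteristics of a combined rule of this form can be computed exactly from two binomial distributions. The sketch below evaluates P(E ≥ m and E − C > r) for a single look; the sample sizes and thresholds are illustrative, not the paper's optimized two-stage designs.

        import numpy as np
        from scipy.stats import binom

        def reject_prob(p_e, p_c, n_e, n_c, m, r):
            """P(E >= m and E - C > r), E ~ Bin(n_e, p_e), C ~ Bin(n_c, p_c)."""
            e = np.arange(n_e + 1)
            c = np.arange(n_c + 1)
            rej = ((e[:, None] >= m) & (e[:, None] - c[None, :] > r)).astype(float)
            return float(binom.pmf(e, n_e, p_e) @ rej @ binom.pmf(c, n_c, p_c))

        # 2:1 randomization as in the paper; all numbers below are assumptions.
        alpha = reject_prob(0.20, 0.20, n_e=40, n_c=20, m=12, r=4)  # type I error
        power = reject_prob(0.40, 0.20, n_e=40, n_c=20, m=12, r=4)
        print(f"alpha={alpha:.3f}, power={power:.3f}")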

  6. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free-design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free-design variables. A comparison with other airfoil optimization methods is also included.

  7. The Method of Randomization for Cluster-Randomized Trials: Challenges of Including Patients with Multiple Chronic Conditions

    PubMed Central

    Esserman, Denise; Allore, Heather G.; Travison, Thomas G.

    2016-01-01

    Cluster-randomized clinical trials (CRT) are trials in which the unit of randomization is not a participant but a group (e.g. healthcare systems or community centers). They are suitable when the intervention applies naturally to the cluster (e.g. healthcare policy); when lack of independence among participants may occur (e.g. nursing home hygiene); or when it is most ethical to apply an intervention to all within a group (e.g. school-level immunization). Because participants in the same cluster receive the same intervention, CRT may approximate clinical practice, and may produce generalizable findings. However, when not properly designed or interpreted, CRT may induce biased results. CRT designs have features that add complexity to statistical estimation and inference. Chief among these is the cluster-level correlation in response measurements induced by the randomization. A critical consideration is the experimental unit of inference; often it is desirable to consider intervention effects at the level of the individual rather than the cluster. Finally, given that the number of clusters available may be limited, simple forms of randomization may not achieve balance between intervention and control arms at either the cluster- or participant-level. In non-clustered clinical trials, balance of key factors may be easier to achieve because the sample can be homogeneous by exclusion of participants with multiple chronic conditions (MCC). CRTs, which are often pragmatic, may eschew such restrictions. Failure to account for imbalance may induce bias and reduce validity. This article focuses on the complexities of randomization in the design of CRTs, such as the inclusion of patients with MCC, and imbalances in covariate factors across clusters. PMID:27478520

  8. A model-based 'varimax' sampling strategy for a heterogeneous population.

    PubMed

    Akram, Nuzhat A; Farooqi, Shakeel R

    2014-01-01

    Sampling strategies are planned to enhance the homogeneity of a sample, and hence minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them, five population groups based on paternal ethnicity, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi, were identified. An index was designed to measure the proportion of admixture at parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups for blood groups and STR loci distribution. Gene diversity was higher across the sub-population model than in the agglomerated population. At the parental level, gene diversities were significantly higher across no-admixture models than admixture models; at the grandparental level, the difference was not significant. A sub-population model with no admixture at parental level was justified for sampling the heterogeneous population of Karachi.

  9. Reflections on experimental research in medical education.

    PubMed

    Cook, David A; Beckman, Thomas J

    2010-08-01

    As medical education research advances, it is important that education researchers employ rigorous methods for conducting and reporting their investigations. In this article we discuss several important yet oft neglected issues in designing experimental research in education. First, randomization controls for only a subset of possible confounders. Second, the posttest-only design is inherently stronger than the pretest-posttest design, provided the study is randomized and the sample is sufficiently large. Third, demonstrating the superiority of an educational intervention in comparison to no intervention does little to advance the art and science of education. Fourth, comparisons involving multifactorial interventions are hopelessly confounded, have limited application to new settings, and do little to advance our understanding of education. Fifth, single-group pretest-posttest studies are susceptible to numerous validity threats. Finally, educational interventions (including the comparison group) must be described in detail sufficient to allow replication.

  10. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    PubMed Central

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-01-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging. PMID:27555464
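
    The sampling scheme can be sketched directly: each measurement picks a random center pixel and then samples nearby pixels with probability decaying with distance. Gaussian jitter around the center is an assumed falloff used here for brevity; the paper parameterizes the distance-dependent probability in its own way.

        import numpy as np

        rng = np.random.default_rng(0)

        def localized_sets(h, w, n_sets, per_set, scale):
            """Row/column indices for localized random sampling sets."""
            rows, cols = [], []
            for _ in range(n_sets):
                cy, cx = rng.integers(0, h), rng.integers(0, w)
                ys = np.clip(np.round(rng.normal(cy, scale, per_set)), 0, h - 1)
                xs = np.clip(np.round(rng.normal(cx, scale, per_set)), 0, w - 1)
                rows.append(ys.astype(int))
                cols.append(xs.astype(int))
            return rows, cols

        # Build a CS measurement matrix: each row averages one localized set.
        h = w = 32
        rows, cols = localized_sets(h, w, n_sets=64, per_set=16, scale=2.0)
        A = np.zeros((64, h * w))
        for i, (ys, xs) in enumerate(zip(rows, cols)):
            np.add.at(A[i], ys * w + xs, 1.0 / len(ys))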

  11. On the use of secondary capture-recapture samples to estimate temporary emigration and breeding proportions

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.; North, P.M.; Nichols, J.D.

    1995-01-01

    The use of the Cormack-Jolly-Seber model under a standard sampling scheme of one sample per time period, when the Jolly-Seber assumption that all emigration is permanent does not hold, leads to the confounding of temporary emigration probabilities with capture probabilities. This biases the estimates of capture probability when temporary emigration is a completely random process, and both capture and survival probabilities when there is a temporary trap response in temporary emigration, or it is Markovian. The use of secondary capture samples over a shorter interval within each period, during which the population is assumed to be closed (Pollock's robust design), provides a second source of information on capture probabilities. This solves the confounding problem, and thus temporary emigration probabilities can be estimated. This process can be accomplished in an ad hoc fashion for completely random temporary emigration and to some extent in the temporary trap response case, but modelling the complete sampling process provides more flexibility and permits direct estimation of variances. For the case of Markovian temporary emigration, a full likelihood is required.

  12. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Hierarchical model analysis of the Atlantic Flyway Breeding Waterfowl Survey

    USGS Publications Warehouse

    Sauer, John R.; Zimmerman, Guthrie S.; Klimstra, Jon D.; Link, William A.

    2014-01-01

    We used log-linear hierarchical models to analyze data from the Atlantic Flyway Breeding Waterfowl Survey. The survey has been conducted by state biologists each year since 1989 in the northeastern United States from Virginia north to New Hampshire and Vermont. Although yearly population estimates from the survey are used by the United States Fish and Wildlife Service for estimating regional waterfowl population status for mallards (Anas platyrhynchos), black ducks (Anas rubripes), wood ducks (Aix sponsa), and Canada geese (Branta canadensis), they are not routinely adjusted to control for time of day effects and other survey design issues. The hierarchical model analysis permits estimation of year effects and population change while accommodating the repeated sampling of plots and controlling for time of day effects in counting. We compared population estimates from the current stratified random sample analysis to population estimates from hierarchical models with alternative model structures that describe year to year changes as random year effects, a trend with random year effects, or year effects modeled as 1-year differences. Patterns of population change from the hierarchical model results generally were similar to the patterns described by stratified random sample estimates, but significant visibility differences occurred between twilight to midday counts in all species. Controlling for the effects of time of day resulted in larger population estimates for all species in the hierarchical model analysis relative to the stratified random sample analysis. The hierarchical models also provided a convenient means of estimating population trend as derived statistics from the analysis. We detected significant declines in mallard and American black ducks and significant increases in wood ducks and Canada geese, a trend that had not been significant for 3 of these 4 species in the prior analysis. We recommend using hierarchical models for analysis of the Atlantic Flyway Breeding Waterfowl Survey.

  14. A remote sensing and geographic information system approach to sampling malaria vector habitats in Chiapas, Mexico

    NASA Astrophysics Data System (ADS)

    Beck, L.; Wood, B.; Whitney, S.; Rossi, R.; Spanner, M.; Rodriguez, M.; Rodriguez-Ramirez, A.; Salute, J.; Legters, L.; Roberts, D.; Rejmankova, E.; Washino, R.

    1993-08-01

    This paper describes a procedure whereby remote sensing and geographic information system (GIS) technologies are used in a sample design to study the habitat of Anopheles albimanus, one of the principal vectors of malaria in Central America. This procedure incorporates Landsat-derived land cover maps with digital elevation and road network data to identify a random selection of larval habitats accessible for field sampling. At the conclusion of the sampling season, the larval counts will be used to determine habitat productivity, and then integrated with information on human settlement to assess where people are at high risk of malaria. This approach would be appropriate in areas where land cover information is lacking and problems of access constrain field sampling. The use of a GIS also permits other data (such as insecticide spraying data) to be incorporated in the sample design as they arise. This approach would also be pertinent for other tropical vector-borne diseases, particularly where human activities impact disease vector habitat.

  15. Sampling design for an integrated socioeconomic and ecological survey by using satellite remote sensing and ordination

    PubMed Central

    Binford, Michael W.; Lee, Tae Jeong; Townsend, Robert M.

    2004-01-01

    Environmental variability is an important risk factor in rural agricultural communities. Testing models requires empirical sampling that generates data that are representative in both economic and ecological domains. Detrended correspondence analysis of satellite remote sensing data was used to design an effective low-cost sampling protocol for a field study to create an integrated socioeconomic and ecological database when no prior information on the ecology of the survey area existed. We stratified the sample for the selection of tambons from various preselected provinces in Thailand based on factor analysis of spectral land-cover classes derived from satellite data. We conducted the survey for the sampled villages in the chosen tambons. The resulting data capture interesting variations in soil productivity and in the timing of good and bad years, which a purely random sample would likely have missed. Thus, this database will allow tests of hypotheses concerning the effect of credit on productivity, the sharing of idiosyncratic risks, and the economic influence of environmental variability. PMID:15254298

  16. A sero-survey of rinderpest in nomadic pastoral systems in central and southern Somalia from 2002 to 2003, using a spatially integrated random sampling approach.

    PubMed

    Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M

    2010-12-01

    A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
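
    Selecting primary sampling units by random map coordinates needs one geometric correction: drawing latitude through sin(lat) makes points uniform by area rather than over-dense at high latitudes. A sketch (the bounding box is only a rough illustration of the study area):

        import numpy as np

        rng = np.random.default_rng(42)

        def random_map_coordinates(n, lat_min, lat_max, lon_min, lon_max):
            """Points uniform by area within a lat/lon bounding box."""
            lon = rng.uniform(lon_min, lon_max, n)
            u = rng.uniform(np.sin(np.radians(lat_min)),
                            np.sin(np.radians(lat_max)), n)
            lat = np.degrees(np.arcsin(u))
            return lat, lon

        # 562 sampling sites, as in the survey; box coordinates are assumptions.
        lat, lon = random_map_coordinates(562, -1.5, 5.0, 41.0, 46.0)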

  17. Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.

    PubMed

    Hillis, Stephen L; Schartz, Kevin M

    2015-02-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with common reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use step-by-step intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.

  18. Design of the value of imaging in enhancing the wellness of your heart (VIEW) trial and the impact of uncertainty on power.

    PubMed

    Ambrosius, Walter T; Polonsky, Tamar S; Greenland, Philip; Goff, David C; Perdue, Letitia H; Fortmann, Stephen P; Margolis, Karen L; Pajewski, Nicholas M

    2012-04-01

    Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of nonfatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, nonfatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. We have proposed a sample size of 30,000 (800 events), which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events), and 27,078 (722 events) provide 80%, 85%, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8-89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9-78.0), 82.5% (82.5-82.6), and 87.2% (87.2-87.3), respectively. These power estimates are dependent on form and parameters of the prior distributions. Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power.
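
    Prior-averaged power of the kind described can be approximated by Monte Carlo: draw the uncertain design parameters from their prior, compute power for each draw, and average. The sketch below does this for a two-proportion comparison; the event rate, relative-risk prior, and test are illustrative assumptions, not the VIEW calculations.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)

        def power_two_prop(p1, p2, n_per_arm, alpha=0.05):
            """Normal-approximation power, two-sided two-proportion z-test."""
            z_a = norm.ppf(1 - alpha / 2)
            se = np.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
            return norm.cdf(np.abs(p1 - p2) / se - z_a)

        n = 15000                              # per arm (30,000 total)
        p_ctrl = 0.030                         # assumed control event rate
        rr = rng.normal(0.80, 0.05, 20000)     # prior on the relative risk
        print(power_two_prop(p_ctrl, 0.80 * p_ctrl, n))       # fixed-assumption
        print(power_two_prop(p_ctrl, rr * p_ctrl, n).mean())  # prior-averaged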

  19. Design of the Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) Trial and the Impact of Uncertainty on Power

    PubMed Central

    Ambrosius, Walter T.; Polonsky, Tamar S.; Greenland, Philip; Goff, David C.; Perdue, Letitia H.; Fortmann, Stephen P.; Margolis, Karen L.; Pajewski, Nicholas M.

    2014-01-01

    Background Although observational evidence has suggested that the measurement of coronary artery calcium (CAC) may improve risk stratification for cardiovascular events and thus help guide the use of lipid-lowering therapy, this contention has not been evaluated within the context of a randomized trial. The Value of Imaging in Enhancing the Wellness of Your Heart (VIEW) trial is proposed as a randomized study in participants at low intermediate risk of future coronary heart disease (CHD) events to evaluate whether CAC testing leads to improved patient outcomes. Purpose To describe the challenges encountered in designing a prototypical screening trial and to examine the impact of uncertainty on power. Methods The VIEW trial was designed as an effectiveness clinical trial to examine the benefit of CAC testing to guide therapy on a primary outcome consisting of a composite of non-fatal myocardial infarction, probable or definite angina with revascularization, resuscitated cardiac arrest, non-fatal stroke (not transient ischemic attack (TIA)), CHD death, stroke death, other atherosclerotic death, or other cardiovascular disease (CVD) death. Many critical choices were faced in designing the trial, including: (1) the choice of primary outcome, (2) the choice of therapy, (3) the target population with corresponding ethical issues, (4) specifications of assumptions for sample size calculations, and (5) impact of uncertainty in these assumptions on power/sample size determination. Results We have proposed a sample size of 30,000 (800 events) which provides 92.7% power. Alternatively, sample sizes of 20,228 (539 events), 23,138 (617 events) and 27,078 (722 events) provide 80, 85, and 90% power. We have also allowed for uncertainty in our assumptions by computing average power integrated over specified prior distributions. This relaxation of specificity indicates a reduction in power, dropping to 89.9% (95% confidence interval (CI): 89.8 to 89.9) for a sample size of 30,000. Sample sizes of 20,228, 23,138, and 27,078 provide power of 78.0% (77.9 to 78.0), 82.5% (82.5 to 82.6), and 87.2% (87.2 to 87.3), respectively. Limitations These power estimates are dependent on form and parameters of the prior distributions. Conclusions Despite the pressing need for a randomized trial to evaluate the utility of CAC testing, conduct of such a trial requires recruiting a large patient population, making efficiency of critical importance. The large sample size is primarily due to targeting a study population at relatively low risk of a CVD event. Our calculations also illustrate the importance of formally considering uncertainty in power calculations of large trials as standard power calculations may tend to overestimate power. PMID:22333998

  20. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
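
    The Fisher-information criterion behind such designs can be illustrated with a greedy D-optimal selection: fit a preliminary logistic model to the error-prone data, then repeatedly add the case that most increases det(X'WX), using the rank-one identity det(M + w xx') = det(M) (1 + w x'M^-1 x). This is a generic sketch of the idea, not the authors' DSCVR algorithm.

        import numpy as np

        def greedy_d_optimal(X, probs, n_select):
            """Greedily pick validation cases maximizing det(X' W X)."""
            n, d = X.shape
            w = probs * (1 - probs)              # logistic information weights
            M = 1e-6 * np.eye(d)                 # small ridge for invertibility
            chosen, avail = [], list(range(n))
            for _ in range(n_select):
                Minv = np.linalg.inv(M)
                # Maximizing w * x' Minv x maximizes the updated determinant.
                lev = w[avail] * np.einsum("ij,jk,ik->i", X[avail], Minv, X[avail])
                best = avail.pop(int(np.argmax(lev)))
                chosen.append(best)
                M += w[best] * np.outer(X[best], X[best])
            return chosen

        rng = np.random.default_rng(3)
        X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
        p0 = 1 / (1 + np.exp(-X @ np.array([-2.0, 1.0, 0.5])))  # preliminary fit
        print(greedy_d_optimal(X, p0, n_select=20)[:5])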

  1. Optimal sample sizes for the design of reliability studies: power consideration.

    PubMed

    Shieh, Gwowen

    2014-09-01

    Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
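
    The exact approach can be implemented directly from the distribution of the one-way ANOVA F ratio: in a balanced design with n groups of size k, MSB/MSW is distributed as (1 + k*rho/(1 - rho)) times a central F(n-1, n(k-1)). The sketch below finds the smallest number of groups reaching 80% power; the rho values and group size are illustrative.

        from scipy.stats import f as f_dist

        def icc_test_power(n_groups, k, rho0, rho1, alpha=0.05):
            """Exact power of the F-test of H0: rho = rho0 vs rho = rho1 > rho0."""
            df1, df2 = n_groups - 1, n_groups * (k - 1)
            lam0 = 1 + k * rho0 / (1 - rho0)
            lam1 = 1 + k * rho1 / (1 - rho1)
            crit = f_dist.ppf(1 - alpha, df1, df2)
            return f_dist.sf(crit * lam0 / lam1, df1, df2)

        n = 2
        while icc_test_power(n, k=4, rho0=0.3, rho1=0.5) < 0.80:
            n += 1
        print(n, "groups of size 4 needed")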

  2. Performance of some biotic indices in the real variable world: a case study at different spatial scales in North-Western Mediterranean Sea.

    PubMed

    Tataranni, Mariella; Lardicci, Claudio

    2010-01-01

    The aim of this study was to analyse the variability of four different benthic biotic indices (AMBI, BENTIX, H', M-AMBI) in two marine coastal areas of the North-Western Mediterranean Sea. In each coastal area, 36 replicates were randomly selected according to a hierarchical sampling design, which allowed estimation of the variance components of the indices associated with four different spatial scales (ranging from metres to kilometres). All the analyses were performed at two different sampling periods in order to evaluate if the observed trends were consistent over time. The variance components of the four indices revealed complex trends and different patterns in the two sampling periods. These results highlighted that, independently of the index employed, a rigorous and appropriate sampling design taking into account different scales should always be used in order to avoid erroneous classifications and to develop effective monitoring programs.

  3. Design and simulation study of the immunization Data Quality Audit (DQA).

    PubMed

    Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen

    2007-08-01

    The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Because this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the standard error of the estimate is unknown. An approximate standard error has been proposed, and the first goal of this simulation study is to assess its accuracy. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA with both the original and the increased sample sizes.

  4. Being "SMART" About Adolescent Conduct Problems Prevention: Executing a SMART Pilot Study in a Juvenile Diversion Agency.

    PubMed

    August, Gerald J; Piehler, Timothy F; Bloomquist, Michael L

    2016-01-01

    The development of adaptive treatment strategies (ATS) represents the next step in innovating conduct problems prevention programs within a juvenile diversion context. Toward this goal, we present the theoretical rationale, associated methods, and anticipated challenges for a feasibility pilot study in preparation for implementing a full-scale SMART (i.e., sequential, multiple assignment, randomized trial) for conduct problems prevention. The role of a SMART design in constructing ATS is presented. The SMART feasibility pilot study includes a sample of 100 youth (13-17 years of age) identified by law enforcement as early stage offenders and referred for precourt juvenile diversion programming. Prior data on the sample population detail a high level of ethnic diversity and approximately equal representations of both genders. Within the SMART, youth and their families are first randomly assigned to one of two different brief-type evidence-based prevention programs, featuring parent-focused behavioral management or youth-focused strengths-building components. Youth who do not respond sufficiently to brief first-stage programming will be randomly assigned a second time to either extended parent- or youth-focused second-stage programming. Measures of proximal intervention response and measures of potential candidate tailoring variables for developing ATS within this sample are detailed. Results of the described pilot study will include information regarding feasibility and acceptability of the SMART design. This information will be used to refine a subsequent full-scale SMART. The use of a SMART to develop ATS for prevention will increase the efficiency and effectiveness of prevention programming for youth with developing conduct problems.

  5. Rationale, design, samples, and baseline sun protection in a randomized trial on a skin cancer prevention intervention in resort environments.

    PubMed

    Buller, David B; Andersen, Peter A; Walkosz, Barbara J; Scott, Michael D; Beck, Larry; Cutter, Gary R

    2016-01-01

    Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. A trial evaluated an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012-14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Initially, 41 hotel/resorts (11%) participated but 4 dropped out before posttest. Hotel/resorts were diverse (employees=30 to 900; latitude=24° 78' N to 50° 52' N; elevation=2ft. to 9726ft. above sea level), and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4347 vacationers were observed and 3531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35-60 years old, highly educated (college education=68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. The trial faced challenges recruiting resorts, but results showed that the large, multi-state sample of vacationers was at high risk for solar UV exposure. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A systematic examination of a random sampling strategy for source apportionment calculations.

    PubMed

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
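
    To make the random sampling (RS) idea concrete, the toy sketch below repeatedly draws source marker values from assumed distributions and re-solves the two-source mixing equations, so that the spread of the resulting fractions reflects source variability. All numerical values are invented for illustration and are not from the article; note that the nonlinearity of the solution is what shifts the mean/median fractions, as the abstract observes.

        import numpy as np

        rng = np.random.default_rng(42)

        # hypothetical marker values (mean, sd) per source; invented for illustration
        SOURCE_1 = (5.0, 0.5)
        SOURCE_2 = (12.0, 1.0)
        MIXTURE = 8.0          # observed marker value in the mixed sample

        def random_sampling_fractions(n_draws=100_000):
            """Monte Carlo over source profiles: solve f1*m1 + (1 - f1)*m2 = mix per draw."""
            m1 = rng.normal(*SOURCE_1, size=n_draws)
            m2 = rng.normal(*SOURCE_2, size=n_draws)
            f1 = (MIXTURE - m2) / (m1 - m2)
            return f1[(f1 >= 0) & (f1 <= 1)]   # keep physically meaningful solutions

        f1 = random_sampling_fractions()
        print(f"source 1 fraction: mean={f1.mean():.3f}, "
              f"median={np.median(f1):.3f}, sd={f1.std():.3f}")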

  7. Intimate Partner Violence in Older Women

    ERIC Educational Resources Information Center

    Bonomi, Amy E.; Anderson, Melissa L.; Reid, Robert J.; Carrell, David; Fishman, Paul A.; Rivara, Frederick P.; Thompson, Robert S.

    2007-01-01

    Purpose: We describe the prevalence, types, duration, frequency, and severity of intimate partner violence ("partner violence") in older women. Design and Methods: We randomly sampled a total of 370 English-speaking women (65 years of age and older) from a health care system to participate in a cross-sectional telephone interview. Using 5…

  8. Internet and Internet Use: Teacher Trainees' Perspective

    ERIC Educational Resources Information Center

    Akinoglu, Orhan

    2009-01-01

    The aim of this study is to present the development of the internet and issues of internet use. The study has a descriptive survey design and 185 randomly selected teacher trainees at Marmara University, Ataturk Education Faculty in the 2001-2002 academic year constitute the sample. Data were collected via a questionnaire prepared by the researcher…

  9. Postadoption and Guardianship: An Evaluation of the Adoption Preservation, Assessment, and Linkage Program

    ERIC Educational Resources Information Center

    Liao, Minli; Testa, Mark

    2016-01-01

    Objectives: This study evaluated the effects of the Adoption Preservation, Assessment, and Linkage (APAL) postpermanency program. Method: A quasi-experimental, posttest-only design was used to estimate the program's effects on youth discharged from foster care to adoption or legal guardianship. A random sample was surveyed (female = 44.7%; African…

  10. Assimilating and Following through with Nutritional Recommendations by Adolescents

    ERIC Educational Resources Information Center

    Pich, Jordi; Ballester, Lluis; Thomas, Monica; Canals, Ramon; Tur, Josep A.

    2011-01-01

    Objective: To investigate the relationship between knowledge about a healthy diet and the actual food consumption habits of adolescents. Design: A survey of several food-related aspects applied to a representative sample of adolescents. Setting: One thousand, six hundred and sixty three individuals aged 11 to 18 from 40 schools randomly selected…

  11. The Effects of Grouping Practices and Curricular Adjustments on Achievement

    ERIC Educational Resources Information Center

    Tieso, Carol

    2005-01-01

    The purpose of this study was to examine the effects of curricular (textbook, revised, and differentiated) and grouping (whole, between, and within-class) practices on intermediate students' achievement in mathematics. A pretest-posttest, quasi-experimental design using a stratified random sample of 31 teachers and their students (N = 645) was…

  12. Duration of Sleep and ADHD Tendency among Adolescents in China

    ERIC Educational Resources Information Center

    Lam, Lawrence T.; Yang, L.

    2008-01-01

    Objective: This study investigates the association between duration of sleep and ADHD tendency among adolescents. Method: This population-based health survey uses a two-stage random cluster sampling design. Participants ages 13 to 17 are recruited from the total population of adolescents attending high school in one city of China. Duration of…

  13. The Influence of Leadership Practices on Faculty Job Satisfaction in Baccalaureate Degree Nursing Program

    ERIC Educational Resources Information Center

    Afam, Clifford C.

    2012-01-01

    Using a correlational, cross-sectional study design with self-administered questionnaires, this study explored the extent to which leadership practices of deans and department heads influence faculty job satisfaction in baccalaureate degree nursing programs. Using a simple random sampling technique, the study survey was sent to 400 faculty…

  14. Improving EFL Learners' Pronunciation of English through Quiz-Demonstration-Practice-Revision (QDPR)

    ERIC Educational Resources Information Center

    Moedjito

    2018-01-01

    This study investigates the effectiveness of Quiz-Demonstration-Practice-Revision (QDPR) in improving EFL learners' pronunciation of English. To achieve the goal, the present researcher employed a one-group pretest-posttest design. The experimental group was selected using a random sampling technique with consideration of the inclusion criteria.…

  15. Testing the "Learning Journey" of MSW Students in a Rural Program

    ERIC Educational Resources Information Center

    Wall, Misty L.; Rainford, Will

    2013-01-01

    Using a quasi-experimental one-group, pretest-posttest design with non-random convenience sampling, the researchers assessed 61 advanced standing MSW students who matriculated at a rural intermountain Northwest school of social work. Changes in students' knowledge and attitudes toward lesbian, gay, and bisexual (LGB) people were measured using…

  16. Logging utilization in Idaho: Current and past trends

    Treesearch

    Eric A. Simmons; Todd A. Morgan; Erik C. Berg; Stanley J. Zarnoch; Steven W. Hayes; Mike T. Thompson

    2014-01-01

    A study of commercial timber-harvesting activities in Idaho was conducted during 2008 and 2011 to characterize current tree utilization, logging operations, and changes from previous Idaho logging utilization studies. A two-stage simple random sampling design was used to select sites and felled trees for measurement within active logging sites. Thirty-three logging...
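
    A two-stage simple random sample of the kind described (sites first, then felled trees within selected sites) can be sketched as follows; the frame size, trees per site, and within-site sample size are hypothetical, not the study's.

        import random

        random.seed(7)

        # hypothetical frame: 120 active logging sites, each with its felled trees
        sites = {f"site_{i}": [f"tree_{i}_{j}" for j in range(random.randint(40, 200))]
                 for i in range(120)}

        # stage 1: simple random sample of logging sites
        stage1 = random.sample(sorted(sites), k=33)

        # stage 2: simple random sample of felled trees within each selected site
        stage2 = {s: random.sample(sites[s], k=10) for s in stage1}

        n_trees = sum(len(v) for v in stage2.values())
        print(f"selected {len(stage1)} sites and {n_trees} trees for measurement")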

  17. Validity of a Checklist for the Design, Content, and Instructional Qualities of Children's Books

    ERIC Educational Resources Information Center

    Çer, Erkan; Sahin, Ertugrul

    2016-01-01

    The purpose of this study was to develop a checklist whose validity has been tested in assessing children's books. Participants consisted of university students who had taken a course in children's literature. They were selected through convenience sampling and randomly assigned into experimental and control groups. Participants in the…

  18. Fruit and Vegetable Intake among Urban Community Gardeners

    ERIC Educational Resources Information Center

    Alaimo, Katherine; Packnett, Elizabeth; Miles, Richard A.; Kruger, Daniel J.

    2008-01-01

    Objective: To determine the association between household participation in a community garden and fruit and vegetable consumption among urban adults. Design: Data were analyzed from a cross-sectional random phone survey conducted in 2003. A quota sampling strategy was used to ensure that all census tracts within the city were represented. Setting:…

  19. Knowledge, Attitudes and Behaviours Concerning Education for Sustainable Development: Two Exploratory Studies

    ERIC Educational Resources Information Center

    Michalos, Alex C.; Creech, Heather; McDonald, Christina; Kahlke, P. Maurine Hatch

    2011-01-01

    Celebrating the UN Decade of Education for Sustainable Development (2005-2014), this paper presents results of two exploratory surveys taken in the province of Manitoba, Canada in January to March 2008. A random sample of 506 adults completed a mailed out questionnaire designed to measure respondents' knowledge, attitudes and behaviours concerning…

  20. A Study of Teachers' Perception of Schools' Organizational Health in Osun State

    ERIC Educational Resources Information Center

    Omoyemiju, Michael Adeniyi; Adediwura, Alaba Adeyemi

    2011-01-01

    This study examined the teachers' perceptions of school organizational health (i.e. resource support, job satisfaction among staff, morale boosts, institutional integrity and initiating structure). Descriptive survey design was used for the study. The sample was composed of 330 secondary school teachers randomly selected from 283,826 secondary…

  1. The Social Standing of a Married Woman

    ERIC Educational Resources Information Center

    Nilson, Linda Burzotta

    1976-01-01

    The traditional manner of designating a married woman's social status by her husband's occupational attainment is questioned, and a framework is proposed for re-conceptualizing a married woman's social standing in terms of both her own and her husband's occupational attainments. Presents the results of a Milwaukee area survey of a random sample of…

  2. Social Strain, Self-Control, and Juvenile Gambling Pathology: Evidence From Chinese Adolescents

    ERIC Educational Resources Information Center

    Cheung, Nicole W. T.

    2016-01-01

    Despite recent concerns over youthful problem gambling, few gambling studies have looked into Asian adolescent populations. This study of a stratified, random sample of high school students in Hong Kong is designed to estimate the prevalence of gambling pathology among Chinese adolescents and to examine the relationships between social strain,…

  3. Communication Channels as Implementation Determinants of Performance Management Framework in Kenya

    ERIC Educational Resources Information Center

    Sang, Jane

    2016-01-01

    The purpose of this study was to assess communication channels as implementation determinants of performance management framework in Kenya at Moi Teaching and Referral Hospital (MTRH). The communication theory was used to inform the study. This study adopted an explanatory design. The study sampled 510 respondents through simple random and stratified…

  4. School Nurse Online Emergency Preparedness Training: An Analysis of Knowledge, Skills, and Confidence

    ERIC Educational Resources Information Center

    Elgie, Robert; Sapien, Robert; Fullerton, Lynne; Moore, Brian

    2010-01-01

    The objective of this study was to evaluate the effectiveness of a computer-assisted emergency preparedness course for school nurses. Participants from a convenience sample (52) of school nurses from New Mexico were randomly assigned to intervention or control groups in an experimental after-only posttest design. Intervention group participants…

  5. Predictors of Self-Regulated Learning in Malaysian Smart Schools

    ERIC Educational Resources Information Center

    Yen, Ng Lee; Bakar, Kamariah Abu; Roslan, Samsilah; Luan, Wong Su; Abd Rahman, Petri Zabariah Mega

    2005-01-01

    This study sought to uncover the predictors of self-regulated learning in Malaysian smart schools. The sample consisted of 409 students, from six randomly chosen smart schools. A quantitative correlational research design was employed and the data were collected through survey method. Six factors were examined in relation to the predictors of…

  6. TT : a program that implements predictor sort design and analysis

    Treesearch

    S. P. Verrill; D. W. Green; V. L. Herian

    1997-01-01

    In studies on wood strength, researchers sometimes replace experimental unit allocation via random sampling with allocation via sorts based on nondestructive measurements of strength predictors such as modulus of elasticity and specific gravity. This report documents TT, a computer program that implements recently published methods to increase the sensitivity of such...

  7. Motivation among Public Primary School Teachers in Mauritius

    ERIC Educational Resources Information Center

    Seebaluck, Ashley Keshwar; Seegum, Trisha Devi

    2013-01-01

    Purpose: The purpose of this study was to critically analyse the factors that affect the motivation of public primary school teachers and also to investigate if there is any relationship between teacher motivation and job satisfaction in Mauritius. Design/methodology/approach: Simple random sampling method was used to collect data from 250 primary…

  8. Perceptions of Online Credentials for School Principals

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Dikkers, Amy Garrett

    2011-01-01

    Purpose: The purpose of this study is to investigate the perceptions of human resource directors in the USA about online credentials earned by K-12 school principals and principal candidates. Design/methodology/approach: In this mixed methods study, a survey was sent to a random sample of 500 human resource directors in K-12 school districts…

  9. A Confirmatory Factor Analysis of the Professional Opinion Scale

    ERIC Educational Resources Information Center

    Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

    2007-01-01

    The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

  10. A Catalog of Tasks, Performance Objectives, Performance Guides, Tools, and Equipment, Homemaker Human Development.

    ERIC Educational Resources Information Center

    Watkins, Betty; And Others

    This catalog was designed to provide performance objectives and performance guides associated with current occupational information relating to the job of homemaker. The performance objectives and guides were developed following a literature review and a survey of random samples of homemakers in the five participating states (Alabama, Florida,…

  11. Biomarker Evaluation Does Not Confirm Efficacy of Computer-Tailored Nutrition Education

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Dagnelie, Pieter C.; Heymans, Martijn W.; Oenema, Anke; Brug, Johannes

    2011-01-01

    Objective: To evaluate the efficacy of computer-tailored nutrition education with objective outcome measures. Design: A 3-group randomized, controlled trial with posttests at 1 and 6 months post-intervention. Setting: Worksites and 2 neighborhoods in the urban area of Rotterdam. Participants: A convenience sample of healthy Dutch adults (n = 442).…

  12. Public-Private Partnership and Infrastructural Development in Nigerian Universities

    ERIC Educational Resources Information Center

    Oduwaiye, R. O.; Sofoluwe, A. O.; Bello, T. O.; Durosaro, I. A.

    2014-01-01

    This study investigated the degree to which Public-Private Partnership (PPP) services are related to infrastructural development in Nigerian Universities. The research design used was descriptive survey method. The population for the study encompassed all the 20 universities in South-west Nigeria. Stratified random sampling was used to select 12…

  13. Cigarette Smoking and Anti-Smoking Counseling Practices among Physicians in Wuhan, China

    ERIC Educational Resources Information Center

    Gong, Jie; Zhang, Zhifeng; Zhu, Zhaoyang; Wan, Jun; Yang, Niannian; Li, Fang; Sun, Huiling; Li, Weiping; Xia, Jiang; Zhou, Dunjin; Chen, Xinguang

    2012-01-01

    Purpose: The paper seeks to report data on cigarette smoking, anti-smoking practices, physicians' receipt of anti-smoking training, and the association between receipt of the training and anti-smoking practice among physicians in Wuhan, China. Design/methodology/approach: Participants were selected through the stratified random sampling method.…

  14. Effects of spatial allocation and parameter variability on lakewide estimates from surveys of Lake Superior, North America’s largest lake

    EPA Science Inventory

    Lake Superior was sampled in 2011 using a Generalized Random Tessellation Stratified design (n=54 sites) to characterize biological and chemical properties of this huge aquatic resource, with statistical confidence. The lake was divided into two strata (inshore <100m and offsh...

  15. Reasons for University Students' Violence in Jordan

    ERIC Educational Resources Information Center

    Alshoraty, Yazid Isa

    2015-01-01

    The aim of this study was to examine the reasons for students' violence at Jordanian Universities from the viewpoint of the Hashemite University students. The sample consisted of 521 male and female students, chosen randomly. To collect data, the researcher designed a three-domain questionnaire. The findings of the study revealed that the most…

  16. Preliminary Findings on Rural Homelessness in Ohio.

    ERIC Educational Resources Information Center

    First, Richard J.; And Others

    This report is designed to present preliminary findings from the first comprehensive study of rural homelessness in the United States. The study was conducted during the first 6 months of 1990, and data were collected from interviews with 921 homeless adults in 21 randomly selected rural counties in Ohio. The sample counties represent 26% of the…

  17. Balance Sheet for Catholic Elementary Schools: 2001 Income and Expenses.

    ERIC Educational Resources Information Center

    Kealey, Robert J.

    This financial report was designed to provide a basis for informed discussion regarding potential forms of federal and state assistance to students attending Catholic elementary schools, and to encourage improved local management. The information presented in this study is based upon a random sample of Catholic elementary schools across the United…

  18. Tier 3 Specialized Writing Instruction for Students with Dyslexia

    ERIC Educational Resources Information Center

    Berninger, Virginia W.; Winn, William D.; Stock, Patricia; Abbott, Robert D.; Eschen, Kate; Lin, Shin-Ju; Garcia, Noelia; Anderson-Youngstrom, Marci; Murphy, Heather; Lovitt, Dan; Trivedi, Pamala; Jones, Janine; Amtmann, Dagmar; Nagy, William

    2008-01-01

    Two instructional experiments used randomized, controlled designs to evaluate the effectiveness of writing instruction for students with carefully diagnosed dyslexia, which is both an oral reading and writing disorder, characterized by impaired "word" decoding, reading, and spelling. In Study 1 (4th to 6th grade sample and 7th to 9th grade…

  19. Exploring Student Understanding of Grades and Report Cards

    ERIC Educational Resources Information Center

    Gwidt, Kathleen M.

    2010-01-01

    This qualitative study was designed to identify how students from a single high school in the rural Midwest perceive grades and report cards. Stratified purposeful random sampling resulted in the inclusion of 14 students who provided journal entries and participated in one-on-one interviews for the purpose of exploring student understanding of…

  20. Behavioural Precursors and HIV Testing Behaviour among African American Women

    ERIC Educational Resources Information Center

    Uhrig, Jennifer D.; Davis, Kevin C.; Rupert, Doug; Fraze, Jami

    2012-01-01

    Objective: To examine whether there is an association between knowledge, attitudes and beliefs, reported intentions to get an HIV test, and reported HIV testing behaviour at a later date among a sample of African American women. Design: Secondary analysis of data collected from October 2007 through March 2008 for a randomized controlled experiment…

  1. Internal Challenges Affecting Academic Performance of Student-Athletes in Ghanaian Public Universities

    ERIC Educational Resources Information Center

    Apaak, Daniel; Sarpong, Emmanuel Osei

    2015-01-01

    This paper examined internal challenges affecting academic performance of student-athletes in Ghanaian public universities, using a descriptive survey research design. A proportionate random sampling technique was employed to select three hundred and thirty-two (332) respondents for the study. The instrument used in gathering data for the study was…

  2. Primary Teacher Trainees Preparedness to Teach Science: A Gender Perspective

    ERIC Educational Resources Information Center

    Mutisya, Sammy M.

    2015-01-01

    The purpose of this study was to determine Primary Teacher Education (PTE) trainees' perceptions regarding their preparedness to teach science in primary schools. A descriptive survey research design was used, and stratified proportionate random sampling techniques were used to select 177 males and 172 females. The study found that more male trainee…

  3. Measuring Perceived Barriers to Healthful Eating in Obese, Treatment-Seeking Adults

    ERIC Educational Resources Information Center

    Welsh, Ericka M.; Jeffery, Robert W.; Levy, Rona L.; Langer, Shelby L.; Flood, Andrew P.; Jaeb, Melanie A.; Laqua, Patricia S.

    2012-01-01

    Objective: To characterize perceived barriers to healthful eating in a sample of obese, treatment-seeking adults and to examine whether changes in barriers are associated with energy intake and body weight. Design: Observational study based on findings from a randomized, controlled behavioral weight-loss trial. Participants: Participants were 113…

  4. Evaluation of Residential Consumers Knowledge of Wireless Network Security and Its Correlation with Identity Theft

    ERIC Educational Resources Information Center

    Kpaduwa, Fidelis Iheanyi

    2010-01-01

    This quantitative correlational research study evaluated residential consumers' knowledge of wireless network security and its relationship with identity theft. Data analysis was based on a sample of 254 randomly selected students. All the study participants completed a survey questionnaire designed to measure their knowledge of…

  5. Relationships between Teacher Organizational Commitment, Psychological Hardiness and Some Demographic Variables in Turkish Primary Schools

    ERIC Educational Resources Information Center

    Sezgin, Ferudun

    2009-01-01

    Purpose: The purpose of this paper is to examine the relationships between teachers' organizational commitment perceptions and both their psychological hardiness and some demographic variables in a sample of Turkish primary schools. Design/methodology/approach: A total of 405 randomly selected teachers working at primary schools in Ankara…

  6. Psychosocial Determinants of Conflict-Handling Behaviour of Workers in Oil Sector in Nigeria

    ERIC Educational Resources Information Center

    Bankole, Akanji Rafiu

    2011-01-01

    The study examined the joint and relative influence of three psychosocial factors: Emotional intelligence, communication skill and interpersonal skill on conflict-handling behaviour of oil workers in Nigeria. Survey research design was adopted and a sample of 610 workers was randomly selected from oil companies across the country. Data were…

  7. Review of Estimation Methods for Landline and Cell Phone Surveys

    ERIC Educational Resources Information Center

    Arcos, Antonio; del Mar Rueda, María; Trujillo, Manuel; Molina, David

    2015-01-01

    The rapid proliferation of cell phone use and the accompanying decline in landline service in recent years have resulted in substantial potential for coverage bias in landline random-digit-dial telephone surveys, which has led to the implementation of dual-frame designs that incorporate both landline and cell phone samples. Consequently,…
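
    As general background on dual-frame estimation (standard survey methodology, not a claim about this particular article), estimates from a landline frame A and a cell frame B are often combined with a Hartley-type composite estimator:

        \hat{Y} = \hat{Y}_a + \theta\,\hat{Y}_{ab}^{(A)} + (1 - \theta)\,\hat{Y}_{ab}^{(B)} + \hat{Y}_b, \qquad 0 \le \theta \le 1,

    where \hat{Y}_a and \hat{Y}_b are estimated from the landline-only and cell-only domains, \hat{Y}_{ab}^{(A)} and \hat{Y}_{ab}^{(B)} are overlap-domain estimates from each frame, and the compositing factor \theta is typically chosen to minimize variance.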

  8. Rigorously testing multialternative decision field theory against random utility models.

    PubMed

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
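
    The shared mechanism of sequential sampling models such as MDFT, the accumulation of noisy momentary preferences until a threshold is crossed, can be illustrated with a generic toy accumulator. The sketch below omits MDFT's lateral-inhibition feedback matrix, and its attribute values, decay, noise, and threshold are invented; it is not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # three options described on two attributes (rows = options); invented values
        M = np.array([[0.8, 0.3],
                      [0.4, 0.7],
                      [0.6, 0.5]])

        def simulate_choice(decay=0.05, noise=0.05, threshold=1.5, max_steps=10_000):
            """Generic preference accumulator with stochastic attention switching."""
            p = np.zeros(3)
            for _ in range(max_steps):
                attr = rng.integers(2)               # attend to one attribute per step
                v = M[:, attr] - M[:, attr].mean()   # comparative valence at this step
                p = (1 - decay) * p + v + rng.normal(0, noise, size=3)
                if p.max() >= threshold:             # stop once a preference wins
                    break
            return int(p.argmax())

        choices = [simulate_choice() for _ in range(2_000)]
        print("choice shares:", np.bincount(choices, minlength=3) / len(choices))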

  9. The impact of a prevention delivery system on perceived social capital: the PROSPER project.

    PubMed

    Chilenski, Sarah M; Ang, Patricia M; Greenberg, Mark T; Feinberg, Mark E; Spoth, Richard

    2014-04-01

    The current study examined the impact of the PROSPER delivery system for evidence-based prevention programs on multiple indicators of social capital in a rural and semi-rural community sample. Utilizing a randomized blocked design, 317 individuals in 28 communities across two states were interviewed at three time points over the course of 2.5 years. Bridging, linking, and the public life skills forms of social capital were assessed via community members' and leaders' reports on the perceptions of school functioning and the Cooperative Extension System, collaboration among organizations, communication and collaboration around youth problems, and other measures. Longitudinal mixed model results indicate significant improvements in some aspects of bridging and linking social capital in PROSPER intervention communities. Given the strength of the longitudinal and randomized research design, results advance prevention science by suggesting that community collaborative prevention initiatives can significantly impact community social capital in a rural and semi-rural sample. Future research should further investigate changes in social capital in different contexts and how changes in social capital relate to other intervention effects.

  10. Improvement program for polycarbonate capacitors. [hermetically sealed, and ac wound]

    NASA Technical Reports Server (NTRS)

    Bailey, R. R.; Waterman, K. D.

    1973-01-01

    Hermetically sealed, wound, AC, polycarbonate capacitors incorporating design improvements recommended in a previous study were designed and built. A 5000 hour, 400 Hz ac life test was conducted using 384 of these capacitors to verify the adequacy of the design improvements. The improvements incorporated in the capacitors designed for this program eliminated the major cause of failure found in the preceding work, termination failure. A failure cause not present in the previous test became significant in this test with capacitors built from one lot of polycarbonate film. The samples from this lot accounted for 25 percent of the total test complement. Analyses of failed samples showed that the film had an excessive solvent content. This solvent problem was found in 37 of the total 46 failures which occurred in this test. The other nine were random failures resulting from causes such as seal leaks, foreign particles, and possibly wrinkles.

  11. Baseline adjustments for binary data in repeated cross-sectional cluster randomized trials.

    PubMed

    Nixon, R M; Thompson, S G

    2003-09-15

    Analysis of covariance models, which adjust for a baseline covariate, are often used to compare treatment groups in a controlled trial in which individuals are randomized. Such analysis adjusts for any baseline imbalance and usually increases the precision of the treatment effect estimate. We assess the value of such adjustments in the context of a cluster randomized trial with repeated cross-sectional design and a binary outcome. In such a design, a new sample of individuals is taken from the clusters at each measurement occasion, so that baseline adjustment has to be at the cluster level. Logistic regression models are used to analyse the data, with cluster level random effects to allow for different outcome probabilities in each cluster. We compare the estimated treatment effect and its precision in models that incorporate a covariate measuring the cluster level probabilities at baseline and those that do not. In two data sets, taken from a cluster randomized trial in the treatment of menorrhagia, the value of baseline adjustment is only evident when the number of subjects per cluster is large. We assess the generalizability of these findings by undertaking a simulation study, and find that increased precision of the treatment effect requires both large cluster sizes and substantial heterogeneity between clusters at baseline, but baseline imbalance arising by chance in a randomized study can always be effectively adjusted for. Copyright 2003 John Wiley & Sons, Ltd.
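
    A small simulation can illustrate the paper's finding that cluster-level baseline adjustment pays off when clusters are large and heterogeneous. The sketch below uses invented parameters and a simple cluster-summary regression on log-odds rather than the authors' random-effects logistic model.

        import numpy as np

        rng = np.random.default_rng(1)

        def logit(p):
            return np.log(p / (1 - p))

        def expit(x):
            return 1 / (1 + np.exp(-x))

        def one_trial(n_clusters=20, m=200, sd_cluster=0.5, effect=0.4):
            """Repeated cross-sectional cluster trial: fresh samples at baseline and follow-up."""
            u = rng.normal(0, sd_cluster, n_clusters)        # cluster-level random effects
            arm = np.repeat([0, 1], n_clusters // 2)         # allocation by cluster
            y0 = rng.binomial(m, expit(-0.5 + u)) / m                  # baseline proportions
            y1 = rng.binomial(m, expit(-0.5 + u + effect * arm)) / m   # follow-up proportions
            unadj = logit(y1)[arm == 1].mean() - logit(y1)[arm == 0].mean()
            X = np.column_stack([np.ones(n_clusters), arm, logit(y0)])  # baseline-adjusted model
            adj = np.linalg.lstsq(X, logit(y1), rcond=None)[0][1]
            return unadj, adj

        res = np.array([one_trial() for _ in range(2_000)])
        print("SD of treatment effect estimate: unadjusted=%.3f, adjusted=%.3f"
              % (res[:, 0].std(), res[:, 1].std()))

    With these invented parameters (large clusters, substantial between-cluster heterogeneity) the adjusted estimate should show a clearly smaller spread; shrinking m or sd_cluster weakens the gain, mirroring the paper's conclusion.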

  12. Attitudes towards smoking restrictions and tobacco advertisement bans in Georgia.

    PubMed

    Bakhturidze, George D; Mittelmark, Maurice B; Aarø, Leif E; Peikrishvili, Nana T

    2013-11-25

    This study aims to provide data on the public level of support for restricting smoking in public places and banning tobacco advertisements. The survey used a nationally representative multistage sampling design, with sampling strata defined by region (sampling quotas proportional to size) and substrata defined by urban/rural and mountainous/lowland settlement; within substrata, census enumeration districts were randomly sampled, within which households were randomly sampled, within which a randomly selected respondent was interviewed. The setting was the country of Georgia, population 4.7 million, located in the Caucasus region of Eurasia. One household member aged between 13 and 70 was selected as interviewee. In households with more than one age-eligible person, selection was carried out at random. Of 1588 persons selected, 14 refused to participate and interviews were conducted with 915 women and 659 men. Respondents were interviewed about their level of agreement with eight possible smoking restrictions/bans, used to calculate a single dichotomous (agree/do not agree) opinion indicator. The level of agreement with restrictions was analysed in bivariate and multivariate analyses by age, gender, education, income and tobacco use status. Overall, 84.9% of respondents indicated support for smoking restrictions and tobacco advertisement bans. In all demographic segments, including tobacco users, the majority of respondents indicated agreement with restrictions, ranging from a low of 51% in the 13-25 age group to a high of 98% in the 56-70 age group. Logistic regression with all demographic variables entered showed that agreement with restrictions was higher with age, and was significantly higher among never smokers as compared to daily smokers. Georgian public opinion is normatively supportive of more stringent tobacco-control measures in the form of smoking restrictions and tobacco advertisement bans.

  13. Sampling design for groundwater solute transport: Tests of methods and analysis of Cape Cod tracer test data

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.

    1991-01-01

    Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.

  14. Sociodemographic Correlates of Behavioral Problems Among Rural Chinese Schoolchildren

    PubMed Central

    Feng, Hui; Liu, Jianghong; Wang, Ying; He, Guoping

    2014-01-01

    Objective: To investigate the problem behaviors of children from grades 4–6 and associated factors in the rural Hunan province of China. Design and Sample: Randomized cluster sampling in 3 rural areas of the Hunan province was used. 435 subjects were randomly selected from grades 4–6. Measures: A researcher-designed questionnaire was used to find influential demographic, parental, and socioeconomic factors. The prediction test of problem children (PPCT) was used to assess problem behaviors. Results: The prevalence of the early child problem behaviors in our sample was 17.44%. Associated factors include gender, willingness to attend school, parents’ expectations of the children’s educational degree, parents working outside the home (left-behind children), and children’s feeling of their parents’ understanding of them. Conclusions: The prevalence of children with problem behaviors was higher in rural areas in Hunan than in China as a whole. This may be partly explained by the fact that parents must often work in the cities and leave their children behind at home, increasing the chances that those children develop behavioral problems. This phenomenon also applies in other developing countries, making it a public health concern. Therefore, there is a need to prevent problem behaviors through collaboration among families, schools, and society. PMID:21736608

  15. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Summary: Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  16. Publication of comparative effectiveness research has not increased in high-impact medical journals, 2004-2013.

    PubMed

    Hester, Laura L; Poole, Charles; Suarez, Elizabeth A; Der, Jane S; Anderson, Olivia G; Almon, Kathryn G; Shirke, Avanti V; Brookhart, M Alan

    2017-04-01

    To explore the impact of increasing interest and investment in patient-centered research, this study sought to describe patterns of comparative effectiveness research (CER) and patient-reported outcomes (PROs) in pharmacologic intervention studies published in widely read medical journals from 2004-2013. We identified 2335 articles published in five widely read medical journals from 2004-2013 with ≥1 intervention meeting the US Food and Drug Administration's definitions for a drug, biologic, or vaccine. Six trained reviewers extracted characteristics from a 20% random sample of articles (468 studies). We calculated the proportion of studies with CER and PROs. Trends were summarized using locally-weighted means and 95% confidence intervals. Of the 468 sampled studies, 30% used CER designs and 33% assessed PROs. The proportion of studies using CER designs did not meaningfully increase over the study period. However, we observed an increase in the use of PROs. Among pharmacological intervention studies published in widely read medical journals from 2004-2013, we identified no increase in CER. Randomized, placebo-controlled trials continue to be the dominant study design for assessing pharmacologic interventions. Increasing trends in PRO use may indicate greater acceptance of these outcomes as evidence for clinical benefit. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Methodological Reporting of Randomized Trials in Five Leading Chinese Nursing Journals

    PubMed Central

    Shi, Chunhu; Tian, Jinhui; Ren, Dan; Wei, Hongli; Zhang, Lihuan; Wang, Quan; Yang, Kehu

    2014-01-01

    Background: Randomized controlled trials (RCTs) are not always well reported, especially in terms of their methodological descriptions. This study aimed to investigate the adherence of methodological reporting to CONSORT and explore associated trial-level variables in the Chinese nursing care field. Methods: In June 2012, we identified RCTs published in five leading Chinese nursing journals and included trials with details of randomization methods. The quality of methodological reporting was measured through the methods section of the CONSORT checklist, and the overall CONSORT methodological items score was calculated and expressed as a percentage. Meanwhile, we hypothesized that some general and methodological characteristics were associated with reporting quality and conducted a regression with these data to explore the correlation. The descriptive and regression statistics were calculated via SPSS 13.0. Results: In total, 680 RCTs were included. The overall CONSORT methodological items score was 6.34±0.97 (Mean ± SD). No RCT reported descriptions of and changes in “trial design,” changes in “outcomes” and “implementation,” or descriptions of the similarity of interventions for “blinding.” Poor reporting was found in detailing the “settings of participants” (13.1%), “type of randomization sequence generation” (1.8%), calculation methods of “sample size” (0.4%), explanation of any interim analyses and stopping guidelines for “sample size” (0.3%), “allocation concealment mechanism” (0.3%), additional analyses in “statistical methods” (2.1%), and targeted subjects and methods of “blinding” (5.9%). More than 50% of trials described randomization sequence generation, the eligibility criteria of “participants,” “interventions,” and definitions of the “outcomes” and “statistical methods.” The regression analysis found that publication year and ITT analysis were weakly associated with CONSORT score. Conclusions: The completeness of methodological reporting of RCTs in the Chinese nursing care field is poor, especially with regard to the reporting of trial design, changes in outcomes, sample size calculation, allocation concealment, blinding, and statistical methods. PMID:25415382

  18. Study protocol for a cluster randomized trial of the Community of Voices choir intervention to promote the health and well-being of diverse older adults.

    PubMed

    Johnson, Julene K; Nápoles, Anna M; Stewart, Anita L; Max, Wendy B; Santoyo-Olsson, Jasmine; Freyre, Rachel; Allison, Theresa A; Gregorich, Steven E

    2015-10-13

    Older adults are the fastest growing segment of the United States population. There is an immediate need to identify novel, cost-effective community-based approaches that promote health and well-being for older adults, particularly those from diverse racial/ethnic and socioeconomic backgrounds. Because choral singing is multi-modal (requires cognitive, physical, and psychosocial engagement), it has the potential to improve health outcomes across several dimensions to help older adults remain active and independent. The purpose of this study is to examine the effect of a community choir program (Community of Voices) on health and well-being and to examine its costs and cost-effectiveness in a large sample of diverse, community-dwelling older adults. In this cluster randomized controlled trial, diverse adults age 60 and older were enrolled at Administration on Aging-supported senior centers and completed baseline assessments. The senior centers were randomly assigned to either start the choir immediately (intervention group) or wait 6 months to start (control). Community of Voices is a culturally tailored choir program delivered at the senior centers by professional music conductors that reflects three components of engagement (cognitive, physical, and psychosocial). We describe the nature of the study including the cluster randomized trial study design, sampling frame, sample size calculation, methods of recruitment and assessment, and primary and secondary outcomes. The study involves conducting a randomized trial of an intervention as delivered in "real-world" settings. The choir program was designed using a novel translational approach that integrated evidence-based research on the benefits of singing for older adults, community best practices related to community choirs for older adults, and the perspective of the participating communities. The practicality and relatively low cost of the choir intervention means it can be incorporated into a variety of community settings and adapted to diverse cultures and languages. If successful, this program will be a practical and acceptable community-based approach for promoting health and well-being of older adults. ClinicalTrials.gov NCT01869179 registered 9 January 2013.

  19. Estimation of a partially linear additive model for data from an outcome-dependent sampling design with a continuous outcome

    PubMed Central

    Tan, Ziwen; Qin, Guoyou; Zhou, Haibo

    2016-01-01

    Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both statistical literature and biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and other covariates in a nonlinear non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyls exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for PLAM compared with that from a traditional simple random sampling design with the same sample size. Data from the above-mentioned study are analyzed to illustrate the proposed method. PMID:27006375
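
    For orientation, a partially linear additive model has the generic form (notation generic, not necessarily the authors'):

        Y = X^{\top}\beta + \sum_{j=1}^{q} g_j(Z_j) + \varepsilon,

    with a parametric linear component for covariates X and unspecified smooth functions g_j, estimated non-parametrically, for covariates Z_1, ..., Z_q.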

  20. Spatial design and strength of spatial signal: Effects on covariance estimation

    USGS Publications Warehouse

    Irvine, Kathryn M.; Gitelman, Alix I.; Hoeting, Jennifer A.

    2007-01-01

    In a spatial regression context, scientists are often interested in a physical interpretation of components of the parametric covariance function. For example, spatial covariance parameter estimates in ecological settings have been interpreted to describe spatial heterogeneity or “patchiness” in a landscape that cannot be explained by measured covariates. In this article, we investigate the influence of the strength of spatial dependence on maximum likelihood (ML) and restricted maximum likelihood (REML) estimates of covariance parameters in an exponential-with-nugget model, and we also examine these influences under different sampling designs—specifically, lattice designs and more realistic random and cluster designs—at differing intensities of sampling (n=144 and 361). We find that neither ML nor REML estimates perform well when the range parameter and/or the nugget-to-sill ratio is large—ML tends to underestimate the autocorrelation function and REML produces highly variable estimates of the autocorrelation function. The best estimates of both the covariance parameters and the autocorrelation function come under the cluster sampling design and large sample sizes. As a motivating example, we consider a spatial model for stream sulfate concentration.
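
    For reference, the exponential-with-nugget covariance model studied here has a simple closed form. The minimal sketch below uses generic parameter names, not the authors' notation, and one common convention for the nugget-to-sill ratio.

        import numpy as np

        def exp_nugget_cov(h, psill=1.0, range_=0.3, nugget=0.2):
            """Exponential-with-nugget: C(0) = psill + nugget; C(h) = psill*exp(-h/range_) for h > 0."""
            h = np.asarray(h, dtype=float)
            return psill * np.exp(-h / range_) + nugget * (h == 0)

        lags = np.array([0.0, 0.1, 0.3, 0.9])
        print("covariances:", exp_nugget_cov(lags))
        print("nugget-to-sill ratio:", 0.2 / (0.2 + 1.0))  # nugget / (nugget + partial sill)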

  1. Healthy Moms, a randomized trial to promote and evaluate weight maintenance among obese pregnant women: study design and rationale

    PubMed Central

    Vesco, Kimberly K.; Karanja, Njeri; King, Janet C.; Gillman, Matthew W.; Perrin, Nancy; McEvoy, Cindy; Eckhardt, Cara; Smith, K. Sabina; Stevens, Victor J.

    2012-01-01

    Background Obesity and excessive weight gain during pregnancy are associated with adverse pregnancy outcomes. Observational studies suggest that minimal or no gestational weight gain (GWG) may minimize the risk of adverse pregnancy outcomes for obese women. Objective This report describes the design of Healthy Moms, a randomized trial testing a weekly, group-based, weight management intervention designed to help limit GWG to 3% of weight (measured at the time of randomization) among obese pregnant women (BMI ≥30 kg/m2). Participants are randomized at 10–20 weeks gestation to either the intervention or a single dietary advice control condition. Primary Outcomes The study is powered for the primary outcome of total GWG, yielding a target sample size of 160 women. Additional secondary outcomes include weight change between randomization and one-year postpartum and proportion of infants with birth weight > 90th percentile for gestational age. Statistical analyses will be based on intention-to-treat. Methods Following randomization, all participants receive a 45-minute dietary consultation. They are encouraged to follow the Dietary Approaches to Stop Hypertension diet without sodium restriction. Intervention group participants receive an individualized calorie intake goal, a second individual counseling session and attend weekly group meetings until they give birth. Research staff assess all participants at 34-weeks gestation and at 2-weeks and one-year postpartum with their infants. Summary The Healthy Moms study is testing weight management techniques that have been used with non-pregnant adults. We aim to help obese women limit GWG to improve their long-term health and the health of their offspring. PMID:22465256

  2. Focus on Function – a randomized controlled trial comparing two rehabilitation interventions for young children with cerebral palsy

    PubMed Central

    Law, Mary; Darrah, Johanna; Pollock, Nancy; Rosenbaum, Peter; Russell, Dianne; Walter, Stephen D; Petrenchik, Theresa; Wilson, Brenda; Wright, Virginia

    2007-01-01

    Background Children with cerebral palsy receive a variety of long-term physical and occupational therapy interventions to facilitate development and to enhance functional independence in movement, self-care, play, school activities and leisure. Considerable human and financial resources are directed at the "intervention" of the problems of cerebral palsy, although the available evidence supporting current interventions is inconclusive. A considerable degree of uncertainty remains about the appropriate therapeutic approaches to manage the habilitation of children with cerebral palsy. The primary objective of this project is to conduct a multi-site randomized clinical trial to evaluate the efficacy of a task/context-focused approach compared to a child-focused remediation approach in improving performance of functional tasks and mobility, increasing participation in everyday activities, and improving quality of life in children 12 months to 5 years of age who have cerebral palsy. Method/Design A multi-centred randomized controlled trial research design will be used. Children will be recruited from a representative sample of children attending publicly-funded regional children's rehabilitation centers serving children with disabilities in Ontario and Alberta in Canada. Target sample size is 220 children with cerebral palsy aged 12 months to 5 years at recruitment date. Therapists are randomly assigned to deliver either a context-focused approach or a child-focused approach. Children follow their therapist into their treatment arm. Outcomes will be evaluated at baseline, after 6 months of treatment and at a 3-month follow-up period. Outcomes represent the components of the International Classification of Functioning, Disability and Health, including body function and structure (range of motion), activities (performance of functional tasks, motor function), participation (involvement in formal and informal activities), and environment (parent perceptions of care, parental empowerment). Discussion This paper presents the background information, design and protocol for a randomized controlled trial comparing a task/context-focused approach to a child-focused remediation approach in improving functional outcomes for young children with cerebral palsy. Trial registration [clinical trial registration #: NCT00469872] PMID:17900362

  3. The importance of replication in wildlife research

    USGS Publications Warehouse

    Johnson, D.H.

    2002-01-01

    Wildlife ecology and management studies have been widely criticized for deficiencies in design or analysis. Manipulative experiments--with controls, randomization, and replication in space and time--provide powerful ways of learning about natural systems and establishing causal relationships, but such studies are rare in our field. Observational studies and sample surveys are more common; they also require appropriate design and analysis. More important than the design and analysis of individual studies is metareplication: replication of entire studies. Similar conclusions obtained from studies of the same phenomenon conducted under widely differing conditions will give us greater confidence in the generality of those findings than would any single study, however well designed and executed.

  4. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
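
    The sensitivity behavior described above can be reproduced numerically from a standard solution of the one-dimensional advection-dispersion equation. The sketch below uses only the leading term of the familiar continuous-injection solution, C/C0 = (1/2) erfc[(x - vt) / (2 sqrt(Dt))], neglecting the second boundary term, and central finite differences; the parameter values are arbitrary.

        import numpy as np
        from scipy.special import erfc

        def conc(x, t, v, D):
            """Leading term of the 1-D advection-dispersion solution (continuous source)."""
            return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

        def sensitivity(x, t, v, D, param, eps=1e-6):
            """Central-difference sensitivity dC/dparam at (x, t)."""
            if param == "v":
                return (conc(x, t, v + eps, D) - conc(x, t, v - eps, D)) / (2 * eps)
            return (conc(x, t, v, D + eps) - conc(x, t, v, D - eps)) / (2 * eps)

        x, v, D = 10.0, 0.5, 0.05   # arbitrary consistent units
        print("   t     dC/dv     dC/dD")
        for t in np.linspace(4.0, 40.0, 10):
            print(f"{t:5.1f} {sensitivity(x, t, v, D, 'v'):9.4f} "
                  f"{sensitivity(x, t, v, D, 'D'):9.4f}")

    Tabulating the dC/dD column over time traces the S-shaped dispersion sensitivity curve the paper refers to, changing sign as the front passes, while dC/dv peaks during passage of the solute front.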

  5. Preoperative endoscopic versus percutaneous transhepatic biliary drainage in potentially resectable perihilar cholangiocarcinoma (DRAINAGE trial): design and rationale of a randomized controlled trial.

    PubMed

    Wiggers, Jimme K; Coelen, Robert J S; Rauws, Erik A J; van Delden, Otto M; van Eijck, Casper H J; de Jonge, Jeroen; Porte, Robert J; Buis, Carlijn I; Dejong, Cornelis H C; Molenaar, I Quintus; Besselink, Marc G H; Busch, Olivier R C; Dijkgraaf, Marcel G W; van Gulik, Thomas M

    2015-02-14

    Liver surgery in perihilar cholangiocarcinoma (PHC) is associated with high postoperative morbidity because the tumor typically causes biliary obstruction. Preoperative biliary drainage is used to create a safer environment prior to liver surgery, but biliary drainage may be harmful when severe drainage-related complications deteriorate the patients' condition or increase the risk of postoperative morbidity. Biliary drainage can cause cholangitis/cholecystitis, pancreatitis, hemorrhage, portal vein thrombosis, bowel wall perforation, or dehydration. Two methods of preoperative biliary drainage are mostly applied: endoscopic biliary drainage, which is currently used in most regional centers before referring patients for surgical treatment, and percutaneous transhepatic biliary drainage. Both methods are associated with severe drainage-related complications, but two small retrospective series found a lower incidence of preoperative complications after percutaneous drainage compared to endoscopic drainage (18-25% versus 38-60%, respectively). The present study randomizes patients with potentially resectable PHC and biliary obstruction between preoperative endoscopic and percutaneous transhepatic biliary drainage. The study is a multi-center trial with an "all-comers" design, randomizing patients between endoscopic and percutaneous transhepatic biliary drainage. All patients selected to potentially undergo a major liver resection for presumed PHC are eligible for inclusion in the study provided that the biliary system in the future liver remnant is obstructed (even if they underwent previous inadequate endoscopic drainage). Primary outcome measure is the total number of severe preoperative complications between randomization and exploratory laparotomy. The study is designed to detect superiority of percutaneous drainage: a provisional sample size of 106 patients is required to detect a relative decrease of 50% in the number of severe preoperative complications (alpha = 0.95; beta = 0.8). Interim analysis after inclusion of 53 patients (50%) will provide the definitive sample size. Secondary outcome measures encompass the success of biliary drainage, quality of life, and postoperative morbidity and mortality. The DRAINAGE trial is designed to identify a difference in the number of severe drainage-related complications after endoscopic and percutaneous transhepatic biliary drainage in patients selected to undergo a major liver resection for perihilar cholangiocarcinoma. Netherlands Trial Register [ NTR4243 , 11 October 2013].

  6. The Resources for Enhancing Alzheimer’s Caregiver Health (REACH): Project Design and Baseline Characteristics

    PubMed Central

    Wisniewski, Stephen R.; Belle, Steven H.; Marcus, Susan M.; Burgio, Louis D.; Coon, David W.; Ory, Marcia G.; Burns, Robert; Schulz, Richard

    2008-01-01

    The Resources for Enhancing Alzheimer’s Caregiver Health (REACH) project was designed to test promising interventions for enhancing family caregiving for persons with dementia. The purpose of this article is to describe the research design, interventions, and outcome measures used in REACH and to characterize the sample recruited for the study. Nine interventions and 2 control conditions were implemented at 6 sites; 1,222 dyads were randomly assigned to an intervention or a control condition. The caregiver sample was 18.6% male with an average age of 62.3 years (56% Caucasian, 24% Black, and 19% Hispanic). Caregivers reported high levels of depressive symptoms and moderate burden. Care recipients were older, with a mean age of 79, and were moderately to severely impaired with mean Mini-Mental State Exam scores of 13/30. PMID:14518801

  7. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
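
    A minimal sketch of recommendations (1), (3), and (5) in practice: a permuted-block schedule generated from a fixed seed can be reviewed before the trial and regenerated later for verification. The seed and block size below are hypothetical choices, not taken from the article.

        import random

        def permuted_block_schedule(n_blocks, block_size=4, seed=20100601):
            """Reproducible 1:1 allocation list in permuted blocks.

            A dedicated, seeded RNG (rather than an on-demand generator)
            lets the schedule be regenerated and verified after the trial.
            """
            rng = random.Random(seed)
            schedule = []
            for _ in range(n_blocks):
                block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
                rng.shuffle(block)
                schedule.extend(block)
            return schedule

        print(permuted_block_schedule(n_blocks=3))  # same output on every run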

  8. Randomly picked cosmid clones overlap the pyrB and oriC gap in the physical map of the E. coli chromosome.

    PubMed Central

    Knott, V; Rees, D J; Cheng, Z; Brownlee, G G

    1988-01-01

    Sets of overlapping cosmid clones generated by random sampling and fingerprinting methods complement data at pyrB (96.5') and oriC (84') in the published physical map of E. coli. A new cloning strategy using sheared DNA, and a low copy, inducible cosmid vector were used in order to reduce bias in libraries, in conjunction with micro-methods for preparing cosmid DNA from a large number of clones. Our results are relevant to the design of the best approach to the physical mapping of large genomes. PMID:2834694

  9. Nonuniform sampling theorems for random signals in the linear canonical transform domain

    NASA Astrophysics Data System (ADS)

    Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang

    2018-06-01

    Nonuniform sampling can be encountered in various practical processes because of random events or a poor timebase. The analysis and applications of nonuniform sampling for deterministic signals related to the linear canonical transform (LCT) have been well studied, but so far no papers have been published on nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, reconstruction theorems for random signals from various nonuniform samples associated with the LCT are derived. Finally, simulation results are presented to verify the accuracy of the sampling theorems. Potential practical applications of nonuniform sampling for random signals are also discussed.
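
    For context, the linear canonical transform referred to above is commonly defined, for a parameter matrix $(a, b; c, d)$ with $ad - bc = 1$ and $b \neq 0$, as (conventions vary across papers; this form is assumed here):

        $$F_A(u) = \frac{1}{\sqrt{j 2 \pi b}} \int_{-\infty}^{\infty} f(t)\, e^{\frac{j}{2b}\left(a t^2 - 2 u t + d u^2\right)}\, dt$$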

  10. Time-dependent classification accuracy curve under marker-dependent sampling.

    PubMed

    Zhu, Zhaoyin; Wang, Xiaofei; Saha-Chaudhuri, Paramita; Kosinski, Andrzej S; George, Stephen L

    2016-07-01

    Evaluating the classification accuracy of a candidate biomarker signaling the onset of disease or disease status is essential for medical decision making. A good biomarker would accurately identify the patients who are likely to progress or die at a particular time in the future or who are in urgent need of active treatment. To assess the performance of a candidate biomarker, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are commonly used. In many cases, the standard simple random sampling (SRS) design used for biomarker validation studies is costly and inefficient. In order to improve the efficiency and reduce the cost of biomarker validation, marker-dependent sampling (MDS) may be used. In an MDS design, the selection of patients whose true survival times are ascertained depends on the result of a biomarker assay. In this article, we introduce a nonparametric estimator for the time-dependent AUC under an MDS design. The consistency and asymptotic normality of the proposed estimator are established. Simulations show the unbiasedness of the proposed estimator and a significant efficiency gain of the MDS design over the SRS design. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
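
    One common cumulative/dynamic definition of the time-dependent AUC, assumed here for orientation (the paper may use a variant), is the probability that a subject who fails by time $t$ has a higher marker value $M$ than a subject who survives past $t$:

        $$\mathrm{AUC}(t) = P\left(M_i > M_j \mid T_i \le t,\; T_j > t\right)$$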

  11. Motivating smokers at outdoor public smoking hotspots to have a quit attempt with a nicotine replacement therapy sample: study protocol for a randomized controlled trial.

    PubMed

    Cheung, Yee Tak Derek; Leung, Jessica Pui Kei; Cheung, Chelsia Ka Ching; Li, William Ho Cheung; Wang, Man Ping; Lam, Tai Hing

    2016-07-26

    About half of the daily smokers in Hong Kong have never tried and have no intention to quit smoking. More than one-third (37.9%) of daily smokers have attempted to quit but failed. Nicotine replacement therapy (NRT) is a safe and effective pharmacotherapy that increases abstinence by reducing withdrawal symptoms during the early stage of smoking abstinence. However, the prevalence of NRT use in Hong Kong is lower than in most developed countries. The proposed study aims to assess the effectiveness of providing free NRT samples to smokers in increasing quit attempts and the quit rate. Trained university undergraduate students acting as ambassadors will invite smokers at outdoor public smoking hotspots to participate in the randomized controlled trial, in which eligible smokers will be randomized to receive a 1-week free NRT sample and medication counselling (intervention) or advice to purchase NRT on their own (control). The primary outcome is self-reported quit attempts (no smoking for at least 24 hours) in the past 30 days at 1-month and 3-month telephone follow-up. The findings will inform the effectiveness of delivering free NRT samples at outdoor public smoking hotspots to increase quit attempts and abstinence. The study will also provide information on smokers' adherence to the NRT sample, side effects, and safety issues related to its usage. This will improve the design of a large trial to test the effect of the NRT sample. ClinicalTrials.gov NCT02491086 . Registered on 7 July 2015.

  12. A Study of Reliability of Marking and Absolute Grading in Secondary Schools

    ERIC Educational Resources Information Center

    Abdul Gafoor, K.; Jisha, P.

    2014-01-01

    Using a non-experimental comparative group design in a sample consisting of 100 English teachers randomly selected from 30 secondary schools of a district of Kerala and assigning fifty teachers to groups for marking and grading, this study compares inter and intra-individual reliability in marking and absolute grading. Studying (1) the in marking…

  13. Creating an Awareness of Alternatives to Psycho-Social Situations in Elementary School Children.

    ERIC Educational Resources Information Center

    LeCapitaine, John E.

    This study was designed to determine the effectiveness of 18 selected lessons from Dupont's Toward Affective Development (TAD) program for creating an awareness in students of alternatives to psycho-social situations. Using a sample of 60 subjects randomly selected from 111 sixth-grade students in northwest Wisconsin, two experimental and two…

  14. Impact of Sex Education in Kogi State, Nigeria

    ERIC Educational Resources Information Center

    Sule, H. A.; Akor, J. A.; Toluhi, O. J.; Suleiman, R. O.; Akpihi, L.; Ali, O. U.

    2015-01-01

    The focus of this study was to investigate the impact of family sex education in secondary schools on students in Kogi State, Nigeria. The descriptive survey design was used for the study. A total of 1,960 secondary school students were drawn by stratified random sampling from 40 schools within Kogi State, Nigeria. Three research questions were…

  15. A Comparative Study of Factors Influencing Male and Female Lecturers' Job Satisfaction in Ghanaian Higher Education

    ERIC Educational Resources Information Center

    Amos, Patricia Mawusi; Acquah, Sakina; Antwi, Theresa; Adzifome, Nixon Saba

    2015-01-01

    The study sought to compare factors influencing male and female lecturers' job satisfaction. Cross-sectional survey designs employing both quantitative and qualitative approaches were adopted for the study. Simple random sampling was used to select 163 lecturers from the four oldest public universities in Ghana. Celep's (2000) Organisational…

  16. Selling Internet Gambling: Advertising, New Media and the Content of Poker Promotion

    ERIC Educational Resources Information Center

    McMullan, John L.; Kervin, Melissa

    2012-01-01

    This study examines the web design and engineering, advertising and marketing, and pedagogical features present at a random sample of 71 international poker sites obtained from the Casino City directory in the summer of 2009. We coded for 22 variables related to access, appeal, player protection, customer services, on-site security, use of images,…

  17. Measuring Organizational Learning Capability in Indian Managers and Establishing Firm Performance Linkage: An Empirical Analysis

    ERIC Educational Resources Information Center

    Bhatnagar, Jyotsna

    2006-01-01

    Purpose: The purpose of this research is to measure Organizational Learning Capability (OLC) perception in the managers of public, private and multinational organizations and establish the link between OLC and firm performance. Design/methodology/approach: The data were collected from a sample of 612 managers randomly drawn from Indian industry,…

  18. The Use and Validation of Qualitative Methods Used in Program Evaluation.

    ERIC Educational Resources Information Center

    Plucker, Frank E.

    When conducting a two-year college program review, there are several advantages to supplementing the standard quantitative research approach with qualitative measures. Qualitative research does not depend on a large number of random samples; it uses a flexible design which can be refined as the research is executed, and it generates findings in a…

  19. Teaching Efficacy in the Classroom: Skill Based Training for Teachers' Empowerment

    ERIC Educational Resources Information Center

    Karimzadeh, Mansoureh; Salehi, Hadi; Embi, Mohamed Amin; Nasiri, Mehdi; Shojaee, Mohammad

    2014-01-01

    This study aims to use an experimental research design to enhance teaching efficacy through social-emotional skills training for teachers. The statistical sample comprised 68 elementary teachers (grades 4 and 5) with at least 10 years of teaching experience and a bachelor's degree who were randomly assigned into control (18 female, 16 male) and…

  20. From Policy to Practice: Implementation of Water Policies in Child Care Centers in Connecticut

    ERIC Educational Resources Information Center

    Middleton, Ann E.; Henderson, Kathryn E.; Schwartz, Marlene B.

    2013-01-01

    Objective: Child care policies may contribute to healthy beverage consumption patterns. This study documented availability and accessibility of water and correspondence with state and federal policy and accreditation standards in child care centers. Design: One-day observations were conducted in a random sample of 40 Child and Adult Care Food…

  1. School Performance Trajectories and the Challenges for Principal Succession

    ERIC Educational Resources Information Center

    Lee, Linda C.

    2015-01-01

    Purpose: The purpose of this paper is to use empirical data on new principals to clarify the connection between different succession situations and the challenges their successor principals face. Design/methodology/approach: The study draws on two waves of interview data from a random sample of 16 new elementary school principals in a major urban…

  2. Texas School Survey of Substance Abuse: Grades 7-12. 1992.

    ERIC Educational Resources Information Center

    Liu, Liang Y.; Fredlund, Eric V.

    The 1992 Texas School Survey results for secondary students are based on data collected from a sample of 73,073 students in grades 7 through 12. Students were randomly selected from school districts throughout the state using a multi-stage probability design. The procedure ensured that students living in metropolitan and rural areas of Texas are…

  3. The Williamsburg Charter Survey on Religion and Public Life.

    ERIC Educational Resources Information Center

    Williamsburg Charter Foundation, Washington, DC.

    Findings from a survey designed to gauge how U.S. citizens view the place of religion in public life are discussed. A total of 1,889 adults were interviewed at random by telephone for the cross-sectional sample. Additional interviews were conducted with more than 300 teenagers and with 7 leadership groups representing business, higher education,…

  4. Understanding Sample Surveys: Selective Learning about Social Science Research Methods

    ERIC Educational Resources Information Center

    Currin-Percival, Mary; Johnson, Martin

    2010-01-01

    We investigate differences in what students learn about survey methodology in a class on public opinion presented in two critically different ways: with the inclusion or exclusion of an original research project using a random-digit-dial telephone survey. Using a quasi-experimental design and data obtained from pretests and posttests in two public…

  5. Public School Principals: Opinions and Status. ERS Educator Opinion Poll.

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    The opinion poll of public school administrators reported in this document is part of a series designed to report scientifically collected data that accurately reflects the views and experiences of specific groups of educators. For this study, a total of 3,300 principals were included in the random sample and 1,502 (46 percent) responded. Tables…

  6. Analysis of the Special Studies Program Based on the Interviews of Its Students.

    ERIC Educational Resources Information Center

    Esp, Barbarann; Torelli, Alexis

    The special studies program at Hofstra University is designed for high school graduates applying to the university whose educational backgrounds require a more personalized approach to introductory college work. An attempt is made to minimize the risk of poor academic performance during the first year in college. A random sample of 24 students in…

  7. Effects of the "Positive Action" Program on Indicators of Positive Youth Development among Urban Youth

    ERIC Educational Resources Information Center

    Lewis, Kendra M.; Vuchinich, Samuel; Ji, Peter; DuBois, David L.; Acock, Alan; Bavarian, Niloofar; Day, Joseph; Silverthorn, Naida; Flay, Brian R.

    2016-01-01

    This study evaluated effects of "Positive Action," a school-based social-emotional and character development intervention, on indicators of positive youth development (PYD) among a sample of low-income, ethnic minority youth attending 14 urban schools. The study used a matched-pair, cluster-randomized controlled design at the school…

  8. Detecting a Gender-Related Differential Item Functioning Using Transformed Item Difficulty

    ERIC Educational Resources Information Center

    Abedalaziz, Nabeel; Leng, Chin Hai; Alahmadi, Ahlam

    2014-01-01

    The purpose of the study was to examine gender differences in performance on a multiple-choice mathematical ability test, administered within the context of a high school graduation test designed to match the eleventh-grade curriculum. The transformed item difficulty (TID) method was used to detect gender-related DIF. A random sample of 1400 eleventh…

  9. The Utility of Blended Learning in EFL Reading and Grammar: A Case for Moodle

    ERIC Educational Resources Information Center

    Bataineh, Ruba Fahmi; Mayyas, Mais Barjas

    2017-01-01

    This study examines the effect of Moodle-enhanced instruction on Jordanian EFL students' reading comprehension and grammar performance. The study uses a quasi-experimental, pre- /post-test design. A purposeful sample of 32 students, enrolled in a language requirement course at a Jordanian state university, was randomly divided into an experimental…

  10. Affective Learning Outcomes in Workplace Training: A Test of Synchronous vs. Asynchronous Online Learning Environments

    ERIC Educational Resources Information Center

    Cleveland-Innes, Martha; Ally, Mohamed

    2004-01-01

    Research employing an experimental design pilot-tested two delivery platforms, WebCT™ and vClass™, for the generation of affective learning outcomes in the workplace. Using a sample of volunteer participants in the help-desk industry, participants were randomly assigned to one of the two types of delivery software. Thirty-eight subjects…

  11. Characteristics and Clinical Practices of Marriage and Family Therapists: A National Survey

    ERIC Educational Resources Information Center

    Northey, William F., Jr.

    2002-01-01

    This report presents data from a telephone survey of a randomly selected sample of 292 marriage and family therapists (MFTs) who were Clinical Members of the American Association for Marriage and Family Therapy. The study, designed to better understand the current state of the field of MFT, provides descriptive data on the demographic…

  12. Mind Maps to Modify Lack of Attention among Saudi Kindergarten Children

    ERIC Educational Resources Information Center

    Daghistan, Bulquees Ismail Abdul Majid

    2016-01-01

    This research study aims at investigating the impact of Mind Maps on modifying the lack of attention in Arabic language class among Saudi kindergarten children. To achieve the goals of this study, the researcher used an experimental design with a random sample of children from AlRae'd Kindergarten in Riyadh, Saudi Arabia, for the academic year…

  13. Strengthening Teachers' Abilities to Implement a Vision Health Program in Taiwanese Schools

    ERIC Educational Resources Information Center

    Chang, L. C.; Liao, L. L.; Chen, M. I.; Niu, Y. Z.; Hsieh, P. L.

    2017-01-01

    We designed a school-based, nationwide program called the "New Era in Eye Health" to strengthen teacher training and to examine whether the existence of a government vision care policy influenced teachers' vision care knowledge and students' behavior. Baseline data and 3-month follow-up data were compared. A random sample of teachers (n…

  14. Professor Gender, Age, and "Hotness" in Influencing College Students' Generation and Interpretation of Professor Ratings

    ERIC Educational Resources Information Center

    Sohr-Preston, Sara L.; Boswell, Stefanie S.; McCaleb, Kayla; Robertson, Deanna

    2016-01-01

    A sample of 230 undergraduate psychology students rated their expectations of a bogus professor (who was randomly designated a man or woman and "hot" versus "not hot") based on ratings and comments found on RateMyProfessors.com. Five professor qualities were derived using principal components analysis: dedication,…

  15. Development and Validation of Social Provision Scale on First Year Undergraduate Psychological Adjustment

    ERIC Educational Resources Information Center

    Oluwatomiwo, Oladunmoye Enoch

    2015-01-01

    This study examined the development and validation of the Social Provision Scale on first-year undergraduates' psychological adjustment among institutions in the Ibadan metropolis. The study adopted a descriptive survey design. A sample of 300 participants was randomly selected across institutions in Ibadan. Data were collected using the Social Provision Scale (α = 0.76),…

  16. 4 out of 5 Students Surveyed Would Recommend this Activity (Comparing Chewing Gum Flavor Durations)

    ERIC Educational Resources Information Center

    Richardson, Mary; Rogness, Neal; Gajewski, Byron

    2005-01-01

    This paper describes an interactive activity developed for illustrating hypothesis tests on the mean for paired or matched samples. The activity is extended to illustrate assessing normality, the Wilcoxon signed rank test, Kaplan-Meier survival functions, two-way analysis of variance, and the randomized block design. (Contains 6 tables and 13…

  17. Effectiveness of a Phonological Awareness Training Intervention on Word Recognition Ability of Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Mohammed, Adel Abdulla; Mostafa, Amaal Ahmed

    2012-01-01

    This study describes an action research project designed to improve the word recognition ability of children with Autism Spectrum Disorder. A total of 47 children diagnosed as having Autism Spectrum Disorder using the Autism Spectrum Disorder Evaluation Inventory (Mohammed, 2006) participated in this study. The sample was randomly divided into two…

  18. The Contribution of Information Acquisition and Management Capacity to Administrators' Decision-Making Effectiveness in Tertiary Institutions in South-Western Nigeria

    ERIC Educational Resources Information Center

    Fabunmi, Martins; Erwat, Eseza Akiror

    2008-01-01

    This study investigated through empirical methods the extent to which information acquisition and information management capacity of administrators in tertiary institutions in South-Western Nigeria contributed to their decision-making effectiveness. It adopted the ex post facto survey research design, using the random sampling technique to select…

  19. Randomness, Sample Size, Imagination and Metacognition: Making Judgments about Differences in Data Sets

    ERIC Educational Resources Information Center

    Stack, Sue; Watson, Jane

    2013-01-01

    There is considerable research on the difficulties students have in conceptualising individual concepts of probability and statistics (see for example, Bryant & Nunes, 2012; Jones, 2005). The unit of work developed for the action research project described in this article is specifically designed to address some of these in order to help…

  20. Managing Human Resource Capabilities for Sustainable Competitive Advantage: An Empirical Analysis from Indian Global Organisations

    ERIC Educational Resources Information Center

    Khandekar, Aradhana; Sharma, Anuradha

    2005-01-01

    Purpose: The purpose of this article is to examine the role of human resource capability (HRC) in organisational performance and sustainable competitive advantage (SCA) in Indian global organisations. Design/Methodology/Approach: To carry out the present study, an empirical research on a random sample of 300 line or human resource managers from…

  1. Teaching Effectiveness in Private Higher Education Institutions in Botswana: Analysis of Students' Perceptions

    ERIC Educational Resources Information Center

    Baliyan, Som Pal; Moorad, Fazlur Rehman

    2018-01-01

    This quantitative study analyzed the perceptions of students on teaching effectiveness in private higher education institutions in Botswana. An exploratory and descriptive survey research design was adopted in this study. A valid and reliable questionnaire was used to collect data through a survey of 560 stratified randomly sampled students in…

  2. Coping with Resource Management Challenges in Mumias Sub-County, Kakamega County, Kenya

    ERIC Educational Resources Information Center

    Anyango, Onginjo Rose; Orodho, John Aluko

    2016-01-01

    The gist of the study was to examine the main coping strategies used to manage resources in public secondary schools in Mumias Sub-County, Kakamega County, Kenya. The study was premised on Hunt's (2007) theory of project management. A descriptive survey design was adopted. A combination of purposive and simple random sampling techniques was used…

  3. Truancy and Its Influence on Students' Learning in Dormaa Senior High School

    ERIC Educational Resources Information Center

    Henry, Gyimah; Yelkpieri, Daniel

    2017-01-01

    The study investigated the incidence of truancy among students and its influence on learning in the Dormaa Senior High School. A descriptive survey design was adopted in carrying out the study. The study population consisted of teachers, students, parents and opinion leaders in the study area. Simple random and purposive sampling were used in…

  4. Conditions Restraining the Teaching of Major Nigerian Languages in Secondary School in Ebonyi State, Nigeria

    ERIC Educational Resources Information Center

    Chidi-Ehiem, Ugochi Ijeoma

    2015-01-01

    This descriptive survey was carried out in order to determine the conditions handicapping the teaching of major Nigerian languages in secondary schools in Ebonyi State, Nigeria. A random sample of 953 students and 602 language teachers completed corresponding copies of a questionnaire designed for the study. Out of 1555 copies of questionnaire…

  5. Flipping the Classroom: Embedding Self-Regulated Learning Prompts in Videos

    ERIC Educational Resources Information Center

    Moos, Daniel C.; Bonde, Caitlin

    2016-01-01

    This study examined the effectiveness of embedding self-regulated learning (SRL) prompts in a video designed for the flipped class model. The sample included 32 undergraduate participants who were randomly assigned to one of two conditions: control (video) or experimental (video + SRL prompts). Prior knowledge was measured with a pre-test, SRL was…

  6. Level of Discipline among University Academic Staff as a Correlate of University Development in Nigeria

    ERIC Educational Resources Information Center

    Uhoman, Anyi Mary

    2017-01-01

    This study entitled "Level of Discipline Among University Academic Staff as a Correlate of University Development in Nigeria" adopted the correlation design with a population of 2,301 academic staff purposively selected from four Universities in the North-Central Geo-Political zone of Nigeria. The Stratified Random Sampling Method was…

  7. Focus Group Studies on Food Safety Knowledge, Perceptions, and Practices of School-Going Adolescent Girls in South India

    ERIC Educational Resources Information Center

    Gavaravarapu, Subba Rao M.; Vemula, Sudershan R.; Rao, Pratima; Mendu, Vishnu Vardhana Rao; Polasa, Kalpagam

    2009-01-01

    Objective: To understand food safety knowledge, perceptions, and practices of adolescent girls. Design: Focus group discussions (FGDs) with 32 groups selected using stratified random sampling. Setting: Four South Indian states. Participants: Adolescent girls (10-19 years). Phenomena of Interest: Food safety knowledge, perceptions, and practices.…

  8. Patterns and Impact of Comorbidity and Multimorbidity among Community-Resident American Indian Elders

    ERIC Educational Resources Information Center

    John, Robert; Kerby, Dave S.; Hennessy, Catherine Hagan

    2003-01-01

    Purpose: The purpose of this study is to suggest a new approach to identifying patterns of comorbidity and multimorbidity. Design and Methods: A random sample of 1,039 rural community-resident American Indian elders aged 60 years and older was surveyed. Comorbidity was investigated with four standard approaches, and with cluster analysis. Results:…

  9. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
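
    A minimal one-dimensional sketch of the idea, assuming a truncated-autocovariance interpretation of the tau estimator (the paper's exact kernel form may differ): autocovariances up to lag tau are added to the variance of the mean, which the classical estimator ignores.

        import math
        import random

        def tau_se(x, tau):
            """Standard error of the mean including autocovariances up to lag tau."""
            n = len(x)
            xbar = sum(x) / n
            acc = 0.0
            for i in range(n):
                for j in range(max(0, i - tau), min(n, i + tau + 1)):
                    acc += (x[i] - xbar) * (x[j] - xbar)
            return math.sqrt(acc) / n   # i.e., sqrt(acc / n**2)

        rng = random.Random(0)
        e = [rng.gauss(0, 1) for _ in range(1002)]
        x = [e[i] + e[i + 1] + e[i + 2] for i in range(1000)]  # locally dependent series

        xbar = sum(x) / len(x)
        s = math.sqrt(sum((v - xbar) ** 2 for v in x) / (len(x) - 1))
        classical = s / math.sqrt(len(x))
        print(classical, tau_se(x, tau=2))  # classical underestimates under dependence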

  10. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease.

    PubMed

    Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi

    2011-04-01

    Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild to moderate chronic kidney disease are randomized to paricalcitol or placebo after left ventricular hypertrophy is confirmed on a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24, and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data for estimating sample size, a maximum-information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase sample size depends on the observed treatment effect. A repeated-measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma family of spending functions is employed to control the family-wise Type I error rate, as stopping for success is planned in the interim efficacy analysis. If enrollment is slower than anticipated, the smaller sample size used in the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, either for the nuisance parameter or for the treatment effect, which might in turn affect the interim decision making. Combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring the integrity of the study.
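
    A minimal sketch of information-based sample size re-estimation under stated assumptions: the target effect stays fixed while the nuisance parameter (the outcome standard deviation) is re-estimated from interim data. The numbers are illustrative, not PRIMO's.

        import math
        from statistics import NormalDist

        def reestimated_n(sd_interim, delta, alpha=0.05, power=0.80):
            """Per-arm sample size for a two-sided two-sample comparison,
            recomputed from an interim standard-deviation estimate."""
            z = NormalDist().inv_cdf
            n = 2 * ((z(1 - alpha / 2) + z(power)) * sd_interim / delta) ** 2
            return math.ceil(n)

        # Design assumed SD = 10; interim data suggest SD = 13.
        print(reestimated_n(10, 5), reestimated_n(13, 5))  # 63 -> 107 per arm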

  11. The effect of asthma education program on knowledge of school teachers: a randomized controlled trial.

    PubMed

    Kawafha, Mariam M; Tawalbeh, Loai Issa

    2015-04-01

    The purpose of this study was to examine the effect of an asthma education program on schoolteachers' knowledge. Pre-test-post-test experimental randomized controlled design was used. A multistage-cluster sampling technique was used to randomly select governorate, primary schools, and schoolteachers. Schoolteachers were randomly assigned either to the experimental group (n = 36) and attended three educational sessions or to the control group (n = 38) who did not receive any intervention. Knowledge about asthma was measured using the Asthma General Knowledge Questionnaire for Adults (AGKQA). The results indicated that teachers in the experimental group showed significantly (p < .001) higher knowledge of asthma in the first post-test and the second post-test compared with those in the control group. Implementing asthma education enhanced schoolteachers' knowledge of asthma. The asthma education program should target schoolteachers to improve knowledge about asthma. © The Author(s) 2014.

  12. A critical appraisal of the reporting quality of published randomized controlled trials in the fall injuries.

    PubMed

    Asghari Jafarabadi, Mohammad; Sadeghi-Bazrgani, Homayoun; Dianat, Iman

    2018-06-01

    To evaluate the quality of reporting in published randomized controlled trials (RCTs) in the field of fall injuries, 188 RCTs published between 2001 and 2011 and indexed in the EMBASE and Medline databases were extracted by searching with appropriate keywords and EMTree classification terms. The trustworthiness of the evaluation was ensured through parallel evaluations by two experts in epidemiology and biostatistics. About 40%-75% of papers had problems in reporting the random allocation method, allocation concealment, random allocation implementation, blinding and similarity among groups, intention to treat, and balancing benefits and harms. Moreover, at least 10% of papers reported inappropriately, or did not report, the design, protocol violations, sample size justification, subgroup/adjusted analyses, flow diagram, drop-outs, recruitment time, baseline data, suitable effect size on outcome, ancillary analyses, limitations, and generalizability. Considering the shortcomings found, and given the importance of RCTs for fall injury prevention programmes, their reporting quality should be improved.

  13. Effect of oil gum massage therapy on common pathogenic oral microorganisms - A randomized controlled trial

    PubMed Central

    Singla, Nishu; Acharya, Shashidhar; Martena, Suganthi; Singla, Ritesh

    2014-01-01

    Objectives: (i) To assess the reduction in Streptococcus mutans and Lactobacillus species counts in saliva samples after ten minutes of oil gum massage therapy (massage of gingival tissues) per day for three weeks with sesame oil, olive oil, and coconut oil in three different groups of subjects. (ii) To compare the efficacy of the three different oils with each other and with the “gold standard” chlorhexidine gel. (iii) To assess the reduction in gingival scores and plaque scores of study subjects. Materials and Methods: Study design – single-center, parallel-design, triple-blind randomized clinical study with four treatment groups. Participants: 32 of the 40 study subjects, working as housekeeping personnel at Kasturba Hospital, Manipal, and aged 18-55 years, completed the three-week study period. Interventions: Subjects were randomly assigned to massage their gingiva every day for three weeks with sesame oil, olive oil, or coconut oil (tests), or chlorhexidine gel (control). Oral health status and paraffin-stimulated saliva samples were obtained at baseline and after three weeks of oil gum massage therapy. Outcome measures: Microbial culture, plaque index, and gingival index. Statistical analysis: Paired t test and Kruskal-Wallis test. Results: There was a significant reduction in mean Streptococcus mutans count, Lactobacillus count, plaque scores, and gingival scores in all four groups after the study. However, no significant difference was found in the percentage reduction of these variables between the four groups. Conclusion: These oils can be used as valuable preventive agents for maintaining and improving oral health in low socioeconomic status populations. However, it is recommended that further research be conducted in other populations with a larger sample and a longer follow-up period. PMID:25210256

  14. Rationale, Design, Samples, and Baseline Sun Protection in a Randomized Trial on a Skin Cancer Prevention Intervention in Resort Environments

    PubMed Central

    Buller, David B.; Andersen, Peter A.; Walkosz, Barbara J.; Scott, Michael D.; Beck, Larry; Cutter, Gary R.

    2016-01-01

    Introduction Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. This trial evaluated an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Materials and Methods Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012–14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Results Initially, 41 hotels/resorts (11%) participated, but 4 dropped out before posttest. The hotels/resorts were diverse (employees = 30 to 900; latitude = 24°78′ N to 50°52′ N; elevation = 2 ft. to 9,726 ft. above sea level) and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4,347 vacationers were observed and 3,531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35–60 years old, highly educated (college education = 68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. Conclusions The trial faced challenges recruiting resorts, but results show that the large, multi-state sample of vacationers was at high risk for solar UV exposure. PMID:26593781

  15. Being “SMART” about Adolescent Conduct Problems Prevention: Executing a SMART Pilot Study in a Juvenile Diversion Agency

    PubMed Central

    August, Gerald J.; Piehler, Timothy F.; Bloomquist, Michael L.

    2014-01-01

    OBJECTIVE The development of adaptive treatment strategies (ATS) represents the next step in innovating conduct problems prevention programs within a juvenile diversion context. Towards this goal, we present the theoretical rationale, associated methods, and anticipated challenges for a feasibility pilot study in preparation for implementing a full-scale SMART (i.e., sequential, multiple assignment, randomized trial) for conduct problems prevention. The role of a SMART design in constructing ATS is presented. METHOD The SMART feasibility pilot study includes a sample of 100 youth (13–17 years of age) identified by law enforcement as early stage offenders and referred for pre-court juvenile diversion programming. Prior data on the sample population detail a high level of ethnic diversity and approximately equal representation of both genders. Within the SMART, youth and their families are first randomly assigned to one of two different brief-type evidence-based prevention programs, featuring parent-focused behavioral management or youth-focused strengths-building components. Youth who do not respond sufficiently to brief first-stage programming will be randomly assigned a second time to either extended parent-focused or youth-focused second-stage programming. Measures of proximal intervention response and potential candidate tailoring variables for developing ATS within this sample are detailed. RESULTS Results of the described pilot study will include information regarding the feasibility and acceptability of the SMART design. This information will be used to refine a subsequent full-scale SMART. CONCLUSIONS The use of a SMART to develop ATS for prevention will increase the efficiency and effectiveness of prevention programming for youth with developing conduct problems. PMID:25256135

  16. Randomized comparison of 3 different-sized biopsy forceps for quality of sampling in Barrett’s esophagus

    PubMed Central

    Gonzalez, Susana; Yu, Woojin M.; Smith, Michael S.; Slack, Kristen N.; Rotterdam, Heidrun; Abrams, Julian A.; Lightdale, Charles J.

    2011-01-01

    Background Several types of forceps are available for use in sampling Barrett’s esophagus (BE). Few data exist with regard to biopsy quality for histologic assessment. Objective To evaluate sampling quality of 3 different forceps in patients with BE. Design Single-center, randomized clinical trial. Patients Consecutive patients with BE undergoing upper endoscopy. Interventions Patients randomized to have biopsy specimens taken with 1 of 3 types of forceps: standard, large capacity, or jumbo. Main Outcome Measurements Specimen adequacy was defined a priori as a well-oriented biopsy sample 2 mm or greater in diameter and with at least muscularis mucosa present. Results A total of 65 patients were enrolled and analyzed (standard forceps, n = 21; large-capacity forceps, n = 21; jumbo forceps, n = 23). Compared with jumbo forceps, a significantly higher proportion of biopsy samples with large-capacity forceps were adequate (37.8% vs 25.2%, P = .002). Of the standard forceps biopsy samples, 31.9% were adequate, which was not significantly different from specimens taken with large-capacity (P = .20) or jumbo (P = .09) forceps. Biopsy specimens taken with jumbo forceps had the largest diameter (median, 3.0 mm vs 2.5 mm [standard] vs 2.8 mm [large capacity]; P = .0001). However, jumbo forceps had the lowest proportion of specimens that were well oriented (overall P = .001). Limitations Heterogeneous patient population precluded dysplasia detection analyses. Conclusions Our results challenge the requirement of jumbo forceps and therapeutic endoscopes to properly perform the Seattle protocol. We found that standard and large-capacity forceps used with standard upper endoscopes produced biopsy samples at least as adequate as those obtained with jumbo forceps and therapeutic endoscopes in patients with BE. PMID:21034895

  17. Methodological Issues in Trials of Complementary and Alternative Medicine Interventions

    PubMed Central

    Sikorskii, Alla; Wyatt, Gwen; Victorson, David; Faulkner, Gwen; Rahbar, Mohammad Hossein

    2010-01-01

    Background Complementary and alternative medicine (CAM) use is widespread among cancer patients. Information on safety and efficacy of CAM therapies is needed for both patients and health care providers. Well-designed randomized clinical trials (RCTs) of CAM therapy interventions can inform both clinical research and practice. Objectives To review important issues that affect the design of RCTs for CAM interventions. Methods Using the methods component of the Consolidated Standards for Reporting Trials (CONSORT) as a guiding framework, and a National Cancer Institute-funded reflexology study as an exemplar, methodological issues related to participants, intervention, objectives, outcomes, sample size, randomization, blinding, and statistical methods were reviewed. Discussion Trials of CAM interventions designed and implemented according to appropriate methodological standards will facilitate the needed scientific rigor in CAM research. Interventions in CAM can be tested using proposed methodology, and the results of testing will inform nursing practice in providing safe and effective supportive care and improving the well-being of patients. PMID:19918155

  18. SAMPLING OSCILLOSCOPE

    DOEpatents

    Sugarman, R.M.

    1960-08-30

    An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sampled waveform to pass its peak amplitude before the circuit selects it for display.

  19. Preference option randomized design (PORD) for comparative effectiveness research: Statistical power for testing comparative effect, preference effect, selection effect, intent-to-treat effect, and overall effect.

    PubMed

    Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith

    2017-01-01

    Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it forces participants to comply with a randomly assigned intervention regardless of their preference. The randomized clinical trial may therefore impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants to express a preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to have no alternate preference. In sum, the preference option randomized design aims to maximize retention, to minimize the possibility of forced assignment for any participant, and to maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) and that collectively guide decision making between interventions. Statistical power functions for testing all of these effects are derived, and simulations verify the validity of the power functions under normal and binomial distributions.
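
    As a generic stand-in for one of these power functions (the comparative effect under a normal outcome; PORD's exact forms are derived in the paper), a two-sided two-sample z-test power sketch:

        from statistics import NormalDist

        def two_sample_power(n_per_arm, delta, sd, alpha=0.05):
            """Power of a two-sided two-sample z-test with known SD."""
            nd = NormalDist()
            ncp = delta / (sd * (2 / n_per_arm) ** 0.5)  # noncentrality
            z_a = nd.inv_cdf(1 - alpha / 2)
            return nd.cdf(ncp - z_a) + nd.cdf(-ncp - z_a)

        print(round(two_sample_power(64, 0.5, 1.0), 3))  # about 0.80 at d = 0.5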

  20. Iodine and mental development of children 5 years old and under: a systematic review and meta-analysis.

    PubMed

    Bougma, Karim; Aboud, Frances E; Harding, Kimberly B; Marquis, Grace S

    2013-04-22

    Several reviews and meta-analyses have examined the effects of iodine on mental development. None focused on young children, so they were incomplete in summarizing the effects on this important age group. The current systematic review therefore examined the relationship between iodine and the mental development of children 5 years old and under. A systematic review of articles using Medline (1980-November 2011) was carried out. We organized studies according to four designs: (1) randomized controlled trial with iodine supplementation of mothers; (2) non-randomized trial with iodine supplementation of mothers and/or infants; (3) prospective cohort study stratified by pregnant women's iodine status; (4) prospective cohort study stratified by newborn iodine status. Average effect sizes for these four designs were 0.68 (2 RCT studies), 0.46 (8 non-RCT studies), 0.52 (9 cohort studies stratified by mothers' iodine status), and 0.54 (4 cohort studies stratified by infants' iodine status). This translates into 6.9 to 10.2 IQ points lower in iodine-deficient children compared with iodine-replete children. Thus, regardless of study design, iodine deficiency had a substantial impact on mental development. Methodological concerns included weak study designs, the omission of important confounders, small sample sizes, the lack of cluster analyses, and the lack of separate analyses of verbal and non-verbal subtests. Quantifying more precisely the contribution of iodine deficiency to delayed mental development in young children requires more well-designed randomized controlled trials, including ones on the role of iodized salt.
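
    The IQ-point range quoted above follows from scaling the standardized effect sizes by the conventional IQ standard deviation of 15 (an assumption of this note, consistent with the reported numbers):

        $$0.46 \times 15 \approx 6.9, \qquad 0.68 \times 15 \approx 10.2$$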
