Prevalence of Mixed-Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Collins, Kathleen M. T.
2006-01-01
The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for the sample design decisions such studies require. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects of noise level, number of noise events, time of day of the events, ambient noise levels, and other factors on annoyance. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
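The closing idea of this abstract, optimum allocation across the stages of a multi-stage design under a simple cost function, has a classical closed form. Below is a minimal Python sketch of that textbook result rather than the report's own procedure; the variance components, unit costs, and budget are hypothetical inputs.

```python
import math

def two_stage_allocation(s2_between, s2_within, c_area, c_resident, budget):
    """Optimum residents-per-area n and number of areas m for a two-stage
    sample under the linear cost function C = c_area*m + c_resident*m*n.
    Classical result: n* = sqrt((c_area/c_resident) * (s2_within/s2_between))."""
    n_opt = math.sqrt((c_area / c_resident) * (s2_within / s2_between))
    m_opt = budget / (c_area + c_resident * n_opt)
    var_mean = s2_between / m_opt + s2_within / (m_opt * n_opt)
    return m_opt, n_opt, var_mean

# e.g. area-level variance 4, resident-level variance 25,
# costs 200 per area and 10 per interview, budget 20000
print(two_stage_allocation(4.0, 25.0, 200.0, 10.0, 20000.0))
```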
Adaptive sampling in research on risk-related behaviors.
Thompson, Steven K; Collins, Linda M
2002-11-01
This article introduces adaptive sampling designs to substance use researchers. Adaptive sampling is particularly useful when the population of interest is rare, unevenly distributed, hidden, or hard to reach. Examples of such populations are injection drug users, individuals at high risk for HIV/AIDS, and young adolescents who are nicotine dependent. In conventional sampling, the sampling design is based entirely on a priori information, and is fixed before the study begins. By contrast, in adaptive sampling, the sampling design adapts based on observations made during the survey; for example, drug users may be asked to refer other drug users to the researcher. In the present article several adaptive sampling designs are discussed. Link-tracing designs such as snowball sampling, random walk methods, and network sampling are described, along with adaptive allocation and adaptive cluster sampling. It is stressed that special estimation procedures taking the sampling design into account are needed when adaptive sampling has been used. These procedures yield estimates that are considerably better than conventional estimates. For rare and clustered populations adaptive designs can give substantial gains in efficiency over conventional designs, and for hidden populations link-tracing and other adaptive procedures may provide the only practical way to obtain a sample large enough for the study objectives.
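As a concrete illustration of the link-tracing idea described above, here is a small Python sketch of a snowball sample over a referral network. It is a toy version for intuition only; the design-corrected estimation step that the article stresses is deliberately omitted, and the network structure is a hypothetical input.

```python
import random

def snowball_sample(neighbors, seeds, waves, refs_per_person=2, rng=random):
    """Link-tracing (snowball) sample: start from seed respondents and
    follow up to `refs_per_person` referral links for `waves` rounds.
    `neighbors` maps each person to the people they can refer."""
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            referrals = [p for p in neighbors.get(person, []) if p not in sampled]
            for p in rng.sample(referrals, min(refs_per_person, len(referrals))):
                sampled.add(p)
                next_frontier.append(p)
        frontier = next_frontier
    return sampled

# tiny hypothetical referral network
net = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [], 6: []}
print(sorted(snowball_sample(net, seeds=[1], waves=2)))
```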
Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo
2016-11-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to implement the ODS design optimally in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study.
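The weighting idea can be approximated in standard software by fitting a Cox model with inverse-probability-of-selection weights. A hedged sketch using the lifelines library follows; the file name and column names (time, event, exposure, p_select) are hypothetical, and this stand-in does not reproduce the authors' exact weighted partial likelihood estimating equation or its efficiency results.

```python
import pandas as pd
from lifelines import CoxPHFitter

# df: one row per sampled subject with columns time, event, exposure,
# and p_select (the known selection probability under the ODS scheme);
# all column names are illustrative.
df = pd.read_csv("ods_sample.csv")
df["ipw"] = 1.0 / df["p_select"]  # inverse-probability weight

cph = CoxPHFitter()
# robust=True requests a sandwich variance, appropriate for weighted fits
cph.fit(df[["time", "event", "exposure", "ipw"]],
        duration_col="time", event_col="event",
        weights_col="ipw", robust=True)
cph.print_summary()
```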
Lu, Tsui-Shan; Longnecker, Matthew P.; Zhou, Haibo
2016-01-01
An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for a binary response, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been done for the univariate response case, statistical inference and design for ODS with multivariate outcomes remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (Multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the Multivariate-ODS design is semiparametric: all the underlying distributions of the covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the Multivariate-ODS or the estimator from a simple random sample of the same size. The Multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of PCB exposure with hearing loss in children born to participants in the Collaborative Perinatal Study.
Statistical inference for the additive hazards model under outcome-dependent sampling.
Yu, Jichang; Liu, Yanyan; Sandler, Dale P; Zhou, Haibo
2015-09-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and that the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to examine the association between radon exposure and cancer risk.
Statistical inference for the additive hazards model under outcome-dependent sampling
Yu, Jichang; Liu, Yanyan; Sandler, Dale P.; Zhou, Haibo
2015-01-01
Cost-effective study designs and proper inference procedures for data from such designs are always of particular interest to study investigators. In this article, we propose a biased sampling scheme, an outcome-dependent sampling (ODS) design, for survival data with right censoring under the additive hazards model. We develop a weighted pseudo-score estimator for the regression parameters under the proposed design and derive the asymptotic properties of the proposed estimator. We also provide some suggestions for using the proposed method by evaluating its relative efficiency against the simple random sampling design, and derive the optimal allocation of the subsamples for the proposed design. Simulation studies show that the proposed ODS design is more powerful than other existing designs and that the proposed estimator is more efficient than other estimators. We apply our method to a cancer study conducted at NIEHS, the Cancer Incidence and Mortality of Uranium Miners Study, to examine the association between radon exposure and cancer risk.
NASA Astrophysics Data System (ADS)
Ander, Louise; Lark, Murray; Smedley, Pauline; Watts, Michael; Hamilton, Elliott; Fletcher, Tony; Crabbe, Helen; Close, Rebecca; Studden, Mike; Leonardi, Giovanni
2015-04-01
A random sampling design is optimal for assessing outcomes such as the mean of a given variable across an area. However, this optimal design may be compromised to an unknown extent by unavoidable real-world factors; this work examines the extent to which the study design can still be considered random, and the influence this may have on the choice of appropriate statistical data analysis. We take as our example a study which relied on voluntary participation for the sampling of private water tap chemical composition in England, UK. This study was designed and implemented as a categorical, randomised study. The local geological classes were grouped into 10 types considered most important in their likely effects on groundwater chemistry (the source of all the tap waters sampled). Locations of the users of private water supplies were made available to the study group by the Local Authority in the area. These were then assigned, based on location, to geological groups 1 to 10 and randomised within each group. However, permission to collect samples then required active, voluntary participation by householders and thus, unlike many environmental studies, sampling could not always follow the initial design. Impediments to participation ranged from 'willing but not available' during the designated sampling period, to a lack of response to requests to sample (assumed to indicate being wholly unwilling or unable to participate). Additionally, a small number of unplanned samples were collected from new participants who made themselves known to the sampling teams during the sampling period. Here we examine the impact this has on the 'random' nature of the resulting data distribution, by comparison with the non-participating known supplies. We consider the implications for the choice of statistical methods used to predict values and uncertainty at unsampled locations.
Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model
Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo
2016-01-01
We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for the regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to implement the ODS design optimally in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study.
Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod
Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.
2008-01-01
To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.
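A stripped-down version of this kind of design comparison can be run in a few lines: simulate an aggregated population, then compare the sampling distribution of the density estimate under simple random and systematic quadrat sampling. This is a toy stand-in for the SAMPLE program, with an entirely hypothetical population; it omits the adaptive and GIS-based designs and the cost accounting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical aggregated population: plant counts in a 20x20 grid of
# quadrats, clustered via a few high-density patches.
grid = rng.poisson(0.2, size=(20, 20)).astype(float)
for r, c in rng.integers(0, 20, size=(8, 2)):
    grid[max(0, r - 1):r + 2, max(0, c - 1):c + 2] += rng.poisson(15)

cells = grid.ravel()

def srs_estimate(n):
    """Mean density from a simple random sample of n quadrats."""
    return rng.choice(cells, size=n, replace=False).mean()

def systematic_estimate(step):
    """Mean density from a systematic sample with a random start."""
    start = rng.integers(0, step)
    return cells[start::step].mean()

n = 40
srs = [srs_estimate(n) for _ in range(2000)]
sys_ = [systematic_estimate(len(cells) // n) for _ in range(2000)]
print(f"true density {cells.mean():.2f}")
print(f"SRS        mean {np.mean(srs):.2f}  sd {np.std(srs):.2f}")
print(f"systematic mean {np.mean(sys_):.2f}  sd {np.std(sys_):.2f}")
```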
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
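A bare-bones version of simulation-based power for a covariate effect, using a likelihood ratio test, looks like the sketch below. It stands in for, and greatly simplifies, the Monte Carlo Mapped Power workflow, which operates on full NONMEM population PK models; here the "model" is just a log-clearance regression with a binary covariate, and all parameter values are illustrative.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(7)

def power(n, effect=0.25, bsv=0.3, nsim=500, alpha=0.05):
    """Crude simulation-based power for detecting a binary covariate
    effect on log-clearance via a likelihood ratio test."""
    hits = 0
    for _ in range(nsim):
        cov = rng.integers(0, 2, n)                       # binary covariate
        log_cl = np.log(5.0) + effect * cov + rng.normal(0, bsv, n)
        full = sm.OLS(log_cl, sm.add_constant(cov)).fit()
        reduced = sm.OLS(log_cl, np.ones((n, 1))).fit()   # no covariate
        lrt = 2 * (full.llf - reduced.llf)
        hits += lrt > chi2.ppf(1 - alpha, df=1)
    return hits / nsim

for n in (20, 40, 80):
    print(n, power(n))
```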
Sampling Designs in Qualitative Research: Making the Sampling Process More Public
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Leech, Nancy L.
2007-01-01
The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study;…
Designing occupancy studies: general advice and allocating survey effort
MacKenzie, D.I.; Royle, J. Andrew
2005-01-01
1. The fraction of sampling units in a landscape where a target species is present (occupancy) is an extensively used concept in ecology. Yet in many applications the species will not always be detected in a sampling unit even when present, resulting in biased estimates of occupancy. Given that sampling units are surveyed repeatedly within a relatively short timeframe, a number of similar methods have now been developed to provide unbiased occupancy estimates. However, practical guidance on the efficient design of occupancy studies has been lacking. 2. In this paper we comment on a number of general issues related to designing occupancy studies, including the need for clear objectives that are explicitly linked to science or management, selection of sampling units, timing of repeat surveys and allocation of survey effort. Advice on the number of repeat surveys per sampling unit is considered in terms of the variance of the occupancy estimator, for three possible study designs. 3. We recommend that sampling units should be surveyed a minimum of three times when detection probability is high (>0.5 per survey), unless a removal design is used. 4. We found that an optimal removal design will generally be the most efficient, but we suggest it may be less robust to assumption violations than a standard design. 5. Our results suggest that for a rare species it is more efficient to survey more sampling units less intensively, while for a common species fewer sampling units should be surveyed more intensively. 6. Synthesis and applications. Reliable inferences can only result from quality data. To make the best use of logistical resources, study objectives must be clearly defined; sampling units must be selected, and repeated surveys timed appropriately; and a sufficient number of repeated surveys must be conducted. Failure to do so may compromise the integrity of the study. The guidance given here on study design issues is particularly applicable to studies of species occurrence and distribution, habitat selection and modelling, metapopulation studies and monitoring programmes.
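The advice on repeat surveys in point 3 rests on a simple identity: with independent surveys and per-survey detection probability p, the chance of at least one detection in K surveys is 1 - (1 - p)^K. A minimal sketch follows; the 0.95 target is an assumption for illustration, not a value from the paper.

```python
def surveys_needed(p_detect, target=0.95):
    """Smallest number of repeat surveys K such that a species present at
    a unit is detected at least once with probability >= target, assuming
    independent surveys with per-survey detection probability p_detect."""
    k, p_miss = 0, 1.0
    while 1.0 - p_miss < target:
        k += 1
        p_miss *= (1.0 - p_detect)
    return k

for p in (0.3, 0.5, 0.7):
    print(p, surveys_needed(p))
```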
Lu, Tsui-Shan; Longnecker, Matthew P; Zhou, Haibo
2017-03-15
An outcome-dependent sampling (ODS) scheme is a cost-effective sampling scheme in which one observes the exposure with a probability that depends on the outcome. Well-known examples are the case-control design for a binary response, the case-cohort design for failure time data, and the general ODS design for a continuous response. While substantial work has been carried out for the univariate response case, statistical inference and design for ODS with multivariate outcomes remain under-developed. Motivated by the need in biological studies to take advantage of the available responses for subjects in a cluster, we propose a multivariate outcome-dependent sampling (multivariate-ODS) design that is based on a general selection of the continuous responses within a cluster. The proposed inference procedure for the multivariate-ODS design is semiparametric: all the underlying distributions of the covariates are modeled nonparametrically using empirical likelihood methods. We show that the proposed estimator is consistent and establish its asymptotic normality. Simulation studies show that the proposed estimator is more efficient than the estimator obtained using only the simple-random-sample portion of the multivariate-ODS or the estimator from a simple random sample of the same size. The multivariate-ODS design, together with the proposed estimator, provides an approach to further improve study efficiency for a given fixed study budget. We illustrate the proposed design and estimator with an analysis of the association of polychlorinated biphenyl exposure with hearing loss in children born to participants in the Collaborative Perinatal Study. Copyright © 2016 John Wiley & Sons, Ltd.
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
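A minimal sketch of the internal pilot logic, re-computing sample size from interim estimates of prevalence and variance, is given below. The formula is a generic two-sided normal-approximation calculation used for illustration only, not the authors' exact procedure, and it says nothing about the small-sample critical-value adjustment they introduce; all inputs are hypothetical.

```python
import math
from scipy.stats import norm

def updated_n(pilot_prev, pilot_var, delta, alpha=0.05, power=0.80):
    """Re-estimated total sample size after the internal pilot, using
    interim estimates of disease prevalence and outcome variance.
    `delta` is the difference in diagnostic accuracy to detect."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_diseased = (z / delta) ** 2 * 2 * pilot_var   # cases needed
    return math.ceil(n_diseased / pilot_prev)       # total screened

# e.g. interim prevalence 8%, interim variance 0.04, detect a 0.10 difference
print(updated_n(pilot_prev=0.08, pilot_var=0.04, delta=0.10))
```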
Adaptive sampling in behavioral surveys.
Thompson, S K
1997-01-01
Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
Sample size for post-marketing safety studies based on historical controls.
Wu, Yu-te; Makuch, Robert W
2010-08-01
As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to its approximate large-sample-theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with generally smaller required sample sizes. Copyright © 2010 John Wiley & Sons, Ltd.
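The exact Poisson calculation can be sketched in a single-group simplification: find the smallest cohort size for which an exact test of the background adverse-event rate achieves the target power. The paper's hybrid two-group design with historical controls is more involved; the rates, alpha, and power below are placeholders.

```python
from scipy.stats import poisson

def exact_poisson_n(rate0, rate1, alpha=0.05, power=0.80, n_max=200000):
    """Smallest cohort size n such that an exact one-sided Poisson test of
    H0: rate = rate0 rejects with probability >= power when the true
    adverse-event rate is rate1 (rate1 > rate0)."""
    for n in range(100, n_max, 100):
        # smallest count whose null tail probability is <= alpha
        crit = poisson.ppf(1 - alpha, n * rate0) + 1
        if poisson.sf(crit - 1, n * rate1) >= power:  # P(X >= crit | H1)
            return n, int(crit)
    return None

# e.g. background rate 1 per 1000 person-years vs threefold increase
print(exact_poisson_n(rate0=0.001, rate1=0.003))
```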
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...
Santamaría, Eva; Estévez, Javier Alejandro; Riba, Jordi; Izquierdo, Iñaki; Valle, Marta
2017-01-01
The aim was to optimise the design of a pharmacokinetic (PK) study of rupatadine in 2-5 year olds by using a population PK model developed with data from a study in 6-11 year olds. The design optimisation was driven by the need to avoid children's discomfort in the study. PK data from 6-11 year olds with allergic rhinitis, available from a previous study, were used to construct a population PK model, which we used in simulations to assess the dose to administer in a study in 2-5 year olds. In addition, an optimal design approach was used to determine the most appropriate number of sampling groups, sampling days, total samples and sampling times. A two-compartment model with first-order absorption and elimination, and with clearance dependent on weight, adequately described the PK of rupatadine in 6-11 year olds. The dose selected for a trial in 2-5 year olds was 2.5 mg, as it provided a Cmax below the 3 ng/ml threshold. The optimal study design consisted of four groups of children (10 children each) and a maximum sampling window of 2 hours across two clinic visits, with three samples drawn on day 14 and one on day 28, coinciding with the final examination of the study. The PK study design was thus optimised to prioritise avoiding discomfort for the enrolled 2-5 year olds by taking only four blood samples from each child and minimising the length of hospital stays.
Recent progresses in outcome-dependent sampling with failure time data.
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2017-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
Recent progresses in outcome-dependent sampling with failure time data
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2016-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest has occurred, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of the study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on ODS-related designs such as the case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
Sampling in ecology and evolution - bridging the gap between theory and practice
Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.
2010-01-01
Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature and has provided useful tools and recommendations to sample and analyse ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles which are based on estimation of means and variances are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand ecological patterns and processes generating these patterns. Using a semi-virtual simulation study as an illustration we reveal how the selection of the space (e.g. geographic, climatic), in which the sampling is designed, influences the patterns that can be ultimately detected. We also demonstrate the inefficiency of common sampling designs to reveal response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs. We conclude with a call for the development of methods to unbiasedly estimate nonlinear, ecologically relevant parameters, in order to make inferences while fulfilling requirements of both sampling theory and field work logistics. © 2010 The Authors.
Jamsen, Kris M; Duffull, Stephen B; Tarning, Joel; Lindegardh, Niklas; White, Nicholas J; Simpson, Julie A
2012-07-11
Artemisinin-based combination therapy (ACT) is currently recommended as first-line treatment for uncomplicated malaria, but of concern, it has been observed that the effectiveness of the main artemisinin derivative, artesunate, has been diminished due to parasite resistance. This reduction in effect highlights the importance of the partner drugs in ACT and provides motivation to gain more knowledge of their pharmacokinetic (PK) properties via population PK studies. Optimal design methodology has been developed for population PK studies, which analytically determines a sampling schedule that is clinically feasible and yields precise estimation of model parameters. In this work, optimal design methodology was used to determine sampling designs for typical future population PK studies of the partner drugs (mefloquine, lumefantrine, piperaquine and amodiaquine) co-administered with artemisinin derivatives. The optimal designs were determined using freely available software and were based on structural PK models from the literature and the key specifications of 100 patients with five samples per patient, with one sample taken on the seventh day of treatment. The derived optimal designs were then evaluated via a simulation-estimation procedure. For all partner drugs, designs consisting of two sampling schedules (50 patients per schedule) with five samples per patient resulted in acceptable precision of the model parameter estimates. The sampling schedules proposed in this paper should be considered in future population pharmacokinetic studies where intensive sampling over many days or weeks of follow-up is not possible due to either ethical, logistic or economical reasons.
THE NORTH CAROLINA HERALD PILOT STUDY
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach. The full sample is designed to be representative of both urban and rural births in the United States, 2007-2011. While other sur...
Ji, Yuan; Wang, Sue-Jane
2013-01-01
The 3 + 3 design is the most common choice among clinicians for phase I dose-escalation oncology trials. In recent reviews, more than 95% of phase I trials have been based on the 3 + 3 design. Given that it is intuitive and its implementation does not require a computer program, clinicians can conduct 3 + 3 dose escalations in practice with virtually no logistic cost, and trial protocols based on the 3 + 3 design pass institutional review board and biostatistics reviews quickly. However, the performance of the 3 + 3 design has rarely been compared with model-based designs in simulation studies with matched sample sizes. In the vast majority of statistical literature, the 3 + 3 design has been shown to be inferior in identifying true maximum-tolerated doses (MTDs), although the sample size required by the 3 + 3 design is often orders-of-magnitude smaller than model-based designs. In this article, through comparative simulation studies with matched sample sizes, we demonstrate that the 3 + 3 design has higher risks of exposing patients to toxic doses above the MTD than the modified toxicity probability interval (mTPI) design, a newly developed adaptive method. In addition, compared with the mTPI design, the 3 + 3 design does not yield higher probabilities in identifying the correct MTD, even when the sample size is matched. Given that the mTPI design is equally transparent, costless to implement with free software, and more flexible in practical situations, we highly encourage its adoption in early dose-escalation studies whenever the 3 + 3 design is also considered. We provide free software to allow direct comparisons of the 3 + 3 design with other model-based designs in simulation studies with matched sample sizes.
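Because the 3+3 rules are purely algorithmic, the kind of matched-sample-size comparison described here starts from a simulator like the one below. This implements one common variant of the 3+3 rules (conventions differ at the margins) with hypothetical true toxicity probabilities; the mTPI comparator is not shown.

```python
import random

def three_plus_three(tox_probs, rng=random):
    """Simulate one 3+3 trial; tox_probs[i] is the true dose-limiting-
    toxicity (DLT) probability at dose level i. Returns the declared MTD
    index, or -1 if even the lowest dose is too toxic."""
    level = 0
    while True:
        dlt = sum(rng.random() < tox_probs[level] for _ in range(3))
        if dlt >= 2:                       # too toxic: MTD is one level down
            return level - 1
        if dlt == 1:                       # expand the cohort at this level
            dlt += sum(rng.random() < tox_probs[level] for _ in range(3))
            if dlt >= 2:
                return level - 1
        if level == len(tox_probs) - 1:    # top level reached without excess DLTs
            return level
        level += 1

true_tox = [0.05, 0.12, 0.25, 0.45]        # hypothetical; target is level 2
picks = [three_plus_three(true_tox) for _ in range(10000)]
print([picks.count(k) / len(picks) for k in range(-1, 4)])
```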
Youth Attitude Tracking Study II Wave 17 -- Fall 1986.
1987-06-01
[Report front matter: table of contents listing a preface, segmentation analyses, the methodology of YATS II (including a sampling design overview), Appendix A (sampling design, estimation procedures and estimated sampling errors) and Appendix B (data collection procedures).]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
Hatzell, H.H.; Oaksford, E.T.; Asbury, C.E.
1995-01-01
The implementation of design guidelines for the National Water-Quality Assessment (NAWQA) Program has resulted in the development of new sampling procedures and the modification of existing procedures commonly used in the Water Resources Division of the U.S. Geological Survey. The Georgia-Florida Coastal Plain (GAFL) study unit began the intensive data collection phase of the program in October 1992. This report documents the implementation of the NAWQA guidelines by describing the sampling design and procedures for collecting surface-water samples in the GAFL study unit in 1993. This documentation is provided for agencies that use water-quality data and for future study units that will be entering the intensive phase of data collection. The sampling design is intended to account for large- and small-scale spatial variations, and temporal variations in water quality for the study area. Nine fixed sites were selected in drainage basins of different sizes and different land-use characteristics located in different land-resource provinces. Each of the nine fixed sites was sampled regularly for a combination of six constituent groups composed of physical and chemical constituents: field measurements, major ions and metals, nutrients, organic carbon, pesticides, and suspended sediments. Some sites were also sampled during high-flow conditions and storm events. Discussion of the sampling procedure is divided into three phases: sample collection, sample splitting, and sample processing. A cone splitter was used to split water samples for the analysis of the sampling constituent groups except organic carbon from approximately nine liters of stream water collected at four fixed sites that were sampled intensively. An example of the sample splitting schemes designed to provide the sample volumes required for each sample constituent group is described in detail. Information about onsite sample processing has been organized into a flowchart that describes a pathway for each of the constituent groups.
Sampling designs for HIV molecular epidemiology with application to Honduras.
Shepherd, Bryan E; Rossini, Anthony J; Soto, Ramon Jeremias; De Rivera, Ivette Lorenzana; Mullins, James I
2005-11-01
Proper sampling is essential to characterize the molecular epidemiology of human immunodeficiency virus (HIV). HIV sampling frames are difficult to identify, so most studies use convenience samples. We discuss statistically valid and feasible sampling techniques that overcome some of the potential for bias due to convenience sampling and ensure better representation of the study population. We employ a sampling design called stratified cluster sampling. This first divides the population into geographical and/or social strata. Within each stratum, a population of clusters is chosen from groups, locations, or facilities where HIV-positive individuals might be found. Some clusters are randomly selected within strata and individuals are randomly selected within clusters. Variation and cost help determine the number of clusters and the number of individuals within clusters that are to be sampled. We illustrate the approach through a study designed to survey the heterogeneity of subtype B strains in Honduras.
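The stratified cluster sampling scheme described here is straightforward to express in code. Below is a minimal sketch with a hypothetical strata/cluster structure standing in for the geographic and social strata of the Honduras study; it omits the variance- and cost-based choice of the numbers of clusters and individuals.

```python
import random

def stratified_cluster_sample(strata, n_clusters, n_per_cluster, rng=random):
    """Two-stage stratified cluster sample: within each stratum, randomly
    select clusters, then randomly select individuals within each chosen
    cluster. `strata` maps stratum -> {cluster_name: [individuals]}."""
    sample = []
    for stratum, clusters in strata.items():
        chosen = rng.sample(list(clusters), min(n_clusters, len(clusters)))
        for c in chosen:
            people = clusters[c]
            sample += rng.sample(people, min(n_per_cluster, len(people)))
    return sample

# hypothetical sampling frame
strata = {"urban": {"clinic_A": list(range(40)), "clinic_B": list(range(25))},
          "rural": {"clinic_C": list(range(15)), "clinic_D": list(range(30))}}
print(len(stratified_cluster_sample(strata, n_clusters=1, n_per_cluster=10)))
```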
Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai
2017-01-01
Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction.
Silber, Hanna E; Nyberg, Joakim; Hooker, Andrew C; Karlsson, Mats O
2009-06-01
Intravenous glucose tolerance test (IVGTT) provocations are informative, but complex and laborious, for studying the glucose-insulin system. The objective of this study was to evaluate, through optimal design methodology, the possibilities of more informative and/or less laborious study design of the insulin modified IVGTT in type 2 diabetic patients. A previously developed model for glucose and insulin regulation was implemented in the optimal design software PopED 2.0. The following aspects of the study design of the insulin modified IVGTT were evaluated; (1) glucose dose, (2) insulin infusion, (3) combination of (1) and (2), (4) sampling times, (5) exclusion of labeled glucose. Constraints were incorporated to avoid prolonged hyper- and/or hypoglycemia and a reduced design was used to decrease run times. Design efficiency was calculated as a measure of the improvement with an optimal design compared to the basic design. The results showed that the design of the insulin modified IVGTT could be substantially improved by the use of an optimized design compared to the standard design and that it was possible to use a reduced number of samples. Optimization of sample times gave the largest improvement followed by insulin dose. The results further showed that it was possible to reduce the total sample time with only a minor loss in efficiency. Simulations confirmed the predictions from PopED. The predicted uncertainty of parameter estimates (CV) was low in all tested cases, despite the reduction in the number of samples/subject. The best design had a predicted average CV of parameter estimates of 19.5%. We conclude that improvement can be made to the design of the insulin modified IVGTT and that the most important design factor was the placement of sample times followed by the use of an optimal insulin dose. This paper illustrates how complex provocation experiments can be improved by sequential modeling and optimal design.
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
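The paper's central point, that power does not approach 1 as the participant sample grows when stimuli are a random factor, can be seen in a back-of-the-envelope calculation. The sketch below uses a simplified variance decomposition of my own for a two-condition design with stimuli nested in condition and crossed with participants; the variance components and effect size are illustrative, and the formulas in the article itself are more general.

```python
import numpy as np
from scipy.stats import norm

def approx_power(n_subj, n_stim, d=0.5, var_stim=0.3, var_subj_cond=0.1,
                 var_err=0.8, alpha=0.05):
    """Normal-approximation power for the condition contrast. The stimulus
    variance term (4*var_stim/n_stim) does not shrink as participants are
    added, which caps attainable power below 1."""
    var_diff = (4 * var_stim / n_stim          # stimulus sampling noise
                + 4 * var_subj_cond / n_subj   # participant-by-condition noise
                + 4 * var_err / (n_subj * n_stim))
    z = abs(d) / np.sqrt(var_diff)
    # one-tail approximation to two-sided power
    return norm.sf(norm.ppf(1 - alpha / 2) - z)

for n in (20, 100, 1000, 10**6):
    print(n, round(approx_power(n_subj=n, n_stim=20), 3))  # plateaus near 0.53
```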
Lee, Hye-Seung; Burkhardt, Brant R; McLeod, Wendy; Smith, Susan; Eberhard, Chris; Lynch, Kristian; Hadley, David; Rewers, Marian; Simell, Olli; She, Jin-Xiong; Hagopian, Bill; Lernmark, Ake; Akolkar, Beena; Ziegler, Anette G; Krischer, Jeffrey P
2014-07-01
The Environmental Determinants of Diabetes in the Young planned biomarker discovery studies on longitudinal samples for persistent confirmed islet cell autoantibodies and type 1 diabetes using dietary biomarkers, metabolomics, microbiome/viral metagenomics and gene expression. This article describes the details of planning The Environmental Determinants of Diabetes in the Young biomarker discovery studies using a nested case-control design that was chosen as an alternative to the full cohort analysis. In the frame of a nested case-control design, it guides the choice of matching factors, selection of controls, preparation of external quality control samples and reduction of batch effects along with proper sample allocation. Our design is to reduce potential bias and retain study power while reducing the costs by limiting the numbers of samples requiring laboratory analyses. It also covers two primary end points (the occurrence of diabetes-related autoantibodies and the diagnosis of type 1 diabetes). The resulting list of case-control matched samples for each laboratory was augmented with external quality control samples. Copyright © 2013 John Wiley & Sons, Ltd.
Planetary Sample Caching System Design Options
NASA Technical Reports Server (NTRS)
Collins, Curtis; Younse, Paulo; Backes, Paul
2009-01-01
Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: (1) transfer of a raw sample from the tool to the SHEC subsystem, and (2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change-out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.
NASA Astrophysics Data System (ADS)
Rieben, James C., Jr.
This study focuses on the effects of relevance and lab design on student learning within the chemistry laboratory environment. A general chemistry conductivity of solutions experiment and an upper level organic chemistry cellulose regeneration experiment were employed. In the conductivity experiment, the two main variables studied were the effect of relevant (or "real world") samples on student learning and a verification-based lab design versus a discovery-based lab design. With the cellulose regeneration experiment, the effect of a discovery-based lab design vs. a verification-based lab design was the sole focus. Evaluation surveys consisting of six questions were used at three different times to assess student knowledge of experimental concepts. In the general chemistry laboratory portion of this study, four experimental variants were employed to investigate the effect of relevance and lab design on student learning. These variants consisted of a traditional (or verification) lab design, a traditional lab design using "real world" samples, a new lab design employing real world samples/situations using unknown samples, and the new lab design using real world samples/situations that were known to the student. Data used in this analysis were collected during the Fall 08, Winter 09, and Fall 09 terms. For the second part of this study a cellulose regeneration experiment was employed to investigate the effects of lab design. A demonstration creating regenerated cellulose "rayon" was modified and converted to an efficient and low-waste experiment. In the first variant students tested their products and verified a list of physical properties. In the second variant, students filled in a blank physical property chart with their own experimental results for the physical properties. Results from the conductivity experiment show significant student learning of the effects of concentration on conductivity and how to use conductivity to differentiate solution types with the use of real world samples. In the organic chemistry experiment, results suggest that the discovery-based design improved student retention of the chain length differentiation by physical properties relative to the verification-based design.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm, on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
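The midpoint-versus-random distinction is easy to reproduce. The sketch below generates both kinds of initial LHS designs and scores them with a maximin-distance criterion; note that it compares the initial designs only, whereas the study's claim concerns the designs obtained after feeding each into an OLHS optimizer.

```python
import numpy as np

rng = np.random.default_rng(42)

def lhs(n, dim, midpoint=True):
    """Latin hypercube sample: one point per interval in each dimension,
    placed at the interval midpoint or uniformly at random within it."""
    u = 0.5 if midpoint else rng.random((n, dim))
    perms = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (perms + u) / n

def maximin(x):
    """Minimum pairwise distance (larger = better space-filling)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), k=1)].min()

for label, mid in (("midpoint LHS", True), ("random LHS", False)):
    scores = [maximin(lhs(30, 2, mid)) for _ in range(200)]
    print(label, round(float(np.mean(scores)), 4))
```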
The Sampling Design of the China Family Panel Studies (CFPS)
Xie, Yu; Lu, Ping
2018-01-01
The China Family Panel Studies (CFPS) is an on-going, nearly nationwide, comprehensive, longitudinal social survey that is intended to serve research needs on a large variety of social phenomena in contemporary China. In this paper, we describe the sampling design of the CFPS sample for its 2010 baseline survey and methods for constructing weights to adjust for sampling design and survey nonresponses. Specifically, the CFPS used a multi-stage probability strategy to reduce operation costs and implicit stratification to increase efficiency. Respondents were oversampled in five provinces or administrative equivalents for regional comparisons. We provide operation details for both sampling and weights construction.
Zhu, Hong; Xu, Xiaohan; Ahn, Chul
2017-01-01
The paired experimental design is widely used in clinical and health behavioral studies, where each study unit contributes a pair of observations. Investigators often encounter incomplete observations of paired outcomes in the data collected. Some study units contribute complete pairs of observations, while others contribute only pre- or post-intervention observations. Statistical inference for paired experimental designs with incomplete observations of continuous outcomes has been extensively studied in the literature. However, sample size methods for such study designs are sparsely available. We derive a closed-form sample size formula based on the generalized estimating equation approach by treating the incomplete observations as missing data in a linear model. The proposed method properly accounts for the impact of the mixed structure of the observed data: a combination of paired and unpaired outcomes. The sample size formula is flexible enough to accommodate different missing patterns, magnitudes of missingness, and correlation parameter values. We demonstrate that under complete observations, the proposed generalized estimating equation sample size estimate is the same as that based on the paired t-test. In the presence of missing data, the proposed method leads to a more accurate sample size estimate than the crude adjustment. Simulation studies are conducted to evaluate the finite-sample performance of the generalized estimating equation sample size formula. A real application example is presented for illustration.
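The article notes that under complete observations its GEE-based estimate coincides with the classical paired t-test sample size, which provides a convenient baseline to sketch (normal-approximation version; the effect size and SD of within-pair differences below are placeholders). The missing-data-adjusted formula itself is not reproduced here.

```python
import math
from scipy.stats import norm

def paired_n(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed to detect mean change `delta` given the standard
    deviation of within-pair differences; per the article, the GEE-based
    formula reduces to this classical result when all pairs are complete."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((z * sd_diff / delta) ** 2)

print(paired_n(delta=2.0, sd_diff=5.0))   # e.g. 50 pairs
```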
Yilmaz, Yildiz E; Bull, Shelley B
2011-11-29
Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We account for differential sampling of individuals with informative trait values by inverse probability weighting using standard survey methods, so that estimates generalize to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and a worthwhile reduction of sequencing costs.
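A minimal sketch of the inverse-probability-weighting idea, under assumptions of our own (an additive genetic effect, an invented two-level selection rule, and Horvitz-Thompson-style weighted least squares):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 2000
    geno = rng.binomial(2, 0.3, N)              # additive genotype coding (0/1/2)
    trait = 0.25 * geno + rng.normal(size=N)    # quantitative trait

    # Illustrative selection rule: everyone has a nonzero probability, but
    # phenotypes beyond one SD of the mean are selected far more often.
    p_sel = np.where(np.abs(trait - trait.mean()) > trait.std(), 0.85, 0.25)
    sel = rng.random(N) < p_sel

    # Weighted least squares with weights 1/p_sel, which generalizes
    # estimates from the biased sample to the source cohort.
    X = np.column_stack([np.ones(sel.sum()), geno[sel]])
    w = 1.0 / p_sel[sel]
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * trait[sel]))
    print("IPW estimate of the genetic effect:", round(beta[1], 3))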
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that readily allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as mere mathematical devices for analyzing repeated-measures data. In contrast, a modern view of these models attributes to them an important mathematical role in theoretical formulations of personalized medicine, because these models not only have parameters that represent average patients but also parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to present the statistical modeling of data from bioequivalence studies in a way that highlights this modern view, with special emphasis on power analyses and sample-size computations.
Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries
McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.
2013-01-01
Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
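The flavor of such a simulation can be sketched as follows; the count-generating model, number of periods, and sample size are invented for illustration and do not reproduce the study's fisheries data.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical instantaneous angler counts for 48 half-hour periods in a
    # day, with a mid-day peak; the "true" daily effort index is their sum.
    counts = rng.poisson(20, 48) + 10 * np.sin(np.linspace(0, np.pi, 48))
    true_effort = counts.sum()

    n, reps = 8, 10000
    est_srs, est_sys = np.empty(reps), np.empty(reps)
    for r in range(reps):
        idx = rng.choice(48, n, replace=False)       # simple random sample (SRS)
        est_srs[r] = counts[idx].mean() * 48
        start = rng.integers(48 // n)                # systematic sample (SYS)
        est_sys[r] = counts[start::48 // n].mean() * 48

    for name, est in (("SRS", est_srs), ("SYS", est_sys)):
        print(name, "bias:", round(est.mean() - true_effort, 2),
              "MSE:", round(((est - true_effort) ** 2).mean(), 1))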
NASA Astrophysics Data System (ADS)
Beaty, David W.; Allen, Carlton C.; Bass, Deborah S.; Buxbaum, Karen L.; Campbell, James K.; Lindstrom, David J.; Miller, Sylvia L.; Papanastassiou, Dimitri A.
2009-10-01
It has been widely understood for many years that an essential component of a Mars Sample Return mission is a Sample Receiving Facility (SRF). The purpose of such a facility would be to take delivery of the flight hardware that lands on Earth, open the spacecraft and extract the sample container and samples, and conduct an agreed-upon test protocol, while ensuring strict containment and contamination control of the samples while in the SRF. Any samples that are found to be non-hazardous (or are rendered non-hazardous by sterilization) would then be transferred to long-term curation. Although the general concept of an SRF is relatively straightforward, there has been considerable discussion about implementation planning. The Mars Exploration Program carried out an analysis of the attributes of an SRF to establish its scope, including minimum size and functionality, budgetary requirements (capital cost, operating costs, cost profile), and development schedule. The approach was to arrange for three independent design studies, each led by an architectural design firm, and compare the results. While there were many design elements in common identified by each study team, there were significant differences in the way human operators were to interact with the systems. In aggregate, the design studies provided insight into the attributes of a future SRF and the complex factors to consider for future programmatic planning.
Zeng, Yaohui; Singh, Sachinkumar; Wang, Kai; Ahrens, Richard C
2018-04-01
Pharmacodynamic studies that use methacholine challenge to assess bioequivalence of generic and innovator albuterol formulations are generally designed per published Food and Drug Administration guidance, with 3 reference doses and 1 test dose (3-by-1 design). These studies are challenging and expensive to conduct, typically requiring large sample sizes. We proposed 14 modified study designs as alternatives to the Food and Drug Administration-recommended 3-by-1 design, hypothesizing that adding reference and/or test doses would reduce sample size and cost. We used Monte Carlo simulation to estimate sample size. Simulation inputs were selected based on published studies and our own experience with this type of trial. We also estimated effects of these modified study designs on study cost. Most of these altered designs reduced sample size and cost relative to the 3-by-1 design, some decreasing cost by more than 40%. The most effective single study dose to add was 180 μg of test formulation, which resulted in an estimated 30% relative cost reduction. Adding a single test dose of 90 μg was less effective, producing only a 13% cost reduction. Adding a lone reference dose of either 180, 270, or 360 μg yielded little benefit (less than 10% cost reduction), whereas adding 720 μg resulted in a 19% cost reduction. Of the 14 study design modifications we evaluated, the most effective was addition of both a 90-μg test dose and a 720-μg reference dose (42% cost reduction). Combining a 180-μg test dose and a 720-μg reference dose produced an estimated 36% cost reduction. © 2017, The Authors. The Journal of Clinical Pharmacology published by Wiley Periodicals, Inc. on behalf of American College of Clinical Pharmacology.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
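A hedged sketch of this kind of simulation for the 67x3 design, with intracluster correlation induced by a beta-binomial model; the decision-rule threshold and parameter values below are illustrative assumptions, not those of the study.

    import numpy as np

    def prob_classify_high(p_true, icc, n_clusters=67, m=3, d_rule=20,
                           reps=10000, seed=0):
        # Simulate clustered GAM outcomes with a beta-binomial model, where
        # icc = 1 / (a + b + 1), and apply the rule "classify prevalence as
        # high if more than d_rule cases occur among n_clusters * m children".
        rng = np.random.default_rng(seed)
        s = 1.0 / icc - 1.0
        p_cluster = rng.beta(p_true * s, (1 - p_true) * s, (reps, n_clusters))
        cases = rng.binomial(m, p_cluster).sum(axis=1)
        return (cases > d_rule).mean()

    # Misclassification at a low and a high true prevalence (rule invented):
    print(prob_classify_high(0.05, icc=0.05))        # false "high" rate
    print(1 - prob_classify_high(0.15, icc=0.05))    # false "low" rate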
77 FR 5519 - Agency Forms Undergoing Paperwork Reduction Act Review
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-03
... sample frame will be generated from the U.S. Census of Governments. The proposed pilot study is designed to address three key methodological objectives. The first objective is to test the feasibility of the proposed sampling frame and to answer sample design issues related to determining sampling criteria for...
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, sample sizes, and allocations between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.
1998-01-01
Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard Ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one Ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or for Chironomidae and Musculium in both strata, given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.
Study of sample drilling techniques for Mars sample return missions
NASA Technical Reports Server (NTRS)
Mitchell, D. C.; Harris, P. T.
1980-01-01
To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission, the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores, including prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth but too small to be cored with the rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on board the Earth return vehicle. A preliminary design of a lightweight rover-mounted sampling scoop was also developed.
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
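The core tradeoff can be sketched by pairing a standard binomial sample-size formula with a simple linear cost model; the unit costs below are hypothetical placeholders rather than the study's calibrated values.

    import math

    def n_required(p=0.85, half_width=0.05, z=1.96):
        # Binomial sample size for estimating overall map accuracy p to
        # within +/- half_width at roughly 95% confidence.
        return math.ceil(z**2 * p * (1 - p) / half_width**2)

    def total_cost(n, c_travel=12.0, c_field=8.0, c_lab=5.0):
        # Simple per-sample cost model (transportation + field + lab);
        # the unit costs are hypothetical placeholders.
        return n * (c_travel + c_field + c_lab)

    for hw in (0.10, 0.05, 0.025):
        n = n_required(half_width=hw)
        print(f"half-width {hw:.3f}: n = {n:4d}, cost = {total_cost(n):8.0f}")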
Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring, 2007–2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
Corn blight review: Sampling model and ground data measurements program
NASA Technical Reports Server (NTRS)
Allen, R. D.
1972-01-01
The sampling plan involved the selection of the study area, determination of the flightline and segment sample design within the study area, and determination of a field sample design. Initial interview survey data consisting of crop species acreage and land use were collected. For all corn fields, additional information such as seed type, row direction, population, planting date, etc. was also collected. From this information, sample corn fields were selected to be observed through the growing season on a biweekly basis by county extension personnel.
Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.
Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo
2012-01-01
The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
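A sketch of a weighted empirical AUC of the kind such designs call for, using the Mann-Whitney kernel with pair weights built from unit-level weights (under TDS these would be inverse selection probabilities); this illustrates the idea rather than reproducing the authors' estimator.

    import numpy as np

    def weighted_auc(x_case, x_ctrl, w_case, w_ctrl):
        # Weighted Mann-Whitney estimate: weighted proportion of
        # case-control pairs in which the case's marker value exceeds the
        # control's (ties count one half); pair weight = product of weights.
        diff = x_case[:, None] - x_ctrl[None, :]
        kernel = (diff > 0) + 0.5 * (diff == 0)
        w = w_case[:, None] * w_ctrl[None, :]
        return (w * kernel).sum() / w.sum()

    rng = np.random.default_rng(0)
    x_case, x_ctrl = rng.normal(1, 1, 120), rng.normal(0, 1, 200)
    # With equal weights this reduces to the usual empirical AUC (~0.76 here).
    print(weighted_auc(x_case, x_ctrl, np.ones(120), np.ones(200)))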
On the importance of incorporating sampling weights in ...
Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found the traditional single season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose h
The Neighbourhood Effects on Health and Well-being (NEHW) study.
O'Campo, Patricia; Wheaton, Blair; Nisenbaum, Rosane; Glazier, Richard H; Dunn, James R; Chambers, Catharine
2015-01-01
Many cross-sectional studies of neighbourhood effects on health do not employ strong study design elements. The Neighbourhood Effects on Health and Well-being (NEHW) study, a random sample of 2412 English-speaking Toronto residents (age 25-64), utilises strong design features for sampling neighbourhoods and individuals, characterising neighbourhoods using a variety of data sources, measuring a wide range of health outcomes, and analysing cross-level interactions. We describe here methodological issues that shaped the design and analysis features of the NEHW study to ensure that, while a cross-sectional sample, it will advance the quality of evidence emerging from observational studies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review discusses the relative usefulness of sparse versus rich data. It is intended to educate the clinician, as well as the basic research scientist, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742
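The regression-model extrapolation can be sketched as follows: TM-derived deforestation is regressed on MODIS-derived deforestation over the sampled blocks, and the fitted values are summed over all blocks; all numbers below are simulated placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical per-block deforestation (km^2): MODIS-derived values are
    # available for all 500 blocks; TM (Landsat)-derived values only for a
    # sample of blocks.
    modis_all = rng.gamma(2.0, 5.0, 500)
    tm_all = np.clip(0.8 * modis_all + rng.normal(0, 2, 500), 0, None)

    sample = rng.choice(500, 60, replace=False)
    x, y = modis_all[sample], tm_all[sample]

    # Linear regression estimator: fit TM ~ MODIS on the sampled blocks,
    # then sum the fitted values over every block in the study area.
    b1, b0 = np.polyfit(x, y, 1)
    total_est = (b0 + b1 * modis_all).sum()
    print("estimated total:", round(total_est), " true total:", round(tm_all.sum()))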
Sample size determination for mediation analysis of longitudinal data.
Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying
2018-03-27
Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample sizes required to achieve 80% power, obtained by simulation under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method, and the bootstrap method. Among the three methods of testing the mediation effect, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation): a larger ICC typically required a larger sample size. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice are provided for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrap method perform better than Sobel's method, but the distribution of the product method is recommended for practice because it carries a smaller computational load than the bootstrap method. An R package has been developed for sample size determination via the distribution of the product method in longitudinal mediation study designs.
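A simplified, cross-sectional analogue of such a power simulation for the Sobel test in a single-level X -> M -> Y model is sketched below; the paper's multilevel longitudinal setting adds within-subject correlation that this sketch omits.

    import numpy as np
    from scipy import stats

    def slope_se(X, y):
        # OLS estimate and standard error for the first predictor in X
        # (an intercept column is prepended internally).
        X = np.column_stack([np.ones(len(y)), X])
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        s2 = resid @ resid / (len(y) - X.shape[1])
        return beta[1], np.sqrt(s2 * XtX_inv[1, 1])

    def sobel_power(n, a=0.3, b=0.3, reps=2000, alpha=0.05, seed=0):
        # Empirical power of the Sobel test for the mediated effect a*b
        # in a simple X -> M -> Y model with unit error variances.
        rng = np.random.default_rng(seed)
        zcrit = stats.norm.ppf(1 - alpha / 2)
        hits = 0
        for _ in range(reps):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)
            y = b * m + rng.normal(size=n)
            a_hat, sa = slope_se(x, m)
            b_hat, sb = slope_se(np.column_stack([m, x]), y)  # b adjusted for x
            z = a_hat * b_hat / np.sqrt(a_hat**2 * sb**2 + b_hat**2 * sa**2)
            hits += abs(z) > zcrit
        return hits / reps

    print(sobel_power(n=150))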
Practical Guidelines for Evaluating Sampling Designs in Survey Studies.
ERIC Educational Resources Information Center
Fan, Xitao; Wang, Lin
The popularity of sample surveys in evaluation and research makes it necessary for consumers to tell a good survey from a poor one. Several sources were identified that gave advice on how to evaluate a sample design used in a survey study. The sources are either too limited or too extensive to be useful practically. The purpose of this paper is to…
Low-cost floating emergence net and bottle trap: Comparison of two designs
Cadmus, Pete; Pomeranz, Justin; Kraus, Johanna M.
2016-01-01
Sampling emergent aquatic insects is of interest to many freshwater ecologists. Many quantitative emergence traps require the use of aspiration for collection. However, aspiration is infeasible in studies with the large amount of replication often required in large biomonitoring projects. We designed an economical, collapsible, pyramid-shaped floating emergence trap with an external collection bottle that avoids the need for aspiration. This design was compared experimentally to a design of similar dimensions that relied on aspiration, to ensure comparable results. The pyramid-shaped design captured twice as many total emerging insects. When a preservative was used in the bottle collectors, >95% of the emergent abundance was collected in the bottle. When no preservative was used, >81% of the total insects were collected from the bottle. In addition to capturing fewer emergent insects, the traps that required aspiration took significantly longer to sample. Large studies and studies sampling remote locations could benefit from the economical construction, speed of sampling, and capture efficiency.
Sun, Zhichao; Mukherjee, Bhramar; Estes, Jason P; Vokonas, Pantel S; Park, Sung Kyun
2017-08-15
Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for gene-environment (G × E) interactions. To date, literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data. In this work, we propose stratified sampling based on summaries of individual exposures and outcome trajectories and develop a full conditional likelihood approach for estimation that adjusts for the biased sample. We compare the performance of our proposed design and analysis with combinations of different sampling designs and estimation approaches via simulation. We observe that the full conditional likelihood provides improved estimates for the G × E interaction and joint exposure effects over uncorrected complete-case analysis, and the exposure enriched outcome trajectory dependent design outperforms other designs in terms of estimation efficiency and power for detection of the G × E interaction. We also illustrate our design and analysis using data from the Normative Aging Study, an ongoing longitudinal cohort study initiated by the Veterans Administration in 1963. Copyright © 2017 John Wiley & Sons, Ltd.
Optimal two-phase sampling design for comparing accuracies of two binary classification rules.
Xu, Huiping; Hui, Siu L; Grannis, Shaun
2014-02-10
In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.
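The spirit of the optimization can be sketched with classical Neyman allocation, which over-samples the more variable strata (here, cells of the phase-1 cross-classification of the two rules, with discordant cells assumed more variable); the counts and standard deviations are invented, and this is not the authors' exact variance-minimizing scheme.

    import numpy as np

    def neyman_allocation(N_h, S_h, n_total):
        # Allocate the phase-2 (gold standard) sample across strata in
        # proportion to N_h * S_h, which minimizes the variance of a
        # stratified estimator for a fixed total sample size.
        alloc = N_h * S_h / (N_h * S_h).sum() * n_total
        return np.maximum(1, np.round(alloc)).astype(int)

    # Strata = cells of the phase-1 cross-classification of the two rules
    # (both negative, discordant x2, both positive); discordant cells are
    # assumed more variable and are therefore over-sampled.
    N_h = np.array([5000, 300, 250, 4450])
    S_h = np.array([0.10, 0.45, 0.45, 0.10])
    print(neyman_allocation(N_h, S_h, n_total=400))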
Prediction of reinforced concrete strength by ultrasonic velocities
NASA Astrophysics Data System (ADS)
Sabbağ, Nevbahar; Uyanık, Osman
2017-06-01
This study aimed to determine the strength of reinforced concrete and to reveal the effect of reinforcement on concrete strength using ultrasonic P- and S-wave velocities. Studies were conducted on nine concrete designs prepared to exhibit low, medium, and high strength. For each design, four kinds of cubic samples were prepared: unreinforced, and reinforced with 10, 14, or 20 mm diameter bars. Studies were carried out on a total of 324 samples, nine samples for each of the four kinds per design. The prepared samples were subjected to water curing. On selected days of the 90-day period, P- and S-wave measurements were repeated to reveal changes in the seismic velocities of the samples depending on whether they were reinforced and on the reinforcement diameter. In addition, comparisons were made by performing uniaxial compressive strength tests, crushing three samples on the 7th, 28th, and 90th days. As a result of these studies and evaluations, it was seen that seismic velocities and uniaxial compressive strength increased with reinforcement and reinforcement diameter in low-strength concretes. In high-strength concrete, however, the seismic velocities were not markedly affected by reinforcement or reinforcement diameter, while uniaxial compressive strength values were negatively affected.
Whitmore, Roy W; Chen, Wenlin
2013-12-04
The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates. The sampling frame encompassed 15,840 groundwater community water systems (CWS) in 21 states throughout the U.S. Median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to the target population.
ELECTROFISHING IN BOATABLE RIVERS: DOES SAMPLING DESIGN AFFECT BIOASSESSMENT METRICS?
The accurate bioassessment of boatable rivers using fish assemblage data requires that a representative sample of the assemblage be collected. In this study, data were collected using an electrofishing design that permitted comparisons of the effects of designs and distances on ...
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
Narragansett Bay (NB) has been extensively sampled over the last 50 years by various government agencies, academic institutions, and private groups. To date, most spatial research conducted within the estuary has employed deterministic sampling designs. Several studies have used ...
Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David
2011-12-01
Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or the traditional two-stage design. We then discussed analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, in which multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
Sample size in studies on diagnostic accuracy in ophthalmology: a literature survey.
Bochmann, Frank; Johnson, Zoe; Azuara-Blanco, Augusto
2007-07-01
To assess the sample sizes used in studies on diagnostic accuracy in ophthalmology. Design and sources: a survey of the literature published in 2005. The frequency with which sample size calculations were reported, and the sample sizes used, were extracted from the published literature. A manual search of the five leading clinical journals in ophthalmology with the highest impact (Investigative Ophthalmology and Visual Science, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology, and British Journal of Ophthalmology) was conducted by two independent investigators. A total of 1698 articles were identified, of which 40 were studies on diagnostic accuracy. One study reported that the sample size was calculated before initiating the study; another reported consideration of sample size without calculation. The mean (SD) sample size of all diagnostic studies was 172.6 (218.9). The median prevalence of the target condition was 50.5%. Only a few studies consider sample size in their methods. Inadequate sample sizes in diagnostic accuracy studies may result in misleading estimates of test accuracy. An improvement over the current standards on the design and reporting of diagnostic studies is warranted.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
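A minimal implementation of the method-of-moments (Matheron) variogram estimator that both versions of this study treat as the baseline, applied to simulated skewed, throughfall-like data; the binning choices are illustrative.

    import numpy as np

    def empirical_variogram(coords, values, n_bins=12, max_dist=None):
        # Matheron's estimator: gamma(h) = (1/2) * mean of squared value
        # differences over all point pairs whose separation falls in the
        # distance bin centered at lag h.
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(values), k=1)       # each pair once
        d, sq = d[iu], sq[iu]
        if max_dist is None:
            max_dist = d.max() / 2
        edges = np.linspace(0, max_dist, n_bins + 1)
        lag = (edges[:-1] + edges[1:]) / 2
        which = np.digitize(d, edges) - 1
        gamma = np.array([sq[which == b].mean() / 2 for b in range(n_bins)])
        return lag, gamma

    rng = np.random.default_rng(0)
    pts = rng.random((150, 2)) * 50                  # 150 collectors on a 50 m plot
    vals = rng.gamma(1.5, 2.0, 150)                  # skewed, throughfall-like data
    print(empirical_variogram(pts, vals))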
The application of mixed methods designs to trauma research.
Creswell, John W; Zhang, Wanqing
2009-12-01
Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.
Ding, Jieli; Zhou, Haibo; Liu, Yanyan; Cai, Jianwen; Longnecker, Matthew P.
2014-01-01
Motivated by the need from our ongoing environmental study in the Norwegian Mother and Child Cohort (MoBa) study, we consider an outcome-dependent sampling (ODS) scheme for failure-time data with censoring. Like the case-cohort design, the ODS design enriches the observed sample by selectively including certain failure subjects. We present an estimated maximum semiparametric empirical likelihood estimation (EMSELE) approach under the proportional hazards model framework. The asymptotic properties of the proposed estimator are derived. Simulation studies were conducted to evaluate the small-sample performance of the proposed method. Our analyses show that the proposed estimator and design are more efficient than the current default approach and other competing approaches. Applying the proposed approach to the data set from the MoBa study, we found a significant effect of an environmental contaminant on fecundability. PMID:24812419
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives: Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods: Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. Results: The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions: The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
ERIC Educational Resources Information Center
Puma, Michael J.; Ellis, Richard
Part of a study of program management procedures in the campus-based and Basic Educational Opportunity Grant programs reports on the design of the site visit component of the study and the results of the student survey, both in terms of the yield obtained and the quality of the data. Chapter 2 describes the design of sampling methodology employed…
A Simple and Robust Method for Partially Matched Samples Using the P-Values Pooling Approach
Kuan, Pei Fen; Huang, Bo
2013-01-01
This paper focuses on statistical analyses in scenarios where some samples from the matched pairs design are missing, resulting in partially matched samples. Motivated by the idea of meta-analysis, we recast the partially matched samples as coming from two experimental designs, and propose a simple yet robust approach based on the weighted Z-test to integrate the p-values computed from these two designs. We show that the proposed approach achieves better operating characteristics in simulations and a case study, compared to existing methods for partially matched samples. PMID:23417968
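The weighted Z-test pooling can be sketched directly from its definition: each design's one-sided p-value is converted to a Z-score and the two are combined with weights, with square roots of the respective sample sizes being one common choice.

    import numpy as np
    from scipy import stats

    def weighted_z(p1, p2, w1, w2):
        # Stouffer-type weighted Z-test: convert one-sided p-values to
        # Z-scores and combine as (w1*z1 + w2*z2) / sqrt(w1^2 + w2^2).
        z1, z2 = stats.norm.isf(p1), stats.norm.isf(p2)
        return stats.norm.sf((w1 * z1 + w2 * z2) / np.hypot(w1, w2))

    # p1 from the paired analysis of the n1 complete pairs, p2 from the
    # two-sample analysis of the unmatched observations.
    n1, n2 = 40, 25
    print(weighted_z(0.03, 0.20, np.sqrt(n1), np.sqrt(n2)))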
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
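The noncentral-t calculation sketched in this abstract is easy to reproduce for the two-sample parallel design testing equality of means. The fragment below is a generic illustration, not the authors' tables: it searches for the smallest per-group n whose exact power, computed from the noncentral t distribution, reaches the target; delta is the clinically meaningful difference and sigma the common standard deviation.

    from scipy import stats

    def n_per_group_two_sample(delta, sigma, alpha=0.05, power=0.80):
        # Smallest per-group n for a two-sided two-sample t-test of equality
        # (the negligible lower rejection tail is ignored).
        n = 2
        while True:
            df = 2 * n - 2
            nc = delta / sigma * (n / 2) ** 0.5       # noncentrality parameter
            tcrit = stats.t.isf(alpha / 2, df)        # two-sided critical value
            if stats.nct.sf(tcrit, df, nc) >= power:  # P(reject | delta)
                return n
            n += 1

    # Detecting delta = 0.5*sigma with 80% power needs 64 subjects per group.
    print(n_per_group_two_sample(0.5, 1.0))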
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617) two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost- and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.
Simulation of Wind Profile Perturbations for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
2004-01-01
Ideally, a statistically representative sample of measured high-resolution wind profiles with wavelengths as small as tens of meters is required in design studies to establish aerodynamic load indicator dispersions and vehicle control system capability. At most potential launch sites, high-resolution wind profiles may not exist. Representative samples of Rawinsonde wind profiles to altitudes of 30 km are more likely to be available from the extensive network of measurement sites established for routine sampling in support of weather observing and forecasting activity. Such a sample, large enough to be statistically representative of relatively large wavelength perturbations, would be inadequate for launch vehicle design assessments because the Rawinsonde system accurately measures wind perturbations with wavelengths no smaller than 2000 m (1000 m altitude increment). The Kennedy Space Center (KSC) Jimsphere wind profiles (150/month and seasonal 2 and 3.5-hr pairs) are the only adequate samples of high-resolution profiles (approx. 150 to 300 m effective resolution, but over-sampled at 25 m intervals) that have been used extensively for launch vehicle design assessments. Therefore, a simulation process has been developed for enhancement of measured low-resolution Rawinsonde profiles that would be applicable in preliminary launch vehicle design studies at launch sites other than KSC.
Sample design effects in landscape genetics
Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.
2012-01-01
An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational times after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that the random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and numbers of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
Mays, Darren; Gatti, Margaret E; Thompson, Nancy J
2011-06-01
Sports participation, while offering numerous developmental benefits for adolescents, has been associated with alcohol use in prior research. However, the relationship between sports participation and alcohol use among adolescents remains unclear, particularly how research design elements impact evidence of this relationship. We reviewed the evidence regarding sports participation and alcohol use among adolescents, with a focus on examining the potential impact of research design elements on this evidence. Studies were assessed for eligibility and coded based on research design elements including: study design, sampling method, sample size, and measures of sports participation and alcohol use. Fifty-four studies were assessed for eligibility, 29 of which were included in the review. Nearly two-thirds used a cross-sectional design and a random sampling method, with sample sizes ranging from 178 to 50,168 adolescents (Median = 1,769). Sixteen studies used a categorical measure of sports participation, while 7 applied an index-type measure and 6 employed some other measure of sports participation. Most studies assessed alcohol-related behaviors (n = 18) through categorical measures, while only 6 applied frequency-only measures of alcohol use, 1 study applied quantity-only measures, and 3 studies used quantity-and-frequency measures. Sports participation has been defined and measured in various ways, most of which do not differentiate between interscholastic and community-based contexts, confounding this relationship. Stronger measures of both sports participation and alcohol use need to be applied in future studies to advance our understanding of this relationship among youths.
ERIC Educational Resources Information Center
Al-Balushi, Sulaiman M.; Al-Aamri, Shamsa S.
2014-01-01
The current study explores the effectiveness of involving students in environmental science projects on their environmental knowledge and attitudes towards science. The study design is a quasi-experimental pre-post control group design. The sample was 62 11th-grade female students studying at a public school in Oman. The sample was divided into…
ERIC Educational Resources Information Center
Courtney, E. Wayne
This report was designed to present an example of a research study involving the use of coefficients of orthogonal comparisons in analysis of variance tests of significance. A sample research report and analysis was included so as to lead the reader through the design steps. The sample study was designed to determine the extent of attitudinal…
Information content of household-stratified epidemics.
Kinyanjui, T M; Pellis, L; House, T
2016-09-01
Household structure is a key driver of many infectious diseases, as well as a natural target for interventions such as vaccination programs. Many theoretical and conceptual advances in household-stratified epidemic models are relatively recent, but have successfully managed to increase the applicability of such models to practical problems. To be of maximum realism and hence benefit, they require parameterisation from epidemiological data, and while household-stratified final-size data have been the traditional source, increasingly time-series infection data from households are becoming available. This paper is concerned with the design of studies aimed at collecting time-series epidemic data in order to maximize the amount of information available to calibrate household models. A design decision involves a trade-off between the number of households to enrol and the sampling frequency. Two commonly used epidemiological study designs are considered: cross-sectional, where different households are sampled at every time point, and cohort, where the same households are followed over the course of the study period. The search for an optimal design uses Bayesian computationally intensive methods to explore the joint parameter-design space combined with the Shannon entropy of the posteriors to estimate the amount of information in each design. For the cross-sectional design, the amount of information increases with the sampling intensity, i.e., the designs with the highest number of time points have the most information. On the other hand, the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing epidemiological data collection studies. Prospective problem-specific use of our computational methods can bring significant benefits in guiding future study designs. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
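As a toy illustration of the entropy criterion above, the Shannon (differential) entropy of a scalar posterior can be estimated from MCMC draws with a simple histogram estimator; the design whose posterior has the lower entropy carries more information. This sketch is schematic and is not the authors' implementation.

    import numpy as np

    def posterior_entropy(samples, bins=50):
        # Differential entropy (nats): -sum over bins of mass * log(density).
        dens, edges = np.histogram(samples, bins=bins, density=True)
        mass = dens * np.diff(edges)
        nz = mass > 0
        return -np.sum(mass[nz] * np.log(dens[nz]))

    # A tighter posterior (more intensive follow-up) scores lower entropy.
    rng = np.random.default_rng(1)
    print(posterior_entropy(rng.normal(0.3, 0.20, 10_000)))  # sparse sampling
    print(posterior_entropy(rng.normal(0.3, 0.05, 10_000)))  # frequent sampling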
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered, unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered, unequal-probability-of-selection sample designs.
2017-09-01
Metabolomic analysis will be performed on pre-collected plasma samples from a study that had a two-group cross-sectional design, in which the main comparisons were with medically ... controls.
Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M
2015-01-01
Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort and the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample, and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formulas for the power of a nested case-control design and of a case-cohort design are directly connected to the power of a cohort study through the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients when using a sampling design instead of analyzing the full cohort.
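Schoenfeld's formula, on which the power comparisons above rest, gives the number of events required for a Cox model with a binary covariate. The sketch below is the standard textbook version (prevalence of exposure and hazard ratio are user inputs), not the paper's extended formulas; with m matched controls per case, a nested case-control design inflates the required events by roughly (m + 1)/m.

    import math
    from scipy import stats

    def schoenfeld_events(hr, p_exposed, alpha=0.05, power=0.80):
        # D = (z_{1-alpha/2} + z_{power})^2 / (p(1-p) * log(HR)^2)
        za = stats.norm.isf(alpha / 2)
        zb = stats.norm.ppf(power)
        return (za + zb) ** 2 / (p_exposed * (1 - p_exposed) * math.log(hr) ** 2)

    # Hazard ratio 2.0 with half the cohort exposed: about 66 events needed.
    print(schoenfeld_events(2.0, 0.5))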
Improving power and robustness for detecting genetic association with extreme-value sampling design.
Chen, Hua Yun; Li, Mingyao
2011-12-01
The extreme-value sampling design, which samples subjects with extremely large or small quantitative trait values, is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C) that includes study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative trait and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.
Uyei, Jennifer; Braithwaite, R Scott
2016-01-01
Despite the benefits of the placebo-controlled trial design, it is limited by its inability to quantify total benefits and harms. Such trials, for example, are not designed to detect an intervention's placebo or nocebo effects, which if detected could alter the benefit-to-harm balance and change a decision to adopt or reject an intervention. In this article, we explore scenarios in which alternative experimental trial designs, which differ in the type of control used, influence expected value across a range of pretest assumptions and study sample sizes. We developed a decision model to compare 3 trial designs and their implications for decision making: 2-arm placebo-controlled trial ("placebo-control"), 2-arm intervention v. do nothing trial ("null-control"), and an innovative 3-arm trial design: intervention v. do nothing v. placebo trial ("novel design"). Four scenarios were explored regarding particular attributes of a hypothetical intervention: 1) all benefits and no harm, 2) no biological effect, 3) only biological effects, and 4) surreptitious harm (no biological benefit or nocebo effect). Scenario 1: When sample sizes were very small, the null-control was preferred, but as sample sizes increased, expected value of all 3 designs converged. Scenario 2: The null-control was preferred regardless of sample size when the ratio of placebo to nocebo effect was >1; otherwise, the placebo-control was preferred. Scenario 3: When sample size was very small, the placebo-control was preferred when benefits outweighed harms, but the novel design was preferred when harms outweighed benefits. Scenario 4: The placebo-control was preferred when harms outweighed placebo benefits; otherwise, preference went to the null-control. Scenarios are hypothetical, study designs have not been tested in a real-world setting, blinding is not possible in all designs, and some may argue the novel design poses ethical concerns. We identified scenarios in which alternative experimental study designs would confer greater expected value than the placebo-controlled trial design. The likelihood and prevalence of such situations warrant further study. © The Author(s) 2015.
Gopalakrishnan, V; Subramanian, V; Baskaran, R; Venkatraman, B
2015-07-01
A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to the sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
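The precision penalty from clustering that such formulae quantify is conventionally summarized by the design effect. The fragment below shows only the classical two-stage version, deff = 1 + (m - 1)*icc; the report's formulas for regression coefficients are more involved.

    def design_effect(m, icc):
        # Classical design effect for m respondents per cluster with
        # intracluster correlation icc.
        return 1 + (m - 1) * icc

    def effective_sample_size(n_total, m, icc):
        # SRS sample size with the same precision as the clustered design.
        return n_total / design_effect(m, icc)

    # 60 areas x 20 respondents, icc = 0.02: 1200 interviews act like ~870.
    print(effective_sample_size(60 * 20, 20, 0.02))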
Outcome-Dependent Sampling with Interval-Censored Failure Time Data
Zhou, Qingning; Cai, Jianwen; Zhou, Haibo
2017-01-01
Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well for practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng
2015-01-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
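A grid-based toy version of the expected-relative-entropy criterion conveys the idea: for each candidate sampling location, simulate data from prior draws, compute the posterior on a parameter grid, and average the Kullback-Leibler divergence from the prior. The exponential-decay forward model below is a stand-in for the contaminant transport solver and is purely illustrative.

    import numpy as np

    def expected_info_gain(x, prior_draws, noise_sd, grid, forward, rng):
        # Monte Carlo average of KL(posterior || prior) over simulated data.
        prior = np.full(grid.size, 1.0 / grid.size)  # flat prior on the grid
        gains = []
        for theta in prior_draws:
            y = forward(theta, x) + rng.normal(0.0, noise_sd)
            like = np.exp(-0.5 * ((y - forward(grid, x)) / noise_sd) ** 2)
            post = like * prior
            post /= post.sum()
            nz = post > 0
            gains.append(np.sum(post[nz] * np.log(post[nz] / prior[nz])))
        return float(np.mean(gains))

    forward = lambda theta, x: np.exp(-theta * x)  # toy transport model
    grid = np.linspace(0.1, 2.0, 200)
    rng = np.random.default_rng(0)
    draws = rng.uniform(0.1, 2.0, 200)
    for x in (0.5, 2.0, 5.0):  # rank candidate sampling locations
        print(x, round(expected_info_gain(x, draws, 0.05, grid, forward, rng), 3))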
Motivational Factors and Teachers Commitment in Public Secondary Schools in Mbale Municipality
ERIC Educational Resources Information Center
Olurotimi, Ogunlade Joseph; Asad, Kamonges Wahab; Abdulrauf, Abdulkadir
2015-01-01
The study investigated the influence of motivational factors on teachers' commitment in public secondary schools in Mbale Municipality. The study employed a cross-sectional survey design. The sampling technique used was simple random sampling. The instrument used to collect data was a self-designed questionnaire. The data…
Selection within households in health surveys
Alves, Maria Cecilia Goi Porto; Escuder, Maria Mercedes Loureiro; Claro, Rafael Moreira; da Silva, Nilza Nunes
2014-01-01
OBJECTIVE To compare the efficiency and accuracy of sampling designs including and excluding the sampling of individuals within sampled households in health surveys. METHODS From a population survey conducted in the Baixada Santista Metropolitan Area, SP, Southeastern Brazil, between 2006 and 2007, 1,000 samples were drawn for each design and estimates for people aged 18 to 59 and 18 and over were calculated for each sample. In the first design, 40 census tracts, 12 households per tract, and one person per household were sampled. In the second, no sampling within the household was performed, and 40 census tracts and 6 households for the 18 to 59-year-old group, or 5 or 6 for the 18-and-over age group, were sampled. Precision and bias of proportion estimates for 11 indicators were assessed in the two final sets of the 1,000 selected samples with the two types of design. They were compared by means of relative measurements: coefficient of variation, bias/mean ratio, bias/standard error ratio, and relative mean square error. Comparison of costs contrasted basic cost per person, household cost, and numbers of people and households. RESULTS Bias was found to be negligible for both designs. A lower precision was found for the design including individual sampling within households, and the costs were higher. CONCLUSIONS The design excluding individual sampling achieved higher levels of efficiency and accuracy and, accordingly, should be the first choice for investigators. Sampling of household dwellers should be adopted when there are reasons related to the study subject that may lead to bias in individual responses if multiple dwellers answer the proposed questionnaire. PMID:24789641
ADAPTIVE MATCHING IN RANDOMIZED TRIALS AND OBSERVATIONAL STUDIES
van der Laan, Mark J.; Balzer, Laura B.; Petersen, Maya L.
2014-01-01
In many randomized and observational studies the allocation of treatment among a sample of n independent and identically distributed units is a function of the covariates of all sampled units. As a result, the treatment labels among the units are possibly dependent, complicating estimation and posing challenges for statistical inference. For example, cluster randomized trials frequently sample communities from some target population, construct matched pairs of communities from those included in the sample based on some metric of similarity in baseline community characteristics, and then randomly allocate a treatment and a control intervention within each matched pair. In this case, the observed data can neither be represented as the realization of n independent random variables, nor, contrary to current practice, as the realization of n/2 independent random variables (treating the matched pair as the independent sampling unit). In this paper we study estimation of the average causal effect of a treatment under experimental designs in which treatment allocation potentially depends on the pre-intervention covariates of all units included in the sample. We define efficient targeted minimum loss-based estimators for this general design, present a theorem that establishes the desired asymptotic normality of these estimators and allows for asymptotically valid statistical inference, and discuss implementation of these estimators. We further investigate the relative asymptotic efficiency of this design compared with a design in which unit-specific treatment assignment depends only on the units’ covariates. Our findings have practical implications for the optimal design and analysis of pair-matched cluster randomized trials, as well as for observational studies in which treatment decisions may depend on characteristics of the entire sample. PMID:25097298
Introductory Statistics Students' Conceptual Understanding of Study Design and Conclusions
NASA Astrophysics Data System (ADS)
Fry, Elizabeth Brondos
Recommended learning goals for students in introductory statistics courses include the ability to recognize and explain the key role of randomness in designing studies and in drawing conclusions from those studies involving generalizations to a population or causal claims (GAISE College Report ASA Revision Committee, 2016). The purpose of this study was to explore introductory statistics students' understanding of the distinct roles that random sampling and random assignment play in study design and the conclusions that can be made from each. A study design unit lasting two and a half weeks was designed and implemented in four sections of an undergraduate introductory statistics course based on modeling and simulation. The research question that this study attempted to answer is: How does introductory statistics students' conceptual understanding of study design and conclusions (in particular, unbiased estimation and establishing causation) change after participating in a learning intervention designed to promote conceptual change in these areas? In order to answer this research question, a forced-choice assessment called the Inferences from Design Assessment (IDEA) was developed as a pretest and posttest, along with two open-ended assignments, a group quiz and a lab assignment. Quantitative analysis of IDEA results and qualitative analysis of the group quiz and lab assignment revealed that overall, students' mastery of study design concepts significantly increased after the unit, and the great majority of students successfully made the appropriate connections between random sampling and generalization, and between random assignment and causal claims. However, a small, but noticeable portion of students continued to demonstrate misunderstandings, such as confusion between random sampling and random assignment.
Stark, James R.; Fallon, J.D.; Fong, A.L.; Goldstein, R.M.; Hanson, P.E.; Kroening, S.E.; Lee, K.E.
1999-01-01
This report describes the design, site selection, and implementation of the study. Methods used to collect, process, and analyze samples; characterize sites; and assess habitat are described. A comprehensive list of sample sites is provided. Sample analyses for water-quality studies included chlorophyll a, major inorganic constituents, nutrients, trace elements, tritium, radon, environmental isotopes, organic carbon, pesticides, volatile organic compounds, and other synthetic and naturally occurring organic compounds. Aquatic-biological samples included fish, benthic macroinvertebrates, and algal enumeration and identification, as well as synthetic organic compounds and trace elements in fish tissue.
Sample size calculations for stepped wedge and cluster randomised trials: a unified approach
Hemming, Karla; Taljaard, Monica
2016-01-01
Objectives To clarify and illustrate sample size calculations for the cross-sectional stepped wedge cluster randomized trial (SW-CRT) and to present a simple approach for comparing the efficiencies of competing designs within a unified framework. Study Design and Setting We summarize design effects for the SW-CRT, the parallel cluster randomized trial (CRT), and the parallel cluster randomized trial with before and after observations (CRT-BA), assuming cross-sectional samples are selected over time. We present new formulas that enable trialists to determine the required cluster size for a given number of clusters. We illustrate by example how to implement the presented design effects and give practical guidance on the design of stepped wedge studies. Results For a fixed total cluster size, the choice of study design that provides the greatest power depends on the intracluster correlation coefficient (ICC) and the cluster size. When the ICC is small, the CRT tends to be more efficient; when the ICC is large, the SW-CRT tends to be more efficient and can serve as an alternative design when the CRT is an infeasible design. Conclusion Our unified approach allows trialists to easily compare the efficiencies of three competing designs to inform the decision about the most efficient design in a given scenario. PMID:26344808
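For the parallel CRT, the standard design effect can be inverted to give the cluster size required for a fixed number of clusters, in the spirit of the formulas this paper presents (the stepped wedge versions are more involved and are not reproduced here). A minimal sketch:

    def deff_crt(m, icc):
        # Standard design effect for a parallel cluster randomized trial.
        return 1 + (m - 1) * icc

    def cluster_size_for_k_clusters(n_srs, k, icc):
        # Cluster size m so that k clusters match an individually randomized
        # trial needing n_srs subjects; None if k clusters can never suffice.
        denom = k - n_srs * icc
        if denom <= 0:
            return None
        return n_srs * (1 - icc) / denom

    # n_srs = 400, ICC = 0.05: 30 clusters need 38 subjects each (1140 total).
    print(cluster_size_for_k_clusters(400, 30, 0.05))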
NASA Astrophysics Data System (ADS)
Carta, R.; Filippetto, D.; Lavagna, M.; Mailland, F.; Falkner, P.; Larranaga, J.
2015-12-01
The paper provides recent updates on the ESA study Sample Canister Capture Mechanism Design and Breadboard, developed under the Mars Robotic Exploration Preparation (MREP) program. The study is part of a set of feasibility studies aimed at identifying, analysing and developing technology concepts enabling the future international Mars Sample Return (MSR) mission. MSR is a challenging mission with the purpose of sending a lander to Mars, acquiring samples from its surface/subsurface, and bringing them back to Earth for further, more in-depth analyses. In particular, the technology that is the object of the study is the Capture Mechanism that, mounted on the Orbiter, is in charge of capturing and securing the Sample Canister, or Orbiting Sample, which accommodates the Martian soil samples previously delivered into Martian orbit by the Mars Ascent Vehicle. An elegant breadboard of such a device was implemented and qualified under an ESA contract primed by OHB-CGS S.p.A. and supported by Politecnico di Milano, Department of Aerospace Science and Technology: in particular, functional tests were conducted at PoliMi-DAST, and thermal and mechanical test campaigns took place at the Serms s.r.l. facility. The effectiveness of the breadboard design was demonstrated, and the obtained results, together with the design challenges, issues and adopted solutions, are critically presented in the paper. The breadboard was also tested on a parabolic flight to raise its Technology Readiness Level to 6; the microgravity experiment design, adopted solutions and results are likewise presented in the paper.
Todd Trench, Elaine C.
2004-01-01
A time-series analysis approach developed by the U.S. Geological Survey was used to analyze trends in total phosphorus and evaluate optimal sampling designs for future trend detection, using long-term data for two water-quality monitoring stations on the Quinebaug River in eastern Connecticut. Trend-analysis results for selected periods of record during 1971-2001 indicate that concentrations of total phosphorus in the Quinebaug River have varied over time, but have decreased significantly since the 1970s and 1980s. Total phosphorus concentrations at both stations increased in the late 1990s and early 2000s, but were still substantially lower than historical levels. Drainage areas for both stations are primarily forested, but water quality at both stations is affected by point discharges from municipal wastewater-treatment facilities. Various designs with sampling frequencies ranging from 4 to 11 samples per year were compared to the trend-detection power of the monthly (12-sample) design to determine the most efficient configuration of months to sample for a given annual sampling frequency. Results from this evaluation indicate that the current (2004) 8-sample schedule for the two Quinebaug stations, with monthly sampling from May to September and bimonthly sampling for the remainder of the year, is not the most efficient 8-sample design for future detection of trends in total phosphorus. Optimal sampling schedules for the two stations differ, but in both cases, trend-detection power generally is greater among 8-sample designs that include monthly sampling in fall and winter. Sampling designs with fewer than 8 samples per year generally provide a low level of probability for detection of trends in total phosphorus. Managers may determine an acceptable level of probability for trend detection within the context of the multiple objectives of the state's water-quality management program and the scientific understanding of the watersheds in question. Managers may identify a threshold of probability for trend detection that is high enough to justify the agency's investment in the water-quality sampling program. Results from an analysis of optimal sampling designs can provide an important component of information for the decision-making process in which sampling schedules are periodically reviewed and revised. Results from the study described in this report and previous studies indicate that optimal sampling schedules for trend detection may differ substantially for different stations and constituents. A more comprehensive statewide evaluation of sampling schedules for key stations and constituents could provide useful information for any redesign of the schedule for water-quality monitoring in the Quinebaug River Basin and elsewhere in the state.
Sampling design trade-offs in occupancy studies with imperfect detection: examples and software
Bailey, L.L.; Hines, J.E.; Nichols, J.D.
2007-01-01
Researchers have used occupancy, or probability of occupancy, as a response or state variable in a variety of studies (e.g., habitat modeling), and occupancy is increasingly favored by numerous state, federal, and international agencies engaged in monitoring programs. Recent advances in estimation methods have emphasized that reliable inferences can be made from these types of studies if detection and occupancy probabilities are simultaneously estimated. The need for temporal replication at sampled sites to estimate detection probability creates a trade-off between spatial replication (number of sample sites distributed within the area of interest/inference) and temporal replication (number of repeated surveys at each site). Here, we discuss a suite of questions commonly encountered during the design phase of occupancy studies, and we describe software (program GENPRES) developed to allow investigators to easily explore design trade-offs focused on particularities of their study system and sampling limitations. We illustrate the utility of program GENPRES using an amphibian example from Greater Yellowstone National Park, USA.
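The spatial-temporal trade-off at the heart of these designs is driven by the cumulative detection probability p* = 1 - (1 - p)^K at an occupied site. A minimal illustration of the arithmetic (not program GENPRES itself):

    def cumulative_detection(p, k):
        # Probability of at least one detection in k surveys of an occupied
        # site, given per-survey detection probability p.
        return 1 - (1 - p) ** k

    # Fixed budget of 300 surveys: many sites with few visits, or fewer
    # sites with more visits?
    for sites, k in [(100, 3), (60, 5)]:
        print(sites, "sites x", k, "visits:", round(cumulative_detection(0.4, k), 3))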
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar
2016-02-01
The design of surface water quality sampling locations is a crucial decision-making process for rationalization of a monitoring network. The quantity, quality, and types of available dataset (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available dataset. In this paper, an attempt has been made to evaluate the performance of these techniques by accounting for the effect of seasonal variation, under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information may be made available through application of geospatial techniques. A case study of the Kali River, Western Uttar Pradesh, India, is selected for the analysis. The monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were utilized as inputs for FA/PCA. The optimum numbers of sampling locations designed by the modified Sanders approach for the monsoon and non-monsoon seasons are eight and seven, while those for FA/PCA are eleven and nine, respectively. Little variation in the number and locations of the designed sampling sites was obtained by the two techniques, which shows the stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to river basin characteristics and land use of the study area. Both methods are equally efficient; however, the modified Sanders approach outperforms FA/PCA when limited water quality data but extensive watershed information are available. The available water quality dataset is limited, and the FA/PCA-based approach fails to identify monitoring locations with higher variation, as these multivariate statistical approaches are data-driven. The priority/hierarchy and number of sampling sites designed by the modified Sanders approach are well justified by the land use practices and observed river basin characteristics of the study area.
Internal pilots for a class of linear mixed models with Gaussian and compound symmetric data
Gurka, Matthew J.; Coffey, Christopher S.; Muller, Keith E.
2015-01-01
An internal pilot design uses an interim sample size analysis, without interim data analysis, to adjust the final number of observations. The approach helps to choose a sample size sufficiently large (to achieve the statistical power desired), but not too large (which would waste money and time). We report on recent research in cerebral vascular tortuosity (curvature in three dimensions), which would benefit greatly from internal pilots due to uncertainty in the parameters of the covariance matrix used for study planning. Unfortunately, observations correlated across the four regions of the brain and small sample sizes preclude using existing methods. However, as in a wide range of medical imaging studies, tortuosity data have no missing or mistimed data, a factorial within-subject design, the same between-subject design for all responses, and a Gaussian distribution with compound symmetry. For such restricted models, we extend exact, small-sample univariate methods for internal pilots to linear mixed models with any between-subject design (not just two groups). Planning a new tortuosity study illustrates how the new methods help to avoid sample sizes that are too small or too large while still controlling the type I error rate. PMID:17318914
Li, Y; Chappell, A; Nyamdavaa, B; Yu, H; Davaasuren, D; Zoljargal, K
2015-03-01
The (137)Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many (137)Cs measurements are required and hence estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of (137)Cs-derived net soil redistribution to apply their often limited resources across scales of variation (field, catchment, region etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954-2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches to cut across scales of variation, we extrapolate our estimate of net soil redistribution across the region, show that for the same resources, estimates from many fields could have been provided and would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate (137)Cs-derived net soil redistribution across scales of variation. Copyright © 2014 Elsevier Ltd. All rights reserved.
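Design-based estimation from a stratified random sample, the backbone of the hybrid design described above, takes only a few lines; the stratum weights below are assumed to be known areal shares, and the finite-population correction is ignored. A schematic fragment:

    import numpy as np

    def stratified_estimate(strata):
        # strata: list of (weight, samples) pairs with weights summing to 1.
        mean = sum(w * np.mean(y) for w, y in strata)
        var = sum(w**2 * np.var(y, ddof=1) / len(y) for w, y in strata)
        return mean, var  # design-based mean and its sampling variance

    # Two strata covering 70% and 30% of the area, three composites each.
    print(stratified_estimate([(0.7, [12.1, 9.8, 11.4]),
                               (0.3, [25.0, 27.3, 24.1])]))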
Chenel, Marylore; Bouzom, François; Aarons, Leon; Ogungbenro, Kayode
2008-12-01
To determine the optimal sampling time design of a drug-drug interaction (DDI) study for the estimation of apparent clearances (CL/F) of two co-administered drugs (SX, a phase I compound, potentially a CYP3A4 inhibitor, and MDZ, a reference CYP3A4 substrate) without any in vivo data using physiologically based pharmacokinetic (PBPK) predictions, population PK modelling and multiresponse optimal design. PBPK models were developed with AcslXtreme using only in vitro data to simulate PK profiles of both drugs when they were co-administered. Then, using simulated data, population PK models were developed with NONMEM and optimal sampling times were determined by optimizing the determinant of the population Fisher information matrix with PopDes using either two uniresponse designs (UD) or a multiresponse design (MD) with joint sampling times for both drugs. Finally, the D-optimal sampling time designs were evaluated by simulation and re-estimation with NONMEM by computing the relative root mean squared error (RMSE) and empirical relative standard errors (RSE) of CL/F. There were four and five optimal sampling times (=nine different sampling times) in the UDs for SX and MDZ, respectively, whereas there were only five sampling times in the MD. Whatever design and compound, CL/F was well estimated (RSE < 20% for MDZ and <25% for SX) and expected RSEs from PopDes were in the same range as empirical RSEs. Moreover, there was no bias in CL/F estimation. Since MD required only five sampling times compared to the two UDs, D-optimal sampling times of the MD were included into a full empirical design for the proposed clinical trial. A joint paper compares the designs with real data. This global approach including PBPK simulations, population PK modelling and multiresponse optimal design allowed, without any in vivo data, the design of a clinical trial, using sparse sampling, capable of estimating CL/F of the CYP3A4 substrate and potential inhibitor when co-administered together.
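A single-subject toy version of the D-optimality step conveys the idea: build the Fisher information matrix from model sensitivities at candidate sampling times and pick the subset that maximizes its determinant. The one-compartment model and parameter values below are illustrative stand-ins for the population/PBPK machinery (NONMEM, PopDes) actually used.

    import itertools
    import numpy as np

    def conc(theta, t, dose=100.0, ka=1.0):
        # One-compartment oral model; theta = (CL, V), assumed values.
        cl, v = theta
        ke = cl / v
        return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    def fim(theta, times, sigma=0.1, eps=1e-4):
        # Fisher information from central-difference sensitivities.
        m = np.zeros((2, 2))
        for t in times:
            g = np.empty(2)
            for j in range(2):
                up = np.array(theta, dtype=float); up[j] += eps
                dn = np.array(theta, dtype=float); dn[j] -= eps
                g[j] = (conc(up, t) - conc(dn, t)) / (2 * eps)
            m += np.outer(g, g) / sigma**2
        return m

    theta = (5.0, 50.0)  # assumed CL (L/h) and V (L)
    candidates = [0.25, 0.5, 1, 2, 4, 8, 12, 24]  # hours post-dose
    best = max(itertools.combinations(candidates, 3),
               key=lambda ts: np.linalg.det(fim(theta, ts)))
    print("D-optimal 3-point schedule (h):", best)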
Design-based stereology: Planning, volumetry and sampling are crucial steps for a successful study.
Tschanz, Stefan; Schneider, Jan Philipp; Knudsen, Lars
2014-01-01
Quantitative data obtained by means of design-based stereology can add valuable information to studies performed on a diversity of organs, in particular when correlated to functional/physiological and biochemical data. Design-based stereology is based on a sound statistical background and can be used to generate accurate data which are in line with principles of good laboratory practice. In addition, by adjusting the study design an appropriate precision can be achieved to find relevant differences between groups. For the success of the stereological assessment detailed planning is necessary. In this review we focus on common pitfalls encountered during stereological assessment. An exemplary workflow is included, and based on authentic examples, we illustrate a number of sampling principles which can be implemented to obtain properly sampled tissue blocks for various purposes. Copyright © 2013 Elsevier GmbH. All rights reserved.
Mi, Michael Y; Betensky, Rebecca A
2013-04-01
Currently, a growing placebo response rate has been observed in clinical trials for antidepressant drugs, a phenomenon that has made it increasingly difficult to demonstrate efficacy. The sequential parallel comparison design (SPCD) is a clinical trial design that was proposed to address this issue. The SPCD theoretically has the potential to reduce the sample-size requirement for a clinical trial and to simultaneously enrich the study population to be less responsive to the placebo. Because the basic SPCD already reduces the placebo response by removing placebo responders between the first and second phases of a trial, the purpose of this study was to examine whether we can further improve the efficiency of the basic SPCD and whether we can do so when the projected underlying drug and placebo response rates differ considerably from the actual ones. Three adaptive designs that used interim analyses to readjust the length of study duration for individual patients were tested to reduce the sample-size requirement or increase the statistical power of the SPCD. Various simulations of clinical trials using the SPCD with interim analyses were conducted to test these designs through calculations of empirical power. From the simulations, we found that the adaptive designs can recover unnecessary resources spent in the traditional SPCD trial format with overestimated initial sample sizes and provide moderate gains in power. Under the first design, results showed up to a 25% reduction in person-days, with most power losses below 5%. In the second design, results showed up to an 8% reduction in person-days with negligible loss of power. In the third design using sample-size re-estimation, up to 25% power was recovered from underestimated sample-size scenarios. Given the numerous possible test parameters that could have been chosen for the simulations, the study's results are limited to situations described by the parameters that were used and may not generalize to all possible scenarios. Furthermore, dropout of patients is not considered in this study. It is possible to make an already complex design such as the SPCD adaptive, and thus more efficient, potentially overcoming the problem of placebo response at lower cost. Ultimately, such a design may expedite the approval of future effective treatments.
A field test of three LQAS designs to assess the prevalence of acute malnutrition.
Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary
2007-08-01
The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests showed GAM as <10% for the 33 x 6 design and GAM as ≥10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%; 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
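The decision rules behind an LQAS design reduce to binomial tail probabilities: sample n children and classify the area as at or above the GAM threshold when d or more are malnourished, with d chosen to bound both misclassification risks. The sketch below illustrates that logic only; it does not reproduce the exact 33 x 6, 67 x 3, or Sequential rules used in the study.

    from scipy import stats

    def lqas_rule(n, p_lo, p_hi, max_alpha=0.10, max_beta=0.10):
        # Find the smallest decision rule d meeting both risk bounds.
        for d in range(n + 1):
            alpha = stats.binom.cdf(d - 1, n, p_hi)  # miss a truly high area
            beta = stats.binom.sf(d - 1, n, p_lo)    # flag a truly low area
            if alpha <= max_alpha and beta <= max_beta:
                return d, alpha, beta
        return None  # n is too small to meet both bounds

    # e.g. n = 198 (33 clusters x 6 children), 5% vs 10% GAM thresholds.
    print(lqas_rule(198, 0.05, 0.10, max_alpha=0.15, max_beta=0.15))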
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
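The randomization rate that minimizes the variance of the test statistic has a familiar closed form in the simplest two-arm mean-comparison setting, the Neyman allocation; the sketch below shows only that static case, whereas the paper's Bayesian rule is updated sequentially as data accrue.

    def optimal_allocation(sd1, sd2):
        # Fraction of patients assigned to arm 1 minimizing
        # Var = sd1^2/n1 + sd2^2/n2 for a fixed total n1 + n2.
        return sd1 / (sd1 + sd2)

    # Arm 1 twice as variable as arm 2: randomize 2/3 of patients to arm 1.
    print(optimal_allocation(2.0, 1.0))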
Stemflow estimation in a redwood forest using model-based stratified random sampling
Jack Lewis
2003-01-01
Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
Six Criteria for Survey Sample Design Evaluation.
ERIC Educational Resources Information Center
Wang, Lin; Fan, Xitao
The popularity of the sample survey in educational research makes it necessary for consumers to tell a good study from a poor one. Several sources were identified that gave advice on how to evaluate a sample design. The sources are either limited or too extensive to use in a practical sense. The purpose of this paper is to recommend six important…
Latin American Study of Nutrition and Health (ELANS): rationale and study design.
Fisberg, M; Kovalskys, I; Gómez, G; Rigotti, A; Cortés, L Y; Herrera-Cuenca, M; Yépez, M C; Pareja, R G; Guajardo, V; Zimberg, I Z; Chiavegatto Filho, A D P; Pratt, M; Koletzko, B; Tucker, K L
2016-01-30
Obesity is growing at an alarming rate in Latin America. Lifestyle behaviours such as physical activity and dietary intake have been largely associated with obesity in many countries; however, studies that combine nutrition and physical activity assessment in representative samples of Latin American countries are lacking. The aim of this study is to present the design and rationale of the Latin American Study of Nutrition and Health/Estudio Latinoamericano de Nutrición y Salud (ELANS), with a particular focus on its quality control procedures and recruitment processes. The ELANS is a multicenter cross-sectional nutrition and health surveillance study of a nationally representative sample of urban populations from eight Latin American countries (Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, Perú and Venezuela). A standard study protocol was designed to evaluate the nutritional intakes, physical activity levels, and anthropometric measurements of 9000 enrolled participants. The study was based on a complex, multistage sample design and the sample was stratified by gender, age (15 to 65 years old) and socioeconomic level. A small-scale pilot study was performed in each country to test the procedures and tools. This study will provide valuable information and a unique dataset regarding Latin America that will enable cross-country comparisons of nutritional statuses that focus on energy and macro- and micronutrient intakes, food patterns, and energy expenditure. ClinicalTrials.gov: NCT02226627.
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical for developing robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but these recommendations are rarely followed, and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12 dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
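The "20 samples to measure Ts within 10% of the mean with 90% confidence" result corresponds to the standard normal-approximation sample size calculation; a sketch with a hypothetical coefficient of variation, assuming samples spaced far enough apart to be independent:

```python
import math
from scipy.stats import norm

def n_for_relative_error(cv, rel_err=0.10, conf=0.90):
    # samples needed for the sample mean to fall within rel_err of the
    # true mean with probability conf, given coefficient of variation cv
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil((z * cv / rel_err) ** 2)

print(n_for_relative_error(cv=0.27))  # -> 20 with this assumed CV
```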
Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello
2012-01-01
Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date, the statistical underpinnings of Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
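A minimal sketch of the three-category classification probabilities that underlie MC-LQAS operating characteristics, with hypothetical cutoffs for n = 15 (curtailment and the average-sample-number derivation are not reproduced):

```python
from scipy.stats import binom

def mc_lqas_probs(p, n=15, d1=1, d2=7):
    # P(classify as low / medium / high) when X ~ Binomial(n, p) and the
    # rule is: low if X <= d1, medium if d1 < X <= d2, high if X > d2
    low = binom.cdf(d1, n, p)
    high = 1 - binom.cdf(d2, n, p)
    return low, 1 - low - high, high

for p in (0.05, 0.30, 0.60):
    print(p, [round(q, 2) for q in mc_lqas_probs(p)])
```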
Study design requirements for RNA sequencing-based breast cancer diagnostics.
Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias
2016-02-01
Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
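The subsampling procedure generalizes readily: refit the classifier at increasing training sizes and watch held-out accuracy plateau. A sketch on synthetic stand-in data (make_classification is a placeholder for an expression matrix, and the training sizes echo the N values discussed above):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for RNA-seq profiles with a binary subtype label
X, y = make_classification(n_samples=1200, n_features=200,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=400, random_state=0)

for n in (50, 100, 200, 350, 750):
    clf = LogisticRegression(max_iter=2000).fit(X_tr[:n], y_tr[:n])
    print(n, round(clf.score(X_te, y_te), 3))  # accuracy vs. training size
```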
Precision, time, and cost: a comparison of three sampling designs in an emergency setting.
Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles
2008-05-02
The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 x 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 x 30 cluster survey with two alternative sampling designs: a 33 x 6 cluster design (33 clusters, 6 observations per cluster) and a 67 x 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 x 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 x 6 and 67 x 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 x 6 and 67 x 3 designs provide wider confidence intervals than the 30 x 30 design for child anthropometric indicators, the 33 x 6 and 67 x 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 x 30 design does not. For the household-level indicators tested in this study, the 67 x 3 design provides the most precise results. However, our results show that neither the 33 x 6 nor the 67 x 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 x 6 and 67 x 3 designs required substantially less time and cost than that required for the 30 x 30 design. The findings of this study suggest the 33 x 6 and 67 x 3 designs can provide useful time- and resource-saving alternatives to the 30 x 30 method of data collection in emergency settings.
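A back-of-the-envelope version of the precision comparison: approximate 95% CI half-widths for a prevalence under each design, using the design effect 1 + (m - 1) x ICC. The prevalence and ICC are hypothetical inputs, not values from the Darfur data.

```python
import math

def ci_half_width(p, clusters, m, icc, z=1.96):
    # cluster-survey CI half-width via the design effect
    deff = 1 + (m - 1) * icc
    return z * math.sqrt(p * (1 - p) * deff / (clusters * m))

for label, c, m in (("30 x 30", 30, 30), ("33 x 6", 33, 6), ("67 x 3", 67, 3)):
    print(label, round(ci_half_width(0.07, c, m, icc=0.02), 3))
```

With these inputs the 30 x 30 design still yields the narrowest interval despite its larger design effect, because its total sample is several times larger.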
Sample environment for in situ synchrotron corrosion studies of materials in extreme environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elbakhshwan, Mohamed S.; Gill, Simerjeet K.; Motta, Arthur T.
A new in situ sample environment has been designed and developed to study the interfacial interactions of nuclear cladding alloys with high temperature steam. The sample environment is particularly optimized for synchrotron X-ray diffraction (XRD) studies for in situ structural analysis. The sample environment is highly corrosion resistant and can be readily adapted for steam environments. The in situ sample environment design complies with G2 ASTM standards for studying corrosion in zirconium and its alloys and offers remote temperature and pressure monitoring during the in situ data collection. The use of the in situ sample environment is exemplified by monitoring the oxidation of metallic zirconium during exposure to steam at 350°C. Finally, the in situ sample environment provides a powerful tool for fundamental understanding of corrosion mechanisms by elucidating the substoichiometric oxide phases formed during early stages of corrosion, which can provide a better understanding of the oxidation process.
ERIC Educational Resources Information Center
Zaniboni, Sara; Fraccaroli, Franco; Truxillo, Donald M.; Bertolino, Marilena; Bauer, Talya N.
2011-01-01
Purpose: The purpose of this study is to validate, in an Italian sample, a multidimensional training motivation measure (T-VIES-it) based on expectancy (VIE) theory, and to examine the nomological network surrounding the construct. Design/methodology/approach: Using a cross-sectional study design, 258 public sector employees in Northeast Italy…
WV R-EMAP STUDY: MULTIPLE-OBJECTIVE SAMPLING DESIGN FRAMEWORK
A multi-objective sampling design has been implemented through Regional Monitoring and Assessment Program (R-EMAP) support of a cooperative agreement with the state of West Virginia. Goals of the project include: 1) development and testing of a temperature-adjusted fish IBI for t...
Knopman, Debra S.; Voss, Clifford I.
1989-01-01
Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
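The noninferior set mentioned at the end can be found by direct enumeration when the candidate designs are few. A generic sketch with made-up objective values, where discrimination information and parameter information are maximized and cost is minimized:

```python
def noninferior(designs):
    # keep designs not dominated on (discrimination, information, cost)
    def dominates(a, b):
        return (a[0] >= b[0] and a[1] >= b[1] and a[2] <= b[2]
                and (a[0] > b[0] or a[1] > b[1] or a[2] < b[2]))
    return [d for d in designs
            if not any(dominates(e, d) for e in designs if e is not d)]

designs = [(3.1, 0.8, 10), (2.5, 0.9, 8), (3.0, 0.7, 12), (2.4, 0.6, 9)]
print(noninferior(designs))  # -> [(3.1, 0.8, 10), (2.5, 0.9, 8)]
```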
(PRESENTED NAQC SAN FRANCISCO, CA) COARSE PM METHODS STUDY: STUDY DESIGN AND RESULTS
Comprehensive field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 in ambient air. Five separate sampling approaches were evaluated at each of three sampling sites. As the primary basis of comparison, a discrete ...
Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.
Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils
2017-09-15
A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association between antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period in which the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling procedure was reproducible, with results comparable to the collected sample. However, the sampling procedure favoured sampling of large farms. Furthermore, both under-sampled and over-sampled areas were found using scan statistics. In conclusion, sampling conducted at abattoirs can provide a spatially representative sample. Hence, it is a possible cost-effective alternative to simple random sampling. However, it is important to assess the properties of the resulting sample so that any potential selection bias can be addressed when reporting the findings. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Naesset, Erik; Gobakken, Terje; Bollandsas, Ole Martin; Gregoire, Timothy G.; Nelson, Ross; Stahl, Goeran
2013-01-01
Airborne scanning LiDAR (Light Detection and Ranging) has emerged as a promising tool to provide auxiliary data for sample surveys aiming at estimation of above-ground tree biomass (AGB), with potential applications in REDD forest monitoring. For larger geographical regions such as counties, states or nations, it is not feasible to collect airborne LiDAR data continuously ("wall-to-wall") over the entire area of interest. Two-stage cluster survey designs have therefore been demonstrated by which LiDAR data are collected along selected individual flight-lines treated as clusters and with ground plots sampled along these LiDAR swaths. Recently, analytical AGB estimators and associated variance estimators that quantify the sampling variability have been proposed. Empirical studies employing these estimators have shown a seemingly equal or even larger uncertainty of the AGB estimates obtained with extensive use of LiDAR data to support the estimation as compared to pure field-based estimates employing estimators appropriate under simple random sampling (SRS). However, comparison of uncertainty estimates under SRS and sophisticated two-stage designs is complicated by large differences in the designs and assumptions. In this study, probability-based principles to estimation and inference were followed. We assumed designs of a field sample and a LiDAR-assisted survey of Hedmark County (HC) (27,390 km2), Norway, considered to be more comparable than those assumed in previous studies. The field sample consisted of 659 systematically distributed National Forest Inventory (NFI) plots and the airborne scanning LiDAR data were collected along 53 parallel flight-lines flown over the NFI plots. We compared AGB estimates based on the field survey only assuming SRS against corresponding estimates assuming two-phase (double) sampling with LiDAR and employing model-assisted estimators. We also compared AGB estimates based on the field survey only assuming two-stage sampling (the NFI plots being grouped in clusters) against corresponding estimates assuming two-stage sampling with the LiDAR and employing model-assisted estimators. For each of the two comparisons, the standard errors of the AGB estimates were consistently lower for the LiDAR-assisted designs. The overall reduction of the standard errors in the LiDAR-assisted estimation was around 40-60% compared to the pure field survey. We conclude that the previously proposed two-stage model-assisted estimators are inappropriate for surveys with unequal lengths of the LiDAR flight-lines and new estimators are needed. Some options for design of LiDAR-assisted sample surveys under REDD are also discussed, which capitalize on the flexibility offered when the field survey is designed as an integrated part of the overall survey design as opposed to previous LiDAR-assisted sample surveys in the boreal and temperate zones which have been restricted by the current design of an existing NFI.
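For orientation, a stripped-down model-assisted (regression) estimator of mean AGB with a single LiDAR covariate and synthetic data; the two-stage and two-phase variance estimators discussed in the abstract are considerably more involved than this sketch.

```python
import numpy as np

def regression_estimator(y, x, x_pop_mean):
    # fit y ~ x on the field plots, predict at the population mean of the
    # LiDAR covariate; the residual-mean correction is ~0 under OLS
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    estimate = intercept + slope * x_pop_mean + resid.mean()
    se = resid.std(ddof=2) / np.sqrt(len(y))  # ignores design subtleties
    return estimate, se

rng = np.random.default_rng(3)
x = rng.uniform(5, 30, 200)           # e.g., mean LiDAR canopy height (m)
y = 4.0 * x + rng.normal(0, 15, 200)  # synthetic AGB (t/ha)
print(regression_estimator(y, x, x_pop_mean=18.0))
```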
Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shine, E. P.; Poirier, M. R.
2013-10-29
Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition.
This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling directly from the large Tank Farm tanks is a difficult, if not unsolvable enterprise due to limited accessibility. However, the consistency and the adequacy of sampling and mixing at SRS could at least be studied under the controlled process conditions based on samples discussed by Ray and others [2012a] in Waste Form Qualification Report (WQR) Volume 2 and the transfers from Tanks 40H and 51H to the Sludge Receipt and Adjustment Tank (SRAT) within DWPF. It is important to realize that the need for sample representativeness becomes more stringent as the material gets closer to the melter, and the tanks within DWPF have been studied extensively to meet those needs.
al-Wahish, Amal; Armitage, D; al-Binni, U; Hill, B; Mills, R; Jalarvo, N; Santodonato, L; Herwig, K W; Mandrus, D
2015-09-01
A design for a sample cell system suitable for high temperature Quasi-Elastic Neutron Scattering (QENS) experiments is presented. The apparatus was developed at the Spallation Neutron Source at Oak Ridge National Laboratory, where it is currently in use. The design provides a special sample cell environment under controlled humid or dry gas flow over a wide range of temperatures up to 950 °C. Using such a cell, chemical, dynamical, and physical changes can be studied in situ under various operating conditions. While the cell, combined with a portable automated gas environment system, is especially useful for in situ studies of microscopic dynamics under operational conditions similar to those of solid oxide fuel cells, it can additionally be used to study a wide variety of materials, such as high temperature proton conductors. The cell can also be used in many different neutron experiments when a suitable sample holder material is selected. The sample cell system has recently been used to reveal fast dynamic processes in quasi-elastic neutron scattering experiments that standard probes (such as electrochemical impedance spectroscopy) could not detect. In this work, we outline the design of the sample cell system and present results demonstrating its abilities in high temperature QENS experiments.
Shelton, Larry R.
1994-01-01
The U.S. Geological Survey's National Water-Quality Assessment program includes extensive data-collection efforts to assess the quality of the Nation's streams. These studies require analyses of stream samples for major ions, nutrients, sediments, and organic contaminants. For the information to be comparable among studies in different parts of the Nation, consistent procedures specifically designed to produce uncontaminated samples for trace analysis in the laboratory are critical. This field guide describes the standard procedures for collecting and processing samples for major ions, nutrients, organic contaminants, sediment, and field analyses of conductivity, pH, alkalinity, and dissolved oxygen. Samples are collected and processed using modified and newly designed equipment made of Teflon to avoid contamination, including nonmetallic samplers (D-77 and DH-81) and a Teflon sample splitter. Field solid-phase extraction procedures developed to process samples for organic constituent analyses produce an extracted sample with stabilized compounds for more accurate results. Improvements to standard operational procedures include the use of processing chambers and capsule filtering systems. A modified collecting and processing procedure for organic carbon is designed to avoid contamination from equipment cleaned with methanol. Quality assurance is maintained by strict collecting and processing procedures, replicate sampling, equipment blank samples, and a rigid cleaning procedure using detergent, hydrochloric acid, and methanol.
LAKE MICHIGAN MASS BALANCE STUDY UPDATE
A 2005 field design of tributary and open Lake Michigan sampling will be discussed for the first time at this Council meeting. The sample design is expected to aid in determining whether or not contaminant loads and open lake concentrations have decreased over the past 10 years s...
COMPASS Final Report: Near Earth Asteroids Rendezvous and Sample Earth Returns (NEARER)
NASA Technical Reports Server (NTRS)
Oleson, Steven R.; McGuire, Melissa L.
2009-01-01
In this study, the Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team completed a design for a multi-asteroid (Nereus and 1996 FG3) sample return capable spacecraft for the NASA In-Space Propulsion Office. The objective of the study was to support technology development and assess the relative benefits of different electric propulsion systems on asteroid sample return design. The design uses a single, heritage Orion solar array (SA) (approx. 6.5 kW at 1 AU) to power a single NASA Evolutionary Xenon Thruster (NEXT; a spare NEXT is carried) to propel a lander to two near Earth asteroids. After landing and gathering science samples, the Solar Electric Propulsion (SEP) vehicle spirals back to Earth where it drops off the first sample's return capsule and performs an Earth flyby to assist the craft in rendezvousing with a second asteroid, which is then sampled. The second sample is returned in a similar fashion. The vehicle, dubbed Near Earth Asteroids Rendezvous and Sample Earth Returns (NEARER), easily fits in an Atlas 401 launcher, and its cost estimates put the mission in the New Frontiers (NF) class.
Tan, Ziwen; Qin, Guoyou; Zhou, Haibo
2016-01-01
Outcome-dependent sampling (ODS) designs have been well recognized as a cost-effective way to enhance study efficiency in both the statistical literature and in biomedical and epidemiologic studies. A partially linear additive model (PLAM) is widely applied in real problems because it allows for a flexible specification of the dependence of the response on some covariates in a linear fashion and on other covariates in a nonlinear, non-parametric fashion. Motivated by an epidemiological study investigating the effect of prenatal polychlorinated biphenyl exposure on children's intelligence quotient (IQ) at age 7 years, we propose a PLAM in this article to investigate a more flexible non-parametric inference on the relationships among the response and covariates under the ODS scheme. We propose the estimation method and establish the asymptotic properties of the proposed estimator. Simulation studies are conducted to show the improved efficiency of the proposed ODS estimator for the PLAM compared with that from a traditional simple random sampling design with the same sample size. The data from the above-mentioned study are analyzed to illustrate the proposed method. PMID:27006375
Blinded sample size re-estimation in three-arm trials with 'gold standard' design.
Mütze, Tobias; Friede, Tim
2017-10-15
In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
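A minimal sketch of the blinded ('one-sample') variance re-estimation step for a pairwise comparison, plugged into a normal-approximation sample size formula. As the abstract notes, this lumped estimator absorbs the treatment effect and tends to overpower the trial; the Xing-Ganju estimator and the proposed inflation factor are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def blinded_reestimated_n(pooled, delta, alpha=0.025, power=0.80):
    s2 = np.var(pooled, ddof=1)  # variance computed ignoring arm labels
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return int(np.ceil(2 * s2 * (z / delta) ** 2))  # per group

rng = np.random.default_rng(5)
pilot = np.concatenate([rng.normal(0.0, 1.0, 40),   # internal pilot, arm A
                        rng.normal(0.5, 1.0, 40)])  # arm B, labels unseen
print(blinded_reestimated_n(pilot, delta=0.5))
```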
Design and validation of a wind tunnel system for odour sampling on liquid area sources.
Capelli, L; Sironi, S; Del Rosso, R; Céntola, P
2009-01-01
The aim of this study is to describe the methods adopted for the design and the experimental validation of a wind tunnel, a sampling system suitable for the collection of gaseous samples on passive area sources that makes it possible to simulate wind action on the surface to be monitored. The first step of the work was the study of the air velocity profiles. The second step of the work consisted in the validation of the sampling system. For this purpose, the odour concentration of some air samples collected by means of the wind tunnel was measured by dynamic olfactometry. The results of the air velocity measurements show that the wind tunnel design features enabled the achievement of a uniform and homogeneous air flow through the hood. Moreover, the laboratory tests showed a very good correspondence between the odour concentration values measured at the wind tunnel outlet and the odour concentration values predicted by the application of a specific volatilization model based on the Prandtl boundary layer theory. The agreement between experimental and theoretical trends demonstrates that the studied wind tunnel is a suitable sampling system for the simulation of specific odour emission rates from liquid area sources without outward flow.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Pals, Sherri L.; Beaty, Brenda L.; Posner, Samuel F.; Bull, Sheana S.
2009-01-01
Studies designed to evaluate HIV and STD prevention interventions often involve random assignment of groups such as neighborhoods or communities to study conditions (e.g., to intervention or control). Investigators who design group-randomized trials (GRTs) must take the expected intraclass correlation coefficient (ICC) into account in sample size…
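The ICC enters the sample size calculation through the design effect. A sketch of the resulting number of groups per condition under assumed values (normal approximation, equal group sizes; not the formulas of the paper itself):

```python
import math
from scipy.stats import norm

def groups_per_arm(delta_std, m, icc, alpha=0.05, power=0.80):
    # inflate the individually randomized n by DEFF = 1 + (m - 1) * ICC,
    # then convert to groups of size m
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = 2 * (z / delta_std) ** 2      # per arm if ICC were 0
    deff = 1 + (m - 1) * icc
    return math.ceil(n_ind * deff / m)

print(groups_per_arm(delta_std=0.3, m=100, icc=0.01))  # -> 4 groups per arm
```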
Mars Rover Sample Return mission study
NASA Technical Reports Server (NTRS)
Bourke, Roger D.
1989-01-01
The Mars Rover/Sample Return mission is examined as a precursor to a manned mission to Mars. The value of precursor missions is noted, using the Apollo lunar program as an example. The scientific objectives of the Mars Rover/Sample Return mission are listed and the basic mission plans are described. Consideration is given to the options for mission design, launch configurations, rover construction, and entry and lander design. Also, the potential for international cooperation on the Mars Rover/Sample Return mission is discussed.
Methodological quality of behavioural weight loss studies: a systematic review
Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.
2018-01-01
This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature of behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the use of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775
Baldo, Matías N; Angeli, Emmanuel; Gareis, Natalia C; Hunzicker, Gabriel A; Murguía, Marcelo C; Ortega, Hugo H; Hein, Gustavo J
2018-04-01
A relative bioavailability (RBA) study of two phenytoin (PHT) formulations was conducted in rabbits, in order to compare the results obtained from different matrices (plasma and blood from dried blood spot (DBS) sampling) and different experimental designs (classic and block). The method was developed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) in plasma and blood samples. The different sample preparation techniques, plasma protein precipitation and DBS, were validated according to international requirements. The analytical method was validated over the ranges 0.20-50.80 and 0.12-20.32 µg ml⁻¹, with r > 0.999, for plasma and blood, respectively. Accuracy and precision were within acceptance criteria for bioanalytical assay validation (<15 for bias and CV% and <20 at the limit of quantification (LOQ)). PHT showed long-term stability, both in plasma and blood, under refrigerated and room-temperature conditions. Haematocrit values were measured during the validation process and the RBA study. Finally, the pharmacokinetic parameters (Cmax, Tmax and AUC0-t) obtained from the RBA study were tested. Results were highly comparable across matrices and experimental designs. A matrix correlation higher than 0.975 and a ratio of PHT(blood) = 1.158 × PHT(plasma) were obtained. The results obtained herein show that the use of a classic experimental design and DBS sampling for animal pharmacokinetic studies should be encouraged, as they could help to prevent the use of a large number of animals and also animal euthanasia. Finally, the combination of DBS sampling with LC-MS/MS technology showed itself to be an excellent tool not only for therapeutic drug monitoring but also for RBA studies.
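The pharmacokinetic endpoints compared in the study are simple functions of the concentration-time profile; a generic sketch using the linear trapezoidal rule (the time grid and concentrations below are invented):

```python
import numpy as np

def pk_params(t, conc):
    # Cmax, Tmax, and AUC(0-t) from a concentration-time profile
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    i = int(np.argmax(conc))
    return conc[i], t[i], np.trapz(conc, t)

print(pk_params([0, 1, 2, 4, 8, 24], [0.0, 8.2, 12.5, 10.1, 6.3, 1.2]))
```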
Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G
2011-10-01
In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
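A simulation sketch of power for non-independent pup data, analyzed on litter means so that the dam is the experimental unit; the variance components and effect size are invented rather than taken from the Four Lab study.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)

def litter_power(n_ctrl=40, n_trt=60, effect=-0.4, litter_sd=0.5,
                 pup_sd=1.0, pups=8, n_sim=2000):
    # each litter mean = dam effect + average of within-litter pup errors
    sd = np.sqrt(litter_sd**2 + pup_sd**2 / pups)
    hits = 0
    for _ in range(n_sim):
        ctrl = rng.normal(0.0, sd, n_ctrl)
        trt = rng.normal(effect, sd, n_trt)
        hits += ttest_ind(ctrl, trt).pvalue < 0.05
    return hits / n_sim

print(litter_power())  # ~0.9 with these assumed values
```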
Mars Sample Return Architecture Assessment Study
NASA Astrophysics Data System (ADS)
Centuori, S.; Hermosín, P.; Martín, J.; De Zaiacomo, G.; Colin, S.; Godfrey, A.; Myles, J.; Johnson, H.; Sachdev, T.; Ahmed, R.
2018-04-01
This paper presents the results of the ESA-funded activity "Mars Sample Return Architecture Assessment Study", carried out by DEIMOS Space, Lockheed Martin UK Ampthill, and MDA Corporation, in which more than 500 mission design options were studied.
Optimizing methods and dodging pitfalls in microbiome research.
Kim, Dorothy; Hofstaedter, Casey E; Zhao, Chunyu; Mattei, Lisa; Tanes, Ceylan; Clarke, Erik; Lauder, Abigail; Sherrill-Mix, Scott; Chehoud, Christel; Kelsen, Judith; Conrad, Máire; Collman, Ronald G; Baldassano, Robert; Bushman, Frederic D; Bittinger, Kyle
2017-05-05
Research on the human microbiome has yielded numerous insights into health and disease, but also has resulted in a wealth of experimental artifacts. Here, we present suggestions for optimizing experimental design and avoiding known pitfalls, organized in the typical order in which studies are carried out. We first review best practices in experimental design and introduce common confounders such as age, diet, antibiotic use, pet ownership, longitudinal instability, and microbial sharing during cohousing in animal studies. Typically, samples will need to be stored, so we provide data on best practices for several sample types. We then discuss design and analysis of positive and negative controls, which should always be run with experimental samples. We introduce a convenient set of non-biological DNA sequences that can be useful as positive controls for high-volume analysis. Careful analysis of negative and positive controls is particularly important in studies of samples with low microbial biomass, where contamination can comprise most or all of a sample. Lastly, we summarize approaches to enhancing experimental robustness by careful control of multiple comparisons and to comparing discovery and validation cohorts. We hope the experimental tactics summarized here will help researchers in this exciting field advance their studies efficiently while avoiding errors.
Reproducibility of preclinical animal research improves with heterogeneity of study samples
Vogt, Lucile; Sena, Emily S.; Würbel, Hanno
2018-01-01
Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
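The coverage-probability comparison can be mimicked with a small simulation in which each laboratory adds its own offset to the true effect; all variance components here are invented. With a single laboratory the offset acts as a fixed bias and coverage collapses, while spreading the same total sample over more laboratories restores it.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(n_labs, n_per_lab, true_effect=0.5, between_sd=0.3,
             within_sd=1.0, n_sim=2000):
    hits = 0
    for _ in range(n_sim):
        offsets = rng.normal(0, between_sd, n_labs)   # lab-specific shifts
        obs = rng.normal(true_effect + np.repeat(offsets, n_per_lab), within_sd)
        est, se = obs.mean(), obs.std(ddof=1) / np.sqrt(obs.size)
        hits += abs(est - true_effect) < 1.96 * se    # 95% CI covers truth?
    return hits / n_sim

for labs in (1, 2, 4):           # same total n = 48, spread over more labs
    print(labs, coverage(labs, 48 // labs))
```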
Olives, Casey; Valadez, Joseph J; Pagano, Marcello
2014-03-01
Our objectives were to assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
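The bias from ignoring curtailment is easy to exhibit by simulation. The sketch below stops sampling as soon as the classification is determined (n = 60, decision rule 33, matching one of the field-tested designs) and shows that the naive proportion is biased; the unbiased estimators derived in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def curtailed_run(p, n=60, d=33):
    # stop once 'vaccinated' exceeds d (classification already 'pass')
    # or 'unvaccinated' reaches n - d (classification already 'fail')
    succ = fail = 0
    while succ <= d and fail < n - d:
        if rng.random() < p:
            succ += 1
        else:
            fail += 1
    return succ, succ + fail

def naive_bias(p=0.8, n_sim=20000):
    ests = [s / m for s, m in (curtailed_run(p) for _ in range(n_sim))]
    return np.mean(ests) - p   # non-zero: the naive estimator is biased

print(naive_bias())
```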
NASA Astrophysics Data System (ADS)
Chen, Daniel; Chen, Damian; Yen, Ray; Cheng, Mingjen; Lan, Andy; Ghaskadvi, Rajesh
2008-11-01
Identifying hotspots--structures that limit the lithography process window--becomes increasingly important as the industry relies heavily on RET to print sub-wavelength designs. KLA-Tencor's patented Process Window Qualification (PWQ) methodology has been used for this purpose in various fabs. The PWQ methodology has three key advantages: (a) PWQ Layout--to obtain the best sensitivity; (b) Design Based Binning--for pattern repeater analysis; (c) Intelligent sampling--for the best DOI sampling rate. This paper evaluates two different analysis strategies for SEM review sampling successfully deployed at Inotera Memories, Inc. We propose a new approach combining location repeaters and pattern repeaters. Based on a recent case study, the new sampling flow reduces data analysis and sampling time from 6 hours to 1.5 hours while maintaining the maximum DOI sample rate.
Stark, J.R.; Andrews, W.J.; Fallon, J.D.; Fong, A.L.; Goldstein, R.M.; Hanson, P.E.; Kroening, S.E.; Lee, K.E.
1996-01-01
Environmental stratification consists of dividing the study unit into subareas with homogeneous characteristics to assess natural and anthropogenic factors affecting water quality. The assessment of water quality in streams and in aquifers is based on the sampling design that compares water quality within homogeneous subareas defined by subbasins or aquifer boundaries. The study unit is stratified at four levels for the surface-water component: glacial deposit composition, surficial geology, general land use and land cover, and secondary land use. Ground-water studies emphasize shallow ground water where quality is most likely influenced by overlying land use and land cover. Stratification for ground-water sampling is superimposed on the distribution of shallow aquifers. For each aquifer and surface-water basin this stratification forms the basis for the proposed sampling design used in the Upper Mississippi River Basin National Water-Quality Assessment.
Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models
USDA-ARS?s Scientific Manuscript database
Cumulative nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. This study used an agroecosystems simulation model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2...
Sample size requirements for the design of reliability studies: precision consideration.
Shieh, Gwowen
2014-09-01
In multilevel modeling, the intraclass correlation coefficient based on the one-way random-effects model is routinely employed to measure the reliability or degree of resemblance among group members. To facilitate the advocated practice of reporting confidence intervals in future reliability studies, this article presents exact sample size procedures for precise interval estimation of the intraclass correlation coefficient under various allocation and cost structures. Although the suggested approaches do not admit explicit sample size formulas and require special algorithms for carrying out iterative computations, they are more accurate than the closed-form formulas constructed from large-sample approximations with respect to the expected width and assurance probability criteria. This investigation notes the deficiency of existing methods and expands the sample size methodology for the design of reliability studies in directions not previously discussed in the literature.
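A brute-force way to see the precision criterion at work: simulate the expected width of the classical exact F-based confidence interval for the ICC under a candidate balanced design. This simulation sketch stands in for the exact iterative procedures the article develops, and the group counts, group sizes and true ICC below are assumed values.

```python
import numpy as np
from scipy import stats

def expected_icc_ci_width(k, n, icc, alpha=0.05, n_sim=2000, seed=1):
    """Expected width of the exact F-based CI for the ICC under a
    balanced one-way random-effects model, estimated by simulation."""
    rng = np.random.default_rng(seed)
    sb2 = icc / (1 - icc)            # between-group variance (within = 1)
    df1, df2 = k - 1, k * (n - 1)
    f_hi = stats.f.ppf(1 - alpha / 2, df1, df2)
    f_lo = stats.f.ppf(alpha / 2, df1, df2)
    widths = np.empty(n_sim)
    for s in range(n_sim):
        y = rng.normal(0.0, np.sqrt(sb2), (k, 1)) + rng.normal(0.0, 1.0, (k, n))
        msb = n * y.mean(axis=1).var(ddof=1)     # between-group mean square
        msw = y.var(axis=1, ddof=1).mean()       # within-group mean square
        F = msb / msw
        lo = (F / f_hi - 1) / (F / f_hi + n - 1)
        hi = (F / f_lo - 1) / (F / f_lo + n - 1)
        widths[s] = hi - lo
    return widths.mean()

# Compare two candidate allocations with the same total of 120 ratings.
print(expected_icc_ci_width(k=30, n=4, icc=0.6))
print(expected_icc_ci_width(k=20, n=6, icc=0.6))
```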
Scrivani, Peter V; Erb, Hollis N
2013-01-01
High quality clinical research is essential for advancing knowledge in the areas of veterinary radiology and radiation oncology. Types of clinical research studies may include experimental studies, method-comparison studies, and patient-based studies. Experimental studies explore issues relative to pathophysiology, patient safety, and treatment efficacy. Method-comparison studies evaluate agreement between techniques or between observers. Patient-based studies investigate naturally acquired disease and focus on questions asked in clinical practice that relate to individuals or populations (e.g., risk, accuracy, or prognosis). Careful preplanning and study design are essential in order to achieve valid results. A key point to planning studies is ensuring that the design is tailored to the study objectives. Good design includes a comprehensive literature review, asking suitable questions, selecting the proper sample population, collecting the appropriate data, performing the correct statistical analyses, and drawing conclusions supported by the available evidence. Most study designs are classified by whether they are experimental or observational, longitudinal or cross-sectional, and prospective or retrospective. Additional features (e.g., controlled, randomized, or blinded) may be described that address bias. Two related challenging aspects of study design are defining an important research question and selecting an appropriate sample population. The sample population should represent the target population as much as possible. Furthermore, when comparing groups, it is important that the groups are as alike to each other as possible except for the variables of interest. Medical images are well suited for clinical research because imaging signs are categorical or numerical variables that might be predictors or outcomes of diseases or treatments. © 2013 Veterinary Radiology & Ultrasound.
ERIC Educational Resources Information Center
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Glenn, Beth A.; Bastani, Roshan; Maxwell, Annette E.
2013-01-01
Objective: Threats to external validity, including pretest sensitization and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomized trial designed to promote mammography use in a high-risk sample of women. Design: During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilizing two different designs (pretest-posttest control group design; posttest-only control group design). Results: The intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. Conclusion: These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity. PMID:23289517
The Study on Mental Health at Work: Design and sampling.
Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit
2017-08-01
The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. First, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Second, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4,500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used to compare the population, the gross sample and the respondents. In total, 4,511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
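The two-stage draw can be sketched in a few lines. The counts mirror those quoted above, but the probability-proportional-to-size selection at stage 1 and the equal allocation at stage 2 are simplifying assumptions, not a description of the register's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical register: 12,227 municipalities with employee counts.
n_muni = 12_227
emp = rng.integers(50, 50_000, size=n_muni)

# Stage 1: draw 206 municipalities; selection proportional to size is
# an assumption here, not a documented feature of the register draw.
p = emp / emp.sum()
stage1 = rng.choice(n_muni, size=206, replace=False, p=p)

# Stage 2: a fixed number of addresses per selected municipality,
# targeting roughly 13,590 addresses overall (about 66 each).
per_muni = round(13_590 / 206)
sample = {m: rng.choice(emp[m], size=min(per_muni, int(emp[m])), replace=False)
          for m in stage1}
print(sum(len(a) for a in sample.values()), "addresses drawn")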
Correction of sampling bias in a cross-sectional study of post-surgical complications.
Fluss, Ronen; Mandel, Micha; Freedman, Laurence S; Weiss, Inbal Salz; Zohar, Anat Ekka; Haklai, Ziona; Gordon, Ethel-Sherry; Simchen, Elisheva
2013-06-30
Cross-sectional designs are often used to monitor the proportion of infections and other post-surgical complications acquired in hospitals. However, conventional methods for estimating incidence proportions when applied to cross-sectional data may provide estimators that are highly biased, as cross-sectional designs tend to include a high proportion of patients with prolonged hospitalization. One common solution is to use sampling weights in the analysis, which adjust for the sampling bias inherent in a cross-sectional design. The current paper describes in detail a method to build weights for a national survey of post-surgical complications conducted in Israel. We use the weights to estimate the probability of surgical site infections following colon resection, and validate the results of the weighted analysis by comparing them with those obtained from a parallel study with a historically prospective design. Copyright © 2012 John Wiley & Sons, Ltd.
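The logic of such weights can be shown with a toy length-biased example: inclusion in a one-day snapshot is roughly proportional to length of stay, so weighting by its inverse recovers the incidence proportion. All numbers below are invented and are not from the Israeli survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented cohort: infected patients stay longer, which is what makes a
# one-day cross-sectional snapshot over-represent them.
n = 50_000
infected = rng.random(n) < 0.05
los = rng.gamma(shape=np.where(infected, 6.0, 2.0), scale=3.0)  # length of stay

# Snapshot inclusion probability proportional to length of stay.
pi = los / los.max()
sampled = rng.random(n) < pi

naive = infected[sampled].mean()
weighted = np.average(infected[sampled], weights=1.0 / pi[sampled])
print(f"true {infected.mean():.3f}  naive {naive:.3f}  weighted {weighted:.3f}")
```

The naive snapshot proportion overshoots the true incidence, while the inverse-probability-weighted estimate recovers it, which is the correction the survey weights implement.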
Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.
1991-01-01
Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.
Smith, Jennifer L.; Sturrock, Hugh J. W.; Assefa, Liya; Nikolay, Birgit; Njenga, Sammy M.; Kihara, Jimmy; Mwandawiro, Charles S.; Brooker, Simon J.
2015-01-01
Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation units in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8–10 years of age to assess STH but suggest that key consideration is given to evaluation unit size. PMID:25487730
Autocorrelation of location estimates and the analysis of radiotracking data
Otis, D.L.; White, Gary C.
1999-01-01
The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.
Converse, S.J.; Kendall, W.L.; Doherty, P.F.; Naughton, M.B.; Hines, J.E.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
For many animal populations, one or more life stages are not accessible to sampling, and therefore an unobservable state is created. For colonially-breeding populations, this unobservable state could represent the subset of adult breeders that have foregone breeding in a given year. This situation applies to many seabird populations, notably albatrosses, where skipped breeders are either absent from the colony, or are present but difficult to capture or correctly assign to breeding state. Kendall et al. have proposed design strategies for investigations of seabird demography where such temporary emigration occurs, suggesting the use of the robust design to permit the estimation of time-dependent parameters and to increase the precision of estimates from multi-state models. A traditional robust design, where animals are subject to capture multiple times in a sampling season, is feasible in many cases. However, due to concerns that multiple captures per season could cause undue disturbance to animals, Kendall et al. developed a less-invasive robust design (LIRD), where initial captures are followed by an assessment of the ratio of marked-to-unmarked birds in the population or sampled plot. This approach has recently been applied in the Northwestern Hawaiian Islands to populations of Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. In this paper, we outline the LIRD and its application to seabird population studies. We then describe an approach to determining optimal allocation of sampling effort in which we consider a non-robust design option (nRD), and variations of both the traditional robust design (RD), and the LIRD. Variations we considered included the number of secondary sampling occasions for the RD and the amount of total effort allocated to the marked-to-unmarked ratio assessment for the LIRD. We used simulations, informed by early data from the Hawaiian study, to address optimal study design for our example cases. We found that the LIRD performed as well or nearly as well as certain variations of the RD in terms of root mean square error, especially when relatively little of the total effort was allocated to the assessment of the marked-to-unmarked ratio versus to initial captures. For the RD, we found no clear benefit of using 2, 4, or 6 secondary sampling occasions per year, though this result will depend on the relative effort costs of captures versus recaptures and on the length of the study. We also found that field-readable bands, which may be affixed to birds in addition to standard metal bands, will be beneficial in longer-term studies of albatrosses in the Northwestern Hawaiian Islands. Field-readable bands reduce the effort cost of recapturing individuals, and in the long-term this cost reduction can offset the additional effort expended in affixing the bands. Finally, our approach to determining optimal study design can be generally applied by researchers, with little seed data, to design their studies at the outset.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. Where measurements are costly in labour or money, a choice has to be made between high sampling resolution at small scales with low spatial coverage of the study area, and lower small-scale resolution, which introduces local data uncertainties but gives better coverage of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies, and when the gathered field data are subsequently used for modelling, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence in a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. We built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. The high small-scale variability was then added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one to 50 random points per field and used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points obtain, on average, an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing sampling points per field, we averaged the measured abundance within each field to obtain a more representative field value. Doubling the samples per field strongly improved the model performance criteria (explained deviance 0.38, correlation coefficient 0.73); with 50 sampling points per field, the criteria reached 0.91 and 0.97, respectively. The relationship between the number of samples and the performance criteria can be described by a saturation curve, and beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design, the assessment of model results, and ecological inference.
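The reported saturation behaviour can be summarised with a simple curve fit. In the sketch below only the 1-, 2- and 50-sample performance values come from the abstract; the intermediate points are invented placeholders, so the fitted constants are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Samples per field vs explained deviance. The 1-, 2- and 50-sample values
# come from the abstract; the 5-, 10- and 25-sample values are placeholders
# invented purely to make this fitting example runnable.
s = np.array([1, 2, 5, 10, 25, 50])
d = np.array([0.20, 0.38, 0.65, 0.78, 0.87, 0.91])

def saturation(s, dmax, h):
    """Saturation curve: performance rises quickly, then flattens."""
    return dmax * s / (s + h)

(dmax, h), _ = curve_fit(saturation, s, d, p0=(1.0, 2.0))
print(f"asymptote {dmax:.2f}, half-saturation at {h:.1f} samples per field")
```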
Effect of uncertainties on probabilistic-based design capacity of hydrosystems
NASA Astrophysics Data System (ADS)
Tung, Yeou-Koung
2018-02-01
Hydrosystems engineering designs involve analysis of hydrometric data (e.g., rainfall, floods) and use of hydrologic/hydraulic models, all of which contribute various degrees of uncertainty to the design process. Uncertainties in hydrosystem designs can be generally categorized into aleatory and epistemic types. The former arises from the natural randomness of hydrologic processes whereas the latter are due to knowledge deficiency in model formulation and model parameter specification. This study shows that the presence of epistemic uncertainties induces uncertainty in determining the design capacity. Hence, the designer needs to quantify the uncertainty features of design capacity to determine the capacity with a stipulated performance reliability under the design condition. Using detention basin design as an example, the study illustrates a methodological framework by considering aleatory uncertainty from rainfall and epistemic uncertainties from the runoff coefficient, curve number, and sampling error in design rainfall magnitude. The effects of including different items of uncertainty and performance reliability on the design detention capacity are examined. A numerical example shows that the mean value of the design capacity of the detention basin increases with the design return period and this relation is found to be practically the same regardless of the uncertainty types considered. The standard deviation associated with the design capacity, when subject to epistemic uncertainty, increases with both design frequency and items of epistemic uncertainty involved. It is found that the epistemic uncertainty due to sampling error in rainfall quantiles should not be ignored. Even with a sample size of 80 (relatively large for a hydrologic application) the inclusion of sampling error in rainfall quantiles resulted in a standard deviation about 2.5 times higher than that considering only the uncertainty of the runoff coefficient and curve number. Furthermore, the presence of epistemic uncertainties in the design would result in under-estimation of the annual failure probability of the hydrosystem and has a discounting effect on the anticipated design return period.
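A stripped-down version of this propagation exercise, with the rational formula Q = 0.278 C i A standing in for the hydrologic model: sample the uncertain inputs, then read the capacity for a stipulated performance reliability off the Monte Carlo distribution. Every distribution and number below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Aleatory design rainfall plus epistemic uncertainties (all values assumed):
i_design = rng.normal(100.0, 12.0, n)      # rainfall intensity, mm/h; the sd
                                           # mimics sampling error in the quantile
C = rng.triangular(0.55, 0.65, 0.75, n)    # runoff coefficient (epistemic)
A = 2.0                                    # catchment area, km^2 (fixed)

Q = 0.278 * C * i_design * A               # rational formula -> m^3/s
print(f"design capacity: mean {Q.mean():.1f} m^3/s, sd {Q.std():.1f} m^3/s")
print(f"capacity at 90% performance reliability: {np.quantile(Q, 0.90):.1f} m^3/s")
```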
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background: Stepped-wedge cluster randomised trials (SW-CRTs) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods: We carried out a methodological review of SW-CRTs up to October 2014 and assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results: We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportion reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor, with only 3 of 24 (13%) of these studies allowing for repeated measures. Discussion: The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting, and for methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
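The clustering allowance that most trials did report is the familiar design-effect inflation, sketched below. Note that this is the standard parallel cluster-trial design effect only; stepped-wedge calculations additionally require the time-effect and repeated-measures adjustments whose omission the review criticises.

```python
import math

def cluster_trial_n(n_srs, m, icc):
    """Inflate an individually randomised sample size by the design effect
    1 + (m - 1) * ICC for clusters of average size m."""
    deff = 1 + (m - 1) * icc
    return deff, math.ceil(n_srs * deff)

deff, n = cluster_trial_n(n_srs=400, m=20, icc=0.05)
print(f"design effect {deff:.2f} -> {n} participants in total")
```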
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.
Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping
2015-06-07
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, noting that inference procedures used with Simon's designs almost always ignore the actual sampling plan; in particular, they proposed an inference method for studies in which the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method is difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations: it generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness; proper statistical inference procedures should be used.
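For context, the operating characteristics that any inference for Simon's design must respect follow directly from the binomial sampling plan. The sketch below computes the probability of early termination (PET) and of accepting the null for a hypothetical design; the parameters are illustrative, not taken from the paper.

```python
from scipy.stats import binom

def simon_oc(p, n1, r1, n, r):
    """PET and overall probability of accepting H0 (drug ineffective)
    under Simon's two-stage sampling plan."""
    pet = binom.cdf(r1, n1, p)          # stop after stage 1 if <= r1 responses
    accept = pet + sum(
        binom.pmf(x, n1, p) * binom.cdf(r - x, n - n1, p)
        for x in range(r1 + 1, min(n1, r) + 1)
    )
    return pet, accept

# Hypothetical design parameters (not from the paper): n1=19, r1=4, n=54, r=15.
for p in (0.20, 0.35):                  # null vs alternative response rate
    pet, acc = simon_oc(p, 19, 4, 54, 15)
    print(f"p={p:.2f}: PET={pet:.2f}, P(accept H0)={acc:.2f}")
```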
Planning and setting objectives in field studies: Chapter 2
Fisher, Robert N.; Dodd, C. Kenneth
2016-01-01
This chapter enumerates the steps required in designing and planning field studies on the ecology and conservation of reptiles, as these involve a high level of uncertainty and risk. To this end, the chapter differentiates between goals (descriptions of what one intends to accomplish) and objectives (the measurable steps required to achieve the established goals); meeting a specific goal may thus require many objectives. It may not be possible to define some objectives until certain experiments have been conducted, and evaluations of sampling protocols are often needed to increase certainty in the biological results. If sampling locations are fixed and sampling events are repeated over time, then both study-specific covariates and sampling-specific covariates should exist. Other critical design considerations for field studies include obtaining permits, as well as researching ethics and biosecurity issues.
A Semantic Differential Evaluation of Attitudinal Outcomes of Introductory Physical Science.
ERIC Educational Resources Information Center
Hecht, Alfred Roland
This study was designed to assess the attitudinal outcomes of Introductory Physical Science (IPS) curriculum materials used in schools. Random samples of 240 students receiving IPS instruction and 240 non-science students were assigned to separate Solomon four-group designs with non-equivalent control groups. Random samples of 60 traditional…
Latent spatial models and sampling design for landscape genetics
Ephraim M. Hanks; Melvin B. Hooten; Steven T. Knick; Sara J. Oyler-McCance; Jennifer A. Fike; Todd B. Cross; Michael K. Schwartz
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial...
Design of Phase II Non-inferiority Trials.
Jung, Sin-Ho
2017-09-01
With the development of inexpensive treatment regimens and less invasive surgical procedures, we are increasingly confronted with non-inferiority study objectives. A non-inferiority phase III trial requires a roughly four times larger sample size than a comparable standard superiority trial. Because of the large required sample size, we often face feasibility issues in opening a non-inferiority trial. Furthermore, for lack of phase II non-inferiority trial design methods, we have no opportunity to investigate the efficacy of the experimental therapy through a phase II trial. As a result, we often fail to open a non-inferiority phase III trial, and a large number of non-inferiority clinical questions remain unanswered. In this paper, we develop designs for non-inferiority randomized phase II trials with feasible sample sizes. We first review a design method for non-inferiority phase III trials, and then propose three different designs for non-inferiority phase II trials that can be used under different settings. Each method is demonstrated with examples, and each of the proposed design methods is shown to require a reasonable sample size for a non-inferiority phase II trial. The three designs address different settings but require similar sample sizes, typical of phase II trials.
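The feasibility problem arises because the non-inferiority margin enters the sample-size formula squared. A minimal sketch for comparing two proportions, assuming for simplicity that both arms share the same true response rate (all numbers illustrative):

```python
from scipy.stats import norm

def ni_per_arm_n(p, margin, alpha=0.05, power=0.80):
    """Per-arm n for a one-sided non-inferiority test of two proportions,
    assuming both arms truly share response rate p."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    return round((za + zb) ** 2 * 2 * p * (1 - p) / margin ** 2)

# Halving the margin roughly quadruples n, the feasibility problem above.
for margin in (0.15, 0.075):
    print(f"margin {margin}: {ni_per_arm_n(p=0.60, margin=margin)} per arm")
```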
ERIC Educational Resources Information Center
Gbetodeme, Selom; Amankwa, Joana; Dzegblor, Noble Komla
2016-01-01
To facilitate the design process in every art form, there are certain guidelines that all professional designers should use. These are known as elements and principles of design. This study is a survey carried out to assess the knowledge of dressmakers about basic design in the Ho Municipality of Ghana. Sixty dressmakers were randomly sampled for…
A Numerical Climate Observing Network Design Study
NASA Technical Reports Server (NTRS)
Stammer, Detlef
2003-01-01
This project was concerned with three related questions of an optimal design of a climate observing system: 1. The spatial sampling characteristics required from an ARGO system. 2. The degree to which surface observations from ARGO can be used to calibrate and test satellite remote sensing observations of sea surface salinity (SSS) as it is anticipated now. 3. The more general design of a climate observing system as required in the near future for CLIVAR in the Atlantic. An important question in implementing an observing system is that of the sampling density required to observe climate-related variations in the ocean. For that purpose this project was concerned with the sampling requirements for the ARGO float system, but also investigated other elements of a climate observing system. As part of this project we studied the horizontal and vertical sampling characteristics of a global ARGO system which is required to make it fully complementary to altimeter data, with the goal of capturing climate-related variations on large spatial scales (less than 1000 km). We addressed this question in the framework of a numerical model study in the North Atlantic with a 1/6-degree horizontal resolution. The advantage of a numerical design study is the knowledge of the full model state. Sampled by a synthetic float array, model results therefore allow us to test and improve existing deployment strategies with the goal of making the system as optimal and cost-efficient as possible. Attachment: "Optimal observations for variational data assimilation".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Wahish, Amal; Armitage, D.; Hill, B.
A design for a sample cell system suitable for high temperature Quasi-Elastic Neutron Scattering (QENS) experiments is presented. The apparatus was developed at the Spallation Neutron Source in Oak Ridge National Lab where it is currently in use. The design provides a special sample cell environment under controlled humid or dry gas flow over a wide range of temperature up to 950 °C. Using such a cell, chemical, dynamical, and physical changes can be studied in situ under various operating conditions. While the cell combined with a portable automated gas environment system is especially useful for in situ studies of microscopic dynamics under operational conditions that are similar to those of solid oxide fuel cells, it can additionally be used to study a wide variety of materials, such as high temperature proton conductors. The cell can also be used in many different neutron experiments when a suitable sample holder material is selected. The sample cell system has recently been used to reveal fast dynamic processes in quasi-elastic neutron scattering experiments, which standard probes (such as electrochemical impedance spectroscopy) could not detect. In this work, we outline the design of the sample cell system and present results demonstrating its abilities in high temperature QENS experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Wahish, Amal; Armitage, D.; al-Binni, U.
Our design for a sample cell system suitable for high temperature Quasi-Elastic Neutron Scattering (QENS) experiments is presented. The apparatus was developed at the Spallation Neutron Source in Oak Ridge National Lab where it is currently in use. The design provides a special sample cell environment under controlled humid or dry gas flow over a wide range of temperature up to 950°C. Using such a cell, chemical, dynamical, and physical changes can be studied in situ under various operating conditions. While the cell combined with a portable automated gas environment system is especially useful for in situ studies of microscopic dynamics under operational conditions that are similar to those of solid oxide fuel cells, it can additionally be used to study a wide variety of materials, such as high temperature proton conductors. The cell can also be used in many different neutron experiments when a suitable sample holder material is selected. Finally, the sample cell system has recently been used to reveal fast dynamic processes in quasi-elastic neutron scattering experiments, which standard probes (such as electrochemical impedance spectroscopy) could not detect. In this work, we outline the design of the sample cell system and present results demonstrating its abilities in high temperature QENS experiments.
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which results in expensive computational costs. Variable-fidelity approximation-based design optimization approaches can realize effective simulation and efficient optimization of the design space using approximation models with different levels of fidelity, and have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for a low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for a high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
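A toy version of the nesting idea, with plain random search standing in for both the successive local enumeration and the modified novel global harmony search steps used in the article: build a maximin Latin hypercube for the low-fidelity model, then take the high-fidelity points as the subset with the best maximin distance, which is nested by construction.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)

def lhs(n, d):
    """Random Latin hypercube sample in [0, 1]^d."""
    return np.column_stack([(rng.permutation(n) + rng.random(n)) / n
                            for _ in range(d)])

def maximin_lhs(n, d, tries=200):
    """Crude maximin LHD: keep the best of `tries` random hypercubes."""
    best, score = None, -1.0
    for _ in range(tries):
        x = lhs(n, d)
        m = pdist(x).min()
        if m > score:
            best, score = x, m
    return best

low = maximin_lhs(40, 2)                 # low-fidelity design
best_idx, score = None, -1.0
for _ in range(500):                     # search for the high-fidelity subset
    idx = rng.choice(40, size=10, replace=False)
    m = pdist(low[idx]).min()
    if m > score:
        best_idx, score = idx, m
high = low[best_idx]                     # nested within the low-fidelity design
print(f"minimum pairwise distance in the subset: {score:.3f}")
```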
Sample acquisition and instrument deployment
NASA Technical Reports Server (NTRS)
Boyd, Robert C.
1995-01-01
Progress is reported in developing the Sample Acquisition and Instrument Deployment (SAID) system, a robotic system for deploying science instruments and acquiring samples for analysis. The system is a conventional four-degree-of-freedom manipulator, 2 meters in length. A baseline design has been achieved through analysis and trade studies. The design considers environmental operating conditions on the surface of Mars, as well as volume constraints on proposed Mars landers. Control issues have also been studied, and simulations of joint and tip movements have been performed. The systems have been fabricated and tested in environmental chambers, as well as in soil testing and robotic control testing.
Olives, Casey; Valadez, Joseph J.; Brooker, Simon J.; Pagano, Marcello
2012-01-01
Background: Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. Methodology: We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n = 15 and n = 25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Principal Findings: Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n = 15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. Conclusion/Significance: This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools. PMID:22970333
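The operating characteristics behind such designs follow directly from the binomial distribution. The sketch below traces three-category classification probabilities across true prevalences for hypothetical decision thresholds; the actual MC-LQAS cut-offs in the paper differ.

```python
import numpy as np
from scipy.stats import binom

# Hypothetical three-category rule for n = 25 (the paper's cut-offs differ):
# classify <=10% if at most d1 positives, >=50% if at least d2 positives.
n, d1, d2 = 25, 3, 12

for p in np.arange(0.05, 0.70, 0.10):   # true prevalence
    low = binom.cdf(d1, n, p)
    high = binom.sf(d2 - 1, n, p)
    mid = 1.0 - low - high
    print(f"p={p:.2f}: P(low)={low:.2f}  P(mid)={mid:.2f}  P(high)={high:.2f}")
```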
Adaptation of G-TAG Software for Validating Touch-and-Go Comet Surface Sampling Design Methodology
NASA Technical Reports Server (NTRS)
Mandic, Milan; Acikmese, Behcet; Blackmore, Lars
2011-01-01
The G-TAG software tool was developed under the R&TD on Integrated Autonomous Guidance, Navigation, and Control for Comet Sample Return, and represents a novel, multi-body dynamics simulation software tool for studying TAG sampling. The G-TAG multi-body simulation tool provides a simulation environment in which a Touch-and-Go (TAG) sampling event can be extensively tested. TAG sampling requires the spacecraft to descend to the surface, contact the surface with a sampling collection device, and then to ascend to a safe altitude. The TAG event lasts only a few seconds but is mission-critical with potentially high risk. Consequently, there is a need for the TAG event to be well characterized and studied by simulation and analysis in order for the proposal teams to converge on a reliable spacecraft design. This adaptation of the G-TAG tool was developed to support the Comet Odyssey proposal effort, and is specifically focused to address comet sample return missions. In this application, the spacecraft descends to and samples from the surface of a comet. Performance of the spacecraft during TAG is assessed based on survivability and sample collection performance. For the adaptation of the G-TAG simulation tool to comet scenarios, models are developed that accurately describe the properties of the spacecraft, approach trajectories, and descent velocities, as well as the models of the external forces and torques acting on the spacecraft. The adapted models of the spacecraft, descent profiles, and external sampling forces/torques were more sophisticated and customized for comets than those available in the basic G-TAG simulation tool. Scenarios implemented include the study of variations in requirements, spacecraft design (size, locations, etc. of the spacecraft components), and the environment (surface properties, slope, disturbances, etc.). The simulations, along with their visual representations using G-View, contributed to the Comet Odyssey New Frontiers proposal effort by indicating problems and/or benefits of different approaches and designs.
Age 60 Study. Part 1. Bibliographic Database
1994-10-01
seven of these aircraft types participated in a spectacle design study. Experimental spectacles were designed for each pilot and evaluated for... observation flight administered by observers who were uninformed of the details of the experimental design. Students and instructors also completed a critique... intraindividual lability in field-dependence-field-independence, and (4) various measurement, sampling, and experimental design concerns associated
Glenn, Beth A; Bastani, Roshan; Maxwell, Annette E
2013-01-01
Threats to external validity, including pretest sensitisation and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomised trial designed to promote mammography use in a high-risk sample of women. During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilising two different designs (pretest-posttest control group design and posttest-only control group design). Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity.
Secondary outcome analysis for data from an outcome-dependent sampling design.
Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo
2018-04-22
Outcome-dependent sampling (ODS) is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented in which the expensive exposure is measured only on a simple random sample plus supplemental samples selected from the two tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable; this is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification, and it leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
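A bare-bones version of the inverse probability weighted estimating equation (without the augmentation term) is sketched below; the data-generating model, selection fractions and tail cut-off are all invented to mimic an ODS design with known inclusion probabilities.

```python
import numpy as np

rng = np.random.default_rng(11)

# Invented population: exposure x, primary outcome y1, secondary outcome y2,
# with errors shared across outcomes so that selection on y1 distorts y2 | x.
N = 200_000
x = rng.normal(size=N)
e1 = rng.normal(size=N)
y1 = x + e1
y2 = 0.5 * x + 0.6 * e1 + 0.8 * rng.normal(size=N)

# ODS: a simple random sample plus supplements from the tails of y1.
cut = np.quantile(np.abs(y1), 0.98)
in_tail = np.abs(y1) > cut
sel = (rng.random(N) < 0.02) | (in_tail & (rng.random(N) < 0.5))

# Known inclusion probabilities give the inverse-probability weights.
pi = np.where(in_tail, 1 - (1 - 0.02) * (1 - 0.5), 0.02)
w = 1.0 / pi[sel]

# Naive vs weighted least squares for the secondary model y2 ~ x.
X = np.column_stack([np.ones(sel.sum()), x[sel]])
naive = np.linalg.lstsq(X, y2[sel], rcond=None)[0][1]
ipw = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y2[sel]))[1]
print(f"naive slope {naive:.3f}  IPW slope {ipw:.3f}  (true 0.5)")
```

The unweighted fit is biased because tail supplementation induces correlation between the exposure and the shared error in the sample; reweighting to the population removes it.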
A re-evaluation of a case-control model with contaminated controls for resource selection studies
Christopher T. Rota; Joshua J. Millspaugh; Dylan C. Kesler; Chad P. Lehman; Mark A. Rumble; Catherine M. B. Jachowski
2013-01-01
A common sampling design in resource selection studies involves measuring resource attributes at sample units used by an animal and at sample units considered available for use. Few models can estimate the absolute probability of using a sample unit from such data, but such approaches are generally preferred over statistical methods that estimate a relative probability...
Missing observations in multiyear rotation sampling designs
NASA Technical Reports Server (NTRS)
Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)
1982-01-01
Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case in which a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered both when missing segments are not replaced and when they are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.
Sensitive Metamaterial Sensor for Distinction of Authentic and Inauthentic Fuel Samples
NASA Astrophysics Data System (ADS)
Tümkaya, Mehmet Ali; Dinçer, Furkan; Karaaslan, Muharrem; Sabah, Cumali
2017-08-01
A metamaterial-based sensor has been realized to distinguish authentic and inauthentic fuel samples in the microwave frequency regime. Unlike many studies in the literature on metamaterial-based sensor applications, this study focuses on a compact metamaterial-based sensor operating in the X-band frequency range. First, the electromagnetic properties of authentic and inauthentic fuel samples were obtained experimentally in a laboratory environment. These experimental results were then used to design and realize a highly efficient metamaterial-based sensor with easy fabrication characteristics and a simple design structure. The experimental results for the sensor were in good agreement with the numerical ones. The proposed sensor offers an efficient design and can be used to detect fuel and many other liquids in various application fields, from medical to military, in several frequency regimes.
ERIC Educational Resources Information Center
Bethel, James; Green, James L.; Nord, Christine; Kalton, Graham; West, Jerry
2005-01-01
This report is Volume 2 of the methodology report that provides information about the development, design, and conduct of the 9-month data collection of the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B). This volume begins with a brief overview of the ECLS-B, but focuses on the sample design, calculation of response rates, development…
Survey design for lakes and reservoirs in the United States to assess contaminants in fish tissue.
Olsen, Anthony R; Snyder, Blaine D; Stahl, Leanne L; Pitt, Jennifer L
2009-03-01
The National Lake Fish Tissue Study (NLFTS) was the first survey of fish contamination in lakes and reservoirs in the 48 conterminous states based on a probability survey design. This study included the largest set (268) of persistent, bioaccumulative, and toxic (PBT) chemicals ever studied in predator and bottom-dwelling fish species. The U.S. Environmental Protection Agency (USEPA) implemented the study in cooperation with states, tribal nations, and other federal agencies, with field collection occurring at 500 lakes and reservoirs over a four-year period (2000-2003). The sampled lakes and reservoirs were selected using a spatially balanced unequal probability survey design from 270,761 lake objects in USEPA's River Reach File Version 3 (RF3). The survey design selected 900 lake objects, with a reserve sample of 900, equally distributed across six lake area categories. A total of 1,001 lake objects were evaluated to identify 500 lake objects that met the study's definition of a lake and could be accessed for sampling. Based on the 1,001 evaluated lakes, it was estimated that a target population of 147,343 (+/-7% with 95% confidence) lakes and reservoirs met the NLFTS definition of a lake. Of the estimated 147,343 target lakes, 47% were estimated not to be sampleable either due to landowner access denial (35%) or due to physical barriers (12%). It was estimated that a sampled population of 78,664 (+/-12% with 95% confidence) lakes met the NLFTS lake definition, had either predator or bottom-dwelling fish present, and could be sampled.
NASA Astrophysics Data System (ADS)
Lazarina, Maria; Kallimanis, Athanasios S.; Pantis, John D.; Sgardelis, Stefanos P.
2014-11-01
The species-area relationship (SAR) is one of the few generalizations in ecology. However, many different relationships are denoted as SARs. Here, we empirically evaluated the differences between SARs derived from nested-contiguous and non-contiguous sampling designs, using plant, bird and butterfly datasets from Great Britain, Greece, Massachusetts, New York and San Diego. The shape of the SAR depends on the sampling scheme, but there is little empirical documentation of the magnitude of the deviation between different types of SARs and the factors affecting it. We implemented a strictly nested sampling design to construct the nested-contiguous SAR (SACR), and systematic nested but non-contiguous, and random designs to construct non-contiguous species richness curves (SASRs for systematic and SACs for random designs) per dataset. The SACR lay below any SASR and most of the SACs. The deviation between them was related to the exponent f of the power-law relationship between sampled area and extent: the lower the exponent f, the greater the deviation between the curves. We linked SACR to SASR and SAC through the concept of “effective” area (Ae), i.e., the nested-contiguous area containing the same number of species as the accumulated sampled area (AS) of a non-contiguous sampling. The relationship between effective and sampled area was modeled as log(Ae) = k log(AS). A generalized linear model was used to estimate the values of k from sampling design and dataset properties. The parameter k increased with the average distance between samples and with beta diversity, while k decreased with f. For both systematic and random sampling, the model performed well in predicting effective area in both the training set and a test set entirely independent of the training set. Through effective area, we can link different types of species richness curves based on sampling design properties, sampling effort, spatial scale and beta diversity patterns.
Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling
2006-01-01
Hidden populations, such as injection drug users and sex workers, are central to a number of public health problems. However, because of the nature of these groups, it is difficult to collect accurate information about them, and this difficulty complicates disease prevention efforts. A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence of certain traits in these populations. Yet, not enough is known about the sample-to-sample variability of these prevalence estimates. In this paper, we present a bootstrap method for constructing confidence intervals around respondent-driven sampling estimates and demonstrate in simulations that it outperforms the naive method currently in use. We also use simulations and real data to estimate the design effects for respondent-driven sampling in a number of situations. We conclude with practical advice about the power calculations that are needed to determine the appropriate sample size for a study using respondent-driven sampling. In general, we recommend a sample size twice as large as would be needed under simple random sampling. PMID:16937083
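That rule of thumb translates directly into a planning calculation: size the study as if under simple random sampling, then double it. A minimal sketch, with the target proportion and margin of error as illustrative assumptions:

```python
import math
from scipy.stats import norm

def srs_n(p, moe, alpha=0.05):
    """Simple-random-sampling size to estimate a proportion p within +/- moe."""
    z = norm.ppf(1 - alpha / 2)
    return math.ceil(z**2 * p * (1 - p) / moe**2)

n_srs = srs_n(p=0.30, moe=0.05)   # illustrative prevalence and precision
print(n_srs, 2 * n_srs)           # RDS: double the SRS size (design effect ~2)
```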
Schmidt, M; Eng, P J; Stubbs, J E; Fenter, P; Soderholm, L
2011-07-01
We present a novel design of a purpose-built, portable sample cell for in situ x-ray scattering experiments on radioactive or atmosphere-sensitive samples. The cell has a modular design that includes two independent layers of containment used simultaneously to isolate the sensitive samples. Both layers of containment can be flushed with an inert gas, thus serving the double purpose of containing radiological material (either as a solid sample or as a liquid phase) and separating reactive samples from the ambient atmosphere. A remote-controlled solution flow system is integrated into the containment system, allowing sorption experiments to be performed on the diffractometer. The cell's design is discussed in detail, and we demonstrate the cell's performance by presenting first results of crystal truncation rod measurements. The results were obtained from muscovite mica single crystals reacted with 1 mM solutions of Th(IV) with 0.1 M NaCl background electrolyte. Data were obtained in specular as well as off-specular geometry.
Disease-Concordant Twins Empower Genetic Association Studies.
Tan, Qihua; Li, Weilong; Vandin, Fabio
2017-01-01
Genome-wide association studies with moderate sample sizes are underpowered, especially when testing SNP alleles with low allele counts, a situation that may lead to a high frequency of false-positive results and a lack of replication in independent studies. Related individuals, such as twin pairs concordant for a disease, should confer increased power in genetic association analysis because of their genetic relatedness. We conducted a computer simulation study to explore the power advantage of the disease-concordant twin design, which uses singletons from disease-concordant twin pairs as cases and ordinary healthy samples as controls. We examined the power gain of the twin-based design for various scenarios (i.e., cases from monozygotic and dizygotic twin pairs concordant for a disease) and compared the power with that of the ordinary case-control design with cases collected from the unrelated patient population. Simulation was done by assigning various allele frequencies and allelic relative risks for different modes of genetic inheritance. In general, to achieve 80% power, the sample sizes needed for dizygotic and monozygotic twin cases were one half and one fourth, respectively, of the sample size of an ordinary case-control design, with variations depending on the genetic mode. Importantly, the enriched power for dizygotic twins also applies to disease-concordant sibling pairs, which largely extends the application of the concordant twin design. Overall, our simulation revealed the high value of disease-concordant twins in genetic association studies and encourages the use of genetically related individuals for efficiently identifying both common and rare genetic variants underlying human complex diseases without increasing laboratory cost. © 2016 John Wiley & Sons Ltd/University College London.
Earth Entry Vehicle Design for Sample Return Missions Using M-SAPE
NASA Technical Reports Server (NTRS)
Samareh, Jamshid
2015-01-01
Most mission concepts that return sample material to Earth share one common element: an Earth entry vehicle (EEV). The primary focus of this paper is the examination of the EEV design space for relevant sample return missions. Mission requirements for EEV concepts can be divided into three major groups: entry conditions (e.g., velocity and flight path angle), payload (e.g., mass, volume, and g-load limit), and vehicle characteristics (e.g., thermal protection system, structural topology, and landing concepts). The impacts of these requirements on the EEV design have been studied with an integrated system analysis tool, and the results are discussed in detail. In addition, critical design drivers identified through sensitivity analyses are reviewed.
Optimal Design for Two-Level Random Assignment and Regression Discontinuity Studies
ERIC Educational Resources Information Center
Rhoads, Christopher H.; Dye, Charles
2016-01-01
An important concern when planning research studies is to obtain maximum precision of an estimate of a treatment effect given a budget constraint. When research designs have a "multilevel" or "hierarchical" structure, changes in sample size at different levels of the design will impact precision differently. Furthermore, there…
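The truncated abstract concerns precision under a budget constraint in multilevel designs. A standard optimal-design result (stated here as background, not quoted from the abstract) is that the variance-minimizing number of individuals per cluster depends only on the cluster-to-individual cost ratio and the intraclass correlation; a sketch with assumed costs:

```python
from math import sqrt, floor

def optimal_cluster_size(cost_cluster, cost_individual, icc):
    """Variance-minimizing number of individuals per cluster for a
    two-level cluster-randomized design (standard optimal-design result)."""
    return sqrt((cost_cluster / cost_individual) * (1 - icc) / icc)

def affordable_clusters(budget, cost_cluster, cost_individual, n_per_cluster):
    """Clusters affordable at the chosen cluster size."""
    return floor(budget / (cost_cluster + cost_individual * n_per_cluster))

# Illustrative (assumed) costs and ICC, not taken from the abstract.
n_opt = optimal_cluster_size(cost_cluster=300, cost_individual=10, icc=0.05)
J = affordable_clusters(budget=30000, cost_cluster=300, cost_individual=10,
                        n_per_cluster=round(n_opt))
print(round(n_opt, 1), J)  # 23.9 individuals per cluster, 55 clusters
```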
[Methodological design of the National Health and Nutrition Survey 2016].
Romero-Martínez, Martín; Shamah-Levy, Teresa; Cuevas-Nasu, Lucía; Gómez-Humarán, Ignacio Méndez; Gaona-Pineda, Elsa Berenice; Gómez-Acosta, Luz María; Rivera-Dommarco, Juan Ángel; Hernández-Ávila, Mauricio
2017-01-01
To describe the design methodology of the 2016 halfway National Health and Nutrition Survey (Ensanut-MC). The Ensanut-MC is a national probabilistic survey whose target population is the inhabitants of private households in Mexico. The sample size was determined to allow inferences on urban and rural areas in four regions. The paper describes the main design elements: target population, topics of study, sampling procedure, measurement procedure, and logistics organization. The final sample comprised 9,479 completed household interviews and 16,591 individual interviews. The response rate was 77.9% for households and 91.9% for individuals. The Ensanut-MC probabilistic design allows valid statistical inferences about parameters of interest for Mexico's public health and nutrition, specifically overweight, obesity, and diabetes mellitus. The updated information also supports the monitoring, updating, and formulation of new policies and priority programs.
A Study of Trial and Error Learning in Technology, Engineering, and Design Education
ERIC Educational Resources Information Center
Franzen, Marissa Marie Sloan
2016-01-01
The purpose of this research study was to determine whether trial and error learning is an effective, practical, and efficient learning method for Technology, Engineering, and Design Education students at the post-secondary level. A mixed methods explanatory research design was used to measure the viability of the learning source. The study sample was…
ERIC Educational Resources Information Center
Shiyko, Mariya P.; Ram, Nilam
2011-01-01
Researchers have been making use of ecological momentary assessment (EMA) and other study designs that sample feelings and behaviors in real time and in naturalistic settings to study temporal dynamics and contextual factors of a wide variety of psychological, physiological, and behavioral processes. As EMA designs become more widespread,…
An evaluation of the systems approach to bridge design.
DOT National Transportation Integrated Search
1974-01-01
This study contains the findings from a survey of available integrated computer systems for bridge analysis and design, along with a sample design of a grade separation structure using the two leading design systems, BEST and STRC. It appears that in...
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentration sampling points on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide a basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence-interval coverage of parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
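A rough sketch of the simulation setting described, assuming illustrative population values (the paper's actual parameter values and its estimation step, e.g., nonlinear mixed-effects fitting, are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed population parameters, for illustration only.
CL_POP, V_POP = 1.5, 10.0   # clearance (L/h) and volume (L)
OMEGA = 0.3                 # inter-animal SD on the log scale
SIGMA = 0.1                 # proportional residual (intra-animal) SD
DOSE = 100.0                # IV bolus dose (mg)

def simulate_animal(times):
    """One-compartment IV bolus concentrations for one animal, with
    lognormal inter-animal and proportional residual variability."""
    cl = CL_POP * np.exp(rng.normal(0, OMEGA))
    v = V_POP * np.exp(rng.normal(0, OMEGA))
    conc = (DOSE / v) * np.exp(-(cl / v) * np.asarray(times))
    return conc * (1 + rng.normal(0, SIGMA, size=len(times)))

# A three-time-point design; in a destructive ("quantic") study each
# animal would contribute a single time point.
design = [0.5, 4.0, 12.0]
data = np.array([simulate_animal(design) for _ in range(30)])
print(data.mean(axis=0))
```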
Treatment Trials for Neonatal Seizures: The Effect of Design on Sample Size
Stevenson, Nathan J.; Boylan, Geraldine B.; Hellström-Westas, Lena; Vanhatalo, Sampsa
2016-01-01
Neonatal seizures are common in the neonatal intensive care unit. Clinicians treat these seizures with several anti-epileptic drugs (AEDs) to reduce the seizure burden. Current AEDs exhibit sub-optimal efficacy, and several randomized controlled trials (RCTs) of novel AEDs are planned. The aim of this study was to measure the influence of trial design on the required sample size of an RCT. We used seizure time courses from 41 term neonates with hypoxic ischaemic encephalopathy to build seizure treatment trial simulations. We used five outcome measures, three AED protocols, eight treatment delays from seizure onset (Td), and four levels of trial AED efficacy to simulate different RCTs. We performed power calculations for each RCT design and analysed the resultant sample size. We also assessed the rate of false positives, or placebo effect, in typical uncontrolled studies. We found that the false positive rate ranged from 5 to 85% of patients depending on RCT design. For controlled trials, the choice of outcome measure had the largest effect on sample size, with a median 30.7-fold difference (IQR: 13.7–40.0) across a range of AED protocols, Td, and trial AED efficacy (p<0.001). RCTs that compared the trial AED with positive controls required sample sizes with a median 3.2-fold increase (IQR: 1.9–11.9; p<0.001). Delays in AED administration from seizure onset also increased the required sample size 2.1-fold (IQR: 1.7–2.9; p<0.001). Subgroup analysis showed that RCTs in neonates treated with hypothermia required a median 2.6-fold increase in sample size (IQR: 2.4–3.0) compared to trials in normothermic neonates (p<0.001). These results show that RCT design has a profound influence on the required sample size. Trials that use a control group, an appropriate outcome measure, and analyses that control for differences in Td between groups will be valid and minimise sample size. PMID:27824913
Roux, Emmanuel; Gaborit, Pascal; Romaña, Christine A; Girod, Romain; Dessay, Nadine; Dusfour, Isabelle
2013-12-01
Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained and many field studies only rely on subjective and/or qualitative approaches to design collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. Such components defined new variables which could then be used in a robust k-means clustering procedure. Then, we identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. We validated our method by comparing the species overlap of entomological collections from selected sites and the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on i) the balanced environmental variable groups considered jointly (p = 0.001) and ii) land cover/use (p-value < 0.001). The Jaccard index was significantly correlated with land cover/use-based environmental similarity (p-value = 0.001). The results validate our sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes.
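A minimal sketch of this kind of stratification pipeline, with a hypothetical environmental table; ordinary PCA on one-hot-encoded, standardized data stands in for the factorial analysis of mixed groups used in the study:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical site-by-environment table with mixed variable types.
env = pd.DataFrame({
    "elevation": rng.normal(100, 20, 200),
    "dist_water": rng.exponential(50, 200),
    "land_cover": rng.choice(["forest", "crop", "urban"], 200),
})

# One-hot encode the categorical variable and standardize everything
# (a rough stand-in for a factorial analysis of mixed data).
X = pd.get_dummies(env, columns=["land_cover"]).astype(float)
X = StandardScaler().fit_transform(X)

# Keep leading components, then form five strata as in the study.
components = PCA(n_components=3).fit_transform(X)
strata = KMeans(n_clusters=5, n_init=10).fit_predict(components)

# Sampling sites would then be drawn within each stratum.
print(np.bincount(strata))
```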
Investigating Test Equating Methods in Small Samples through Various Factors
ERIC Educational Resources Information Center
Asiret, Semih; Sünbül, Seçil Ömür
2016-01-01
This study aimed to compare equating methods for the random groups design with small samples across factors such as sample size, difference in difficulty between forms, and the guessing parameter, and to investigate which method gives better results under which conditions. For this purpose, 5,000 dichotomous simulated data…
Erickson, Melinda L.
2012-01-01
The U.S. Geological Survey, in cooperation with the Minnesota Pollution Control Agency, completed a study on the occurrence of steroidal hormones and other endocrine active compounds in shallow groundwater in nonagricultural areas of Minnesota during 2009–10. This report describes the study design and methods, and presents the data collected on steroidal hormones and other related compounds. Environmental and quality-control samples were collected from 40 wells as part of this study. Samples were analyzed by the U.S. Geological Survey National Water Quality Laboratory for 16 steroidal hormones and 4 other related compounds, of which all but 2 compounds are endocrine active compounds. Most of the water samples did not contain detectable concentrations of any of the 20 compounds analyzed. Water samples from three wells had detectable concentrations of one or more compounds. Bisphenol A was detected in samples from three wells, and trans-diethylstilbestrol was detected in one of the samples in which bisphenol A also was detected.
Influence function based variance estimation and missing data issues in case-cohort studies.
Mark, S D; Katki, H
2001-12-01
Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design, in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994) and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance and be completely at random, or may occur as part of the sampling design and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence-function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
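A two-arm simplification of the testing problem, as a hedged sketch: negative binomial counts compared with a Wald statistic for the log rate ratio using sample-moment (not the paper's restricted maximum-likelihood) variances, checked for type I error under the null. All parameter values are assumed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def nb_sample(mean, shape, size):
    """Negative binomial draws parameterized by mean and shape."""
    p = shape / (shape + mean)
    return rng.negative_binomial(shape, p, size)

def wald_log_rate_ratio(x, y):
    """Wald z for log(mean_x / mean_y), delta-method SE from sample moments."""
    mx, my = x.mean(), y.mean()
    se = np.sqrt(x.var(ddof=1) / (len(x) * mx ** 2)
                 + y.var(ddof=1) / (len(y) * my ** 2))
    return (np.log(mx) - np.log(my)) / se

# Type I error under equal means (illustrative values).
rejections = sum(
    abs(wald_log_rate_ratio(nb_sample(2.0, 1.0, 60),
                            nb_sample(2.0, 1.0, 60))) > stats.norm.ppf(0.975)
    for _ in range(5000)
)
print(rejections / 5000)  # should land near the nominal 0.05
```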
Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.
2009-02-18
The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. Most of the sampling designs validated are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceeds levels of concern, at least one of the samples would be taken from a contaminated area, at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
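The detection criterion quoted at the end corresponds to a standard hot-spot sampling formula: with n randomly placed samples and a contaminated fraction f of the area, the chance of hitting contamination at least once is 1 - (1 - f)^n. A sketch (this is the generic formula, not necessarily VSP's exact algorithm):

```python
from math import ceil, log

def n_for_hotspot(confidence, contaminated_fraction):
    """Smallest number of randomly placed samples such that, if at least
    the given fraction of the area is contaminated, at least one sample
    lands in a contaminated area with the given confidence."""
    return ceil(log(1 - confidence) / log(1 - contaminated_fraction))

print(n_for_hotspot(0.95, 0.01))  # 299 samples: 95% confidence, 1% extent
print(n_for_hotspot(0.99, 0.05))  # 90 samples: 99% confidence, 5% extent
```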
The complexity of personality: advantages of a genetically sensitive multi-group design.
Hahn, Elisabeth; Spinath, Frank M; Siedler, Thomas; Wagner, Gert G; Schupp, Jürgen; Kandler, Christian
2012-03-01
Findings from many behavioral genetic studies utilizing the classical twin design suggest that genetic and non-shared environmental effects play a significant role in human personality traits. This study focuses on the methodological advantages of extending the sampling frame to include multiple dyads of relatives. We investigated the sensitivity of heritability estimates to the inclusion of sibling pairs, mother-child pairs and grandparent-grandchild pairs from the German Socio-Economic Panel Study in addition to a classical German twin sample consisting of monozygotic and dizygotic twins. The resulting dataset contained 1,308 pairs, including 202 monozygotic and 147 dizygotic twin pairs, along with 419 sibling pairs, 438 mother-child dyads, and 102 grandparent-child dyads. This genetically sensitive multi-group design allowed the simultaneous testing of additive and non-additive genetic, common and specific environmental effects, including cultural transmission and twin-specific environmental influences. Using manifest and latent modeling of phenotypes (i.e., controlling for measurement error), we compare results from the extended sample with those from the twin sample alone and discuss implications for future research.
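As background to what such multi-group models estimate, the classical back-of-envelope decomposition from twin correlations (Falconer's formula) can be sketched; this is far simpler than the study's structural-equation approach, and the correlations below are assumed for illustration:

```python
def falconer(r_mz, r_dz):
    """Falconer's classical decomposition from MZ/DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)  # additive genetic variance
    c2 = 2 * r_dz - r_mz    # common (shared) environment
    e2 = 1 - r_mz           # non-shared environment (incl. measurement error)
    return h2, c2, e2

# Correlations loosely typical of personality traits (assumed values).
h2, c2, e2 = falconer(r_mz=0.45, r_dz=0.20)
print(round(h2, 2), round(c2, 2), round(e2, 2))  # 0.5 -0.05 0.55
```

A negative shared-environment estimate, as here, hints at non-additive genetic effects, one motivation for the extended multi-group designs the paper advocates.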
Elliott, Sarah M.; Erickson, Melinda L.
2014-01-01
The U.S. Geological Survey, in cooperation with the Minnesota Pollution Control Agency, completed a study on the occurrence of pharmaceutical compounds and other contaminants of emerging concern in shallow groundwater in non-agricultural areas of Minnesota during 2013. This report describes the study design and methods for the study on the occurrence of pharmaceuticals and other contaminants of emerging concern, and presents the data collected on pharmaceutical compounds. Samples were analyzed by the U.S. Geological Survey National Water Quality Laboratory for 110 pharmaceutical compounds using research method 9017. Samples from 21 of 45 wells had detectable concentrations of at least one of the 110 compounds analyzed. One sample contained detectable concentrations of nine compounds, which was the most detected in a single sample. Fewer than five compounds were detected in most samples. Among all samples, 27 of the 110 compounds were detected in groundwater from at least one well. Desmethyldiltiazem and nicotine were the most frequently detected compounds, each detected in 5 of 46 environmental samples (one well was sampled twice so a total of 46 environmental samples were collected from 45 wells). Caffeine had the highest detectable concentration of all the compounds at 2,060 nanograms per liter.
Schramm, Sébastien; Vailhen, Dominique; Bridoux, Maxime Cyril
2016-02-12
A method for the sensitive quantification of trace amounts of organic explosives in water samples was developed using stir bar sorptive extraction (SBSE) followed by liquid desorption and ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). The proposed method was developed and optimized using a statistical design-of-experiments approach. Use of experimental designs allowed a complete study of 10 factors and 8 analytes, including nitro-aromatics, amino-nitro-aromatics and nitric esters. The liquid desorption study was performed using a full factorial experimental design followed by a kinetic study. Four different variables were tested here: the liquid desorption mode (stirring or sonication), the chemical nature of the stir bar (PDMS or PDMS-PEG), the composition of the liquid desorption phase and, finally, the volume of solvent used for the liquid desorption. The SBSE extraction study, in turn, was performed using a Doehlert design. SBSE extraction conditions such as extraction time profiles, sample volume, modifier addition, and acetic acid addition were examined. After optimization of the experimental parameters, sensitivity was improved by a factor of 5-30, depending on the compound studied, due to the enrichment factors reached using the SBSE method. Limits of detection were at the ng/L level for all analytes studied. Reproducibility of the extraction with different stir bars was close to the reproducibility of the analytical method (RSD between 4 and 16%). Extractions in various water sample matrices (spring, mineral and underground water) showed similar enrichment compared to ultrapure water, revealing very low matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
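A full factorial screen of the kind described for the liquid desorption step can be enumerated directly; the factor names and levels below echo the four variables listed but are otherwise illustrative:

```python
from itertools import product

# Hypothetical two-level factors for the liquid desorption screen.
factors = {
    "desorption_mode": ["stirring", "sonication"],
    "stir_bar": ["PDMS", "PDMS-PEG"],
    "solvent": ["ACN", "ACN/MeOH"],
    "volume_mL": [0.2, 1.0],
}

# Full factorial: every combination of levels, 2**4 = 16 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(i, run)
```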
Niss, Frida; Rosenmai, Anna Kjerstine; Mandava, Geeta; Örn, Stefan; Oskarsson, Agneta; Lundqvist, Johan
2018-04-01
The use of in vitro bioassays for studies of toxic activity in environmental water samples is a rapidly expanding field of research. Cell-based bioassays can assess the total toxicity exerted by a water sample, regardless of whether the toxicity is caused by a known or unknown agent or by a complex mixture of different agents. When using bioassays for environmental water samples, it is often necessary to concentrate the water samples before applying them. Commonly, water samples are concentrated 10-50 times. However, there is always a risk of losing compounds during such sample preparation. We have developed an alternative experimental design by preparing a concentrated cell culture medium which was then diluted in the environmental water sample to compose the final cell culture media for the in vitro assays. Water samples from five Swedish waste water treatment plants were analyzed for oxidative stress response, estrogen receptor (ER), and aryl hydrocarbon receptor (AhR) activity using this experimental design. We were able to detect responses equivalent to 8.8-11.3 ng/L TCDD for AhR activity and 0.4-0.9 ng/L 17β-estradiol for ER activity. We were unable to detect an oxidative stress response in any of the studied water samples. In conclusion, we have developed an experimental design allowing us to examine environmental water samples in in vitro toxicity assays at a concentration factor close to 1, without the risk of losing known or unknown compounds during an extraction procedure.
Liu, Wenjuan; Ji, Jianlin; Chen, Hua; Ye, Chenyu
2014-01-01
Color is one of the most powerful aspects of a psychological counseling environment. Little scientific research has been conducted on color design, and much of the existing literature is based on observational studies. Using design of experiments and response surface methodology, this paper proposes an optimal color design approach for transforming patients' perception into color elements. Six indices, pleasant-unpleasant, interesting-uninteresting, exciting-boring, relaxing-distressing, safe-fearful, and active-inactive, were used to assess patients' impressions. A total of 75 patients participated, including 42 in Experiment 1 and 33 in Experiment 2. In Experiment 1, 27 representative color samples were designed, and the color sample (L = 75, a = 0, b = -60) was the most preferred one. In Experiment 2, this color sample was set as the 'central point', and three color attributes were optimized to maximize the patients' satisfaction. The experimental results show that the proposed method can obtain the optimal solution for color design of a counseling room.
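A one-dimensional sketch of the response-surface step, with synthetic ratings peaking near the 'central point' on the CIELAB b* axis; the study itself optimized three color attributes jointly:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic satisfaction ratings along one color attribute (b*), with an
# assumed optimum near b = -60, the 'central point' from Experiment 1.
b_axis = np.linspace(-90, -30, 13)
rating = -0.002 * (b_axis + 60) ** 2 + 4.0 + rng.normal(0, 0.05, b_axis.size)

# Second-order (response surface) fit, then its stationary point.
c2, c1, c0 = np.polyfit(b_axis, rating, 2)
print(round(-c1 / (2 * c2), 1))  # optimum lands near -60
```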
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and seroconversion rate (SCR) as informative indicators of malaria burden in low transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference for parasite rates and not for these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to potential problems of under- or over-coverage for sample sizes ≤250 in very low and high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities. With respect to the second sample size calculator, simulation revealed the possibility of not having enough information to estimate SRR in low transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out on varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
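The first calculator rests on the reversible catalytic model linking SP, SCR, and SRR; transforming an SP confidence bound into an SCR bound is then a one-dimensional inversion. A sketch with assumed inputs (known SRR, a single sampling age rather than a full age distribution):

```python
import numpy as np
from scipy.optimize import brentq

def seroprevalence(age, scr, srr):
    """Reversible catalytic model: expected SP at a given age."""
    return scr / (scr + srr) * (1 - np.exp(-(scr + srr) * age))

def scr_from_sp(sp, age, srr):
    """Invert the model at one age: the SCR consistent with an observed SP."""
    return brentq(lambda l: seroprevalence(age, l, srr) - sp, 1e-6, 5.0)

# Assumed: known SRR of 0.01/year, mean sampling age 20 years; the SP
# point estimate and CI bounds map directly to SCR values.
lo, mid, hi = (scr_from_sp(sp, age=20, srr=0.01) for sp in (0.25, 0.30, 0.35))
print(round(lo, 4), round(mid, 4), round(hi, 4))
```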
Intra-class correlation estimates for assessment of vitamin A intake in children.
Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D
2005-03-01
In many community-based surveys, multi-level sampling is inherent in the design. In designing these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at village and block levels. Between-cluster variation was evident at each level of clustering. In these estimates, ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
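The variance inflation these ICCs imply follows the usual design-effect formula, DEFF = 1 + (m - 1) x ICC for clusters of size m; a sketch using the survey's cluster size of 16 households, with the specific ICC value assumed for illustration:

```python
def design_effect(icc, cluster_size):
    """Variance inflation from cluster sampling: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def inflated_sample_size(n_srs, icc, cluster_size):
    """Sample size needed under clustering to match SRS precision."""
    return round(n_srs * design_effect(icc, cluster_size))

# 16 households per village, as in the survey; ICC of 0.05 assumed.
print(design_effect(0.05, 16))              # 1.75
print(inflated_sample_size(400, 0.05, 16))  # 700
```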
Tapsell, Linda C; Lonergan, Maureen; Martin, Allison; Batterham, Marijka J; Neale, Elizabeth P
2015-11-01
Integrating professional expertise in diet, exercise and behavioural support may provide more effective preventive health services, but this needs testing. We describe the design and baseline results of a trial in the Illawarra region of New South Wales, Australia. The HealthTrack study is a 12-month randomised controlled trial testing the effects of a novel interdisciplinary lifestyle intervention versus usual care. The study recruited overweight and obese adults aged 25-54 years resident in the Illawarra. The primary outcome was weight, and secondary outcomes were disease risk factors (lipids, glucose, blood pressure) and behaviour (diet, activity, and psychological factors). Protocols, recruitment and baseline characteristics are reported. Between May 2014 and April 2015, 377 participants were recruited and randomised. The median age (IQR) of the mostly female sample (74%) was 45 (37-51) years. The sample comprised obese (BMI 32 (29-35) kg/m(2)), well-educated (79% post-school qualifications) non-smokers (96%). A high proportion reported suffering from anxiety (26.8%) and depression (33.7%). Metabolic syndrome was identified in 34.9% of the sample. The HealthTrack study sample was recruited to test the effectiveness of an interdisciplinary approach to preventive healthcare in self-identified overweight adults in the Illawarra region. The profile of participants gives some indication of those likely to use services similar to the trial design. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Adolph, Karen E.; Robinson, Scott R.
2011-01-01
Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…
Optimum design of Geodesic dome’s jointing system
NASA Astrophysics Data System (ADS)
Tran, Huy. T.
2018-04-01
This study develops a new design for the joint connectors of a geodesic dome. The proposed connector allows a flexible rotating connection and, compared with existing alternatives, is cheaper while remaining workable. After calculating the bearing capacity of the sample according to EC3 and the Vietnamese standard TCVN 5575-2012, an FEM model of the design sample is analysed in several specific situations to assess the stress distribution, deformation, and local failure in the connector. The analytical results and the FE data are consistent. The FE analysis also reveals the behavior of some details that simple calculation cannot show. On this basis, the optimum design of the joint connector can be chosen.
Evaluation of agile designs in first-in-human (FIH) trials--a simulation study.
Perlstein, Itay; Bolognese, James A; Krishna, Rajesh; Wagner, John A
2009-12-01
The aim of the investigation was to evaluate alternatives to standard first-in-human (FIH) designs in order to optimize the information gained from such studies by employing novel agile trial designs. Agile designs combine adaptive and flexible elements to enable optimized use of prior information, either before and/or during the conduct of the study, to seamlessly update the study design. A comparison of the traditional 6 + 2 (active + placebo) subjects-per-cohort design with alternative, reduced-sample-size agile designs was performed using discrete event simulation. Agile designs were evaluated for specific adverse event models and rates as well as dose-proportional, saturated, and steep-accumulation pharmacokinetic profiles. Alternative, reduced-sample-size (hereafter referred to as agile) designs are proposed for cases where prior knowledge about pharmacokinetics and/or adverse event relationships is available or appropriately assumed. Additionally, preferred alternatives are proposed for the general case when prior knowledge is limited or unavailable. Within the tested conditions and stated assumptions, some agile designs were found to be as efficient as traditional designs. Thus, simulations demonstrated that the agile design is a robust and feasible approach to FIH clinical trials, with no meaningful loss of relevant information as it relates to PK and AE assumptions. In some circumstances, applying agile designs may decrease the duration and resources required for Phase I studies, increasing the efficiency of early clinical development. We highlight the value and importance of useful prior information when specifying key assumptions related to safety, tolerability, and PK.
[Survey of what is published on Italian nursing journals].
Bongiorno, Elena; Colleoni, Pasqualina; Casati, Monica
2005-01-01
Nursing research is an important activity for nurses; its main aim is to improve the quality of nursing. Several national and European laws have been issued on the subject. To develop nursing knowledge, nurses must understand research results, implement them in different situations, and sometimes carry out research themselves. The results can be published in nursing journals, which many nurses use to share information. This study reviewed the characteristics of research articles published in Italian nursing journals from 1998 to 2003. The phenomena of interest were: areas of enquiry, investigators, methods, research design, sampling, and means of gathering data. 122 articles were reviewed: 78% focused on clinical aspects, 55% were carried out by nurses, 92% adopted a quantitative approach, 90% used a non-experimental design, 89% used convenience sampling, and 58% gathered data through self-reported answers. The characteristics of this study are similar to those of other studies of Italian nursing publications. There are some limits in this type of literature: limited generalizability because of the low representativeness of convenience samples, and a higher risk of faulty inference due to the frequent use of non-experimental designs. However, the number of Italian nurses carrying out research is increasing, and nursing is the most studied area.
Comparing three sampling techniques for estimating fine woody down dead biomass
Robert E. Keane; Kathy Gray
2013-01-01
Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...
A STRINGENT COMPARISON OF SAMPLING AND ANALYSIS METHODS FOR VOCS IN AMBIENT AIR
A carefully designed study was conducted during the summer of 1998 to simultaneously collect samples of ambient air by canisters and compare the analysis results to direct sorbent preconcentration results taken at the time of sample collection. A total of 32 1-h sample sets we...
Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.
Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J
2013-06-01
An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
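A hedged sketch of the underlying idea with a generic effect measure and a fixed-effect pooling step (the paper's bivariate sensitivity-specificity models are more involved); all study effects, variances, and the assumed true effect are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def pooled(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and its variance."""
    w = 1 / np.asarray(variances)
    return np.sum(w * effects) / np.sum(w), 1 / np.sum(w)

# Existing meta-analysis (assumed log-scale accuracy effects and variances).
old_eff = np.array([0.35, 0.10, 0.25])
old_var = np.array([0.09, 0.16, 0.12])
TRUE_EFFECT, SD = 0.3, 1.8

def power_of_update(n_new, n_sim=4000):
    """Chance the updated pooled 95% CI excludes zero after adding a new
    study of size n_new."""
    hits = 0
    for _ in range(n_sim):
        new_var = SD ** 2 / n_new
        new_eff = rng.normal(TRUE_EFFECT, np.sqrt(new_var))
        est, var = pooled(np.append(old_eff, new_eff),
                          np.append(old_var, new_var))
        hits += est - 1.96 * np.sqrt(var) > 0
    return hits / n_sim

for n in (50, 100, 200, 400):
    print(n, round(power_of_update(n), 3))
```

Plotting these power values against n reproduces the decision aid the paper describes.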
Knowledge of Metabolic Syndrome in Chinese Adults: Implications for Health Education
ERIC Educational Resources Information Center
Lo, Sally Wai Sze; Chair, Sek Ying; Lee, Iris Fung Kam
2016-01-01
Objective: The objective of this study was to assess knowledge of metabolic syndrome (MS) among Chinese adults and provide directions for designing healthcare promotion schemes for improving MS awareness in the community. Design: The study adopted a cross-sectional design and a convenience sampling method. Method: Chinese adults aged 18-65 years…
Basic School Teachers' Perceptions about Curriculum Design in Ghana
ERIC Educational Resources Information Center
Abudu, Amadu Musah; Mensah, Mary Afi
2016-01-01
This study focused on teachers' perceptions about curriculum design and barriers to their participation. The sample size was 130 teachers, who responded to a questionnaire. The analyses made use of descriptive statistics and narrative description. The study found that the level of teachers' participation in curriculum design is low. The results further…
Efficiency Study of NLS Base-Year Design. RTI-22U-884-3.
ERIC Educational Resources Information Center
Moore, R. P.; And Others
An efficiency study was conducted of the base year design used for the National Longitudinal Study of the High School Class of 1972 (NLS). Finding the optimal design involved a search for the numbers of sample schools and students that would minimize the variance at a given cost. Twenty-one variables describing students' plans, attitudes,…
ERIC Educational Resources Information Center
Urban, Jennifer Brown; van Eeden-Moorefield, Bradley Matheus
2017-01-01
Designing your own study and writing your research proposal takes time, often more so than conducting the study. This practical, accessible guide walks you through the entire process. You will learn to identify and narrow your research topic, develop your research question, design your study, and choose appropriate sampling and measurement…
Trajectory Design for a Single-String Impactor Concept
NASA Technical Reports Server (NTRS)
Dono Perez, Andres; Burton, Roland; Stupl, Jan; Mauro, David
2017-01-01
This paper introduces a trajectory design for a secondary spacecraft concept to augment science return in interplanetary missions. The concept consists of a single-string probe with a kinetic impactor on board that generates an artificial plume to perform in-situ sampling. The trajectory design was applied to a particular case study that samples ejecta particles from the Jovian moon Europa. Results were validated using statistical analysis. Details regarding the navigation, targeting and disposal challenges related to this concept are presented herein.
ICS-II USA research design and methodology.
Rana, H; Andersen, R M; Nakazono, T T; Davidson, P L
1997-05-01
The purpose of the WHO-sponsored International Collaborative Study of Oral Health Outcomes (ICS-II) was to provide policy-makers and researchers with detailed, reliable, and valid data on the oral health situation in their countries or regions, together with comparative data from other dental care delivery systems. ICS-II used a cross-sectional design with no explicit control groups or experimental interventions. A standardized methodology was developed and tested for collecting and analyzing epidemiological, sociocultural, economic, and delivery system data. Respondent information was obtained by household interviews, and clinical examinations were conducted by calibrated oral epidemiologists. Discussed are the sampling design characteristics for the USA research locations, response rates, sample sizes for interview and oral examination data, weighting procedures, and statistical methods. SUDAAN was used to adjust variance calculations, since complex sampling designs were used.
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
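A hedged sketch of the kind of saving reported: a two-stage design with one interim look at half the sample and early stopping for efficacy at a Pocock-type boundary (the constant 2.178 is the standard two-look, two-sided 5% value; the t-statistic is compared with it as a normal approximation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
POCOCK_Z = 2.178  # two-look Pocock boundary, two-sided alpha = 0.05

def sequential_trial(d=1.0, n_per_group=18):
    """Two-stage two-sample comparison: stop early for efficacy if the
    interim statistic crosses the boundary; report units consumed."""
    a = rng.normal(d, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    half = n_per_group // 2
    t_interim, _ = stats.ttest_ind(a[:half], b[:half])
    if abs(t_interim) > POCOCK_Z:
        return 2 * half           # stopped early
    return 2 * n_per_group        # ran to completion

used = [sequential_trial() for _ in range(4000)]
print(round(np.mean(used) / 36, 2))  # fraction of planned units consumed
```

With d = 1 and n = 18 per group, the long-run consumption comes out near the roughly 80% figure quoted in the abstract.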
Sample and design considerations in post-disaster mental health needs assessment tracking surveys
Kessler, Ronald C.; Keane, Terence M.; Ursano, Robert J.; Mokdad, Ali; Zaslavsky, Alan M.
2009-01-01
Although needs assessment surveys are carried out after many large natural and man-made disasters, synthesis of findings across these surveys and disaster situations about patterns and correlates of need is hampered by inconsistencies in study designs and measures. Recognizing this problem, the US Substance Abuse and Mental Health Services Administration (SAMHSA) assembled a task force in 2004 to develop a model study design and interview schedule for use in post-disaster needs assessment surveys. The US National Institute of Mental Health subsequently approved a plan to establish a center to implement post-disaster mental health needs assessment surveys in the future using an integrated series of measures and designs of the sort proposed by the SAMHSA task force. A wide range of measurement, design, and analysis issues will arise in developing this center. Given that the least widely discussed of these issues concerns study design, the current report focuses on the most important sampling and design issues proposed for this center based on our experiences with the SAMHSA task force, subsequent Katrina surveys, and earlier work in other disaster situations. PMID:19035440
ERIC Educational Resources Information Center
Ercikan, Kadriye; Roth, Wolff-Michael
2014-01-01
Context: Generalization is a critical concept in all research designed to generate knowledge that applies to all elements of a unit (population) while studying only a subset of these elements (sample). Commonly applied criteria for generalizing focus on experimental design or representativeness of samples of the population of units. The criteria…
A study of hydrocarbons associated with brines from DOE geopressured wells. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeley, D.F.
1993-07-01
Accomplishments are summarized on the following tasks: distribution coefficients and solubilities, DOE design well sampling, analysis of well samples, review of theoretical models of geopressured reservoir hydrocarbons, monitor for aliphatic hydrocarbons, development of a pH meter probe, DOE design well scrubber analysis, removal and disposition of gas scrubber equipment at Pleasant Bayou Well, and disposition of archived brines.
A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials
ERIC Educational Resources Information Center
Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn
2006-01-01
A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…
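A single-level sketch of the percentile bootstrap for an indirect effect a x b (the designs in the study are cluster randomized, so this ignores the multilevel structure); the data-generating values are assumed:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

# Simulated single-level data: treatment -> mediator -> outcome.
x = rng.integers(0, 2, n).astype(float)      # random assignment
m = 0.5 * x + rng.normal(0, 1, n)            # a-path = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(0, 1, n)  # b-path = 0.4

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # slope of m on x
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]      # slope of y on m given x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                   # resample cases
    boot.append(indirect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(indirect(x, m, y), 3), (round(lo, 3), round(hi, 3)))
```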
The Danish National Health Survey 2010. Study design and respondent characteristics.
Christensen, Anne Illemann; Ekholm, Ola; Glümer, Charlotte; Andreasen, Anne Helms; Hvidberg, Michael Falk; Kristensen, Peter Lund; Larsen, Finn Breinholt; Ortiz, Britta; Juel, Knud
2012-06-01
In 2010 the five Danish regions and the National Institute of Public Health at the University of Southern Denmark conducted a national representative health survey among the adult population in Denmark. This paper describes the study design and the sample and study population as well as the content of the questionnaire. The survey was based on five regional stratified random samples and one national random sample. The samples were mutually exclusive. A total of 298,550 individuals (16 years or older) were invited to participate. Information was collected using a mixed mode approach (paper and web questionnaires). A questionnaire with a minimum of 52 core questions was used in all six subsamples. Calibrated weights were computed in order to take account of the complex survey design and reduce non-response bias. In all, 177,639 individuals completed the questionnaire (59.5%). The response rate varied from 52.3% in the Capital Region of Denmark sample to 65.5% in the North Denmark Region sample. The response rate was particularly low among young men, unmarried people and among individuals with a different ethnic background than Danish. The survey was a result of extensive national cooperation across sectors, which makes it unique in its field of application, e.g. health surveillance, planning and prioritizing public health initiatives and research. However, the low response rate in some subgroups of the study population can pose problems in generalizing data, and efforts to increase the response rate will be important in the forthcoming surveys.
Lucernoni, Federico; Rizzotto, Matteo; Tapparo, Federica; Capelli, Laura; Sironi, Selena; Busini, Valentina
2016-11-01
The work focuses on the principles for the design of a specific static hood and on the definition of an optimal sampling procedure for the assessment of landfill gas (LFG) surface emissions. This is carried out by means of computational fluid dynamics (CFD) simulations to investigate the fluid dynamics conditions of the hood. The study proves that understanding the fluid dynamic conditions is fundamental in order to understand the sampling results and correctly interpret the measured concentration values by relating them to a suitable LFG emission model, and therefore to estimate emission rates. For this reason, CFD is a useful tool for the design and evaluation of sampling systems, among others, to verify the fundamental hypotheses on which the mass balance for the sampling hood is defined. The procedure here discussed, which is specific for the case of the investigated landfill, can be generalized to be applied also to different scenarios, where hood sampling is involved. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class) for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
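For the equal-weight SRSWOR special case, the four accuracy summaries reduce to simple sample statistics over the sampled units; a sketch with hypothetical mapped and reference compositions (the paper's general two-stage estimators additionally carry design weights):

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical mapped vs. reference proportions of one class for 50 units.
ref = rng.beta(2, 8, 50)
mapped = np.clip(ref + rng.normal(0.0, 0.05, 50), 0, 1)
d = mapped - ref

md = d.mean()                          # mean deviation (bias)
mad = np.abs(d).mean()                 # mean absolute deviation
rmse = np.sqrt((d ** 2).mean())        # root mean square error
corr = np.corrcoef(mapped, ref)[0, 1]  # correlation
print(round(md, 4), round(mad, 4), round(rmse, 4), round(corr, 3))
```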
Pemberton, T J; Jakobsson, M; Conrad, D F; Coop, G; Wall, J D; Pritchard, J K; Patel, P I; Rosenberg, N A
2008-07-01
When performing association studies in populations that have not been the focus of large-scale investigations of haplotype variation, it is often helpful to rely on genomic databases in other populations for study design and analysis - such as in the selection of tag SNPs and in the imputation of missing genotypes. One way of improving the use of these databases is to rely on a mixture of database samples that is similar to the population of interest, rather than using the single most similar database sample. We demonstrate the effectiveness of the mixture approach in the application of African, European, and East Asian HapMap samples for tag SNP selection in populations from India, a genetically intermediate region underrepresented in genomic studies of haplotype variation.
Sampling design considerations for demographic studies: a case of colonial seabirds
Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth
2009-01-01
For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field. These field techniques and associated modeling approaches are applicable to studies of most taxa being marked and in some cases have individually been applied to studies of birds, fish, herpetofauna, and mammals.
Backhouse, Martin E
2002-01-01
A number of approaches to conducting economic evaluations could be adopted. However, some decision makers have a preference for wholly stochastic cost-effectiveness analyses, particularly if the sampled data are derived from randomised controlled trials (RCTs). Formal requirements for cost-effectiveness evidence have heightened concerns in the pharmaceutical industry that development costs and times might rise if such requirements increase the number, duration or costs of RCTs. Whether this proves to be the case will depend upon the timing, nature and extent of the cost-effectiveness evidence required. The objective of this study was to illustrate how different requirements for wholly stochastic cost-effectiveness evidence could significantly affect two of the major determinants of new drug development costs and times, namely RCT sample size and study duration. Using data collected prospectively in a clinical evaluation, sample sizes were calculated for a number of hypothetical cost-effectiveness study design scenarios, and the results were compared with a baseline clinical trial design. The sample sizes required for the cost-effectiveness study scenarios were mostly larger than those for the baseline clinical trial design. Circumstances can be such that a wholly stochastic cost-effectiveness analysis might not be a practical proposition even though its clinical counterpart is. In such situations, alternative research methodologies would be required. For wholly stochastic cost-effectiveness analyses, the importance of prior specification of the different components of study design is emphasised. However, it is doubtful whether all the information necessary for doing this will typically be available when product registration trials are being designed. Formal requirements for wholly stochastic cost-effectiveness evidence based on the standard frequentist paradigm have the potential to increase the size, duration and number of RCTs significantly and hence the costs and timelines associated with new product development. Moreover, it is possible to envisage situations where such an approach would be impossible to adopt. Clearly, further research is required into the issue of how to appraise the economic consequences of alternative economic evaluation research strategies.
Power and sample size for multivariate logistic modeling of unmatched case-control studies.
Gail, Mitchell H; Haneuse, Sebastien
2017-01-01
Sample size calculations are needed to design and assess the feasibility of case-control studies. Although such calculations are readily available for simple case-control designs and univariate analyses, there is limited theory and software for multivariate unconditional logistic analysis of case-control data. Here we outline the theory needed to detect scalar exposure effects or scalar interactions while controlling for other covariates in logistic regression. Both analytical and simulation methods are presented, together with links to the corresponding software.
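Where closed-form theory is limited, power for a multivariate logistic analysis can also be approximated by simulation. The sketch below (Python, with hypothetical effect sizes and exposure prevalence; not the authors' analytical method) estimates power to detect a scalar exposure effect while adjusting for one covariate.

```python
import numpy as np
import statsmodels.api as sm

def logistic_power(n, beta_x, beta_z=0.5, n_sim=500, alpha=0.05, seed=0):
    """Simulated power for a Wald test of a scalar exposure effect beta_x
    in a logistic model adjusting for a covariate z (illustrative values)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        z = rng.normal(size=n)
        x = rng.binomial(1, 0.3, size=n)                 # binary exposure
        p = 1 / (1 + np.exp(-(-1.0 + beta_x * x + beta_z * z)))
        y = rng.binomial(1, p)
        X = sm.add_constant(np.column_stack([x, z]))
        fit = sm.Logit(y, X).fit(disp=0)
        hits += fit.pvalues[1] < alpha                   # test on x
    return hits / n_sim

print(logistic_power(n=400, beta_x=0.6))
```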
Application of Allflex conservation buffer in Illumina genotyping.
de Groot, M; Ras, T; van Haeringen, W A
2016-12-01
This experiment was designed to determine whether the liquid conservation buffer used in the novel Tissue Sampling Technology (TST) from Allflex can be used for Illumina BeadChip genotyping. Ear punches were collected from 6 bovine animals, using both the Tissue Sampling Unit (TSU) and the Total Tagger Universal (TTU) collection system. The stability of the liquid conservation buffer was tested by genotyping samples on Illumina BeadChips after incubation for 0, 3, 15, 24, 48, 72, 168, 336, and 720 h following sample collection. Additionally, a replenishment study was designed to test how often the liquid conservation buffer could be completely replenished before a significant drop in call rate was observed. Results from the stability study showed an average call rate of 0.993 for samples collected with the TSU system and 0.953 for samples collected with the TTU system, both exceeding the inclusion threshold call rate of 0.85. As an additional control, the identity of the individual animals was confirmed using the International Society of Animal Genetics (ISAG) recommended SNP panel. The replenishment study revealed a slight drop in the sample call rate after replenishing the conservation buffer for the fourth time for both the TSU and the TTU samples. In routine analysis, this application allows multiple experiments to be performed on the liquid conservation buffer while preserving the tissue samples for future use. The data collected in this study show that the liquid conservation buffer used in the TST system can be used for Illumina BeadChip genotyping applications.
Combining band recovery data and Pollock's robust design to model temporary and permanent emigration
Lindberg, M.S.; Kendall, W.L.; Hines, J.E.; Anderson, M.G.
2001-01-01
Capture-recapture models are widely used to estimate demographic parameters of marked populations. Recently, this statistical theory has been extended to modeling dispersal of open populations. Multistate models can be used to estimate movement probabilities among subdivided populations if multiple sites are sampled. Frequently, however, sampling is limited to a single site. Models described by Burnham (1993, in Marked Individuals in the Study of Bird Populations, 199-213), which combined open population capture-recapture and band-recovery models, can be used to estimate permanent emigration when sampling is limited to a single population. Similarly, Kendall, Nichols, and Hines (1997, Ecology 51, 563-578) developed models to estimate temporary emigration under Pollock's (1982, Journal of Wildlife Management 46, 757-760) robust design. We describe a likelihood-based approach to simultaneously estimate temporary and permanent emigration when sampling is limited to a single population. We use a sampling design that combines the robust design and recoveries of individuals obtained immediately following each sampling period. We present a general form for our model where temporary emigration is a first-order Markov process, and we discuss more restrictive models. We illustrate these models with analysis of data on marked Canvasback ducks. Our analysis indicates that probability of permanent emigration for adult female Canvasbacks was 0.193 (SE = 0.082) and that birds that were present at the study area in year i - 1 had a higher probability of presence in year i than birds that were not present in year i - 1.
75 FR 43172 - Maternal, Infant, and Early Childhood Home Visiting Program
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
... the evaluation results have been published in a peer-reviewed journal; or (bb) quasi-experimental... design (i.e. randomized controlled trial [RCT] or quasi-experimental design [QED]), level of attrition... a quasi-experimental design as a study design in which sample members are selected for the program...
ERIC Educational Resources Information Center
Duvekot, Jorieke; Hoopen, Leontine W.; Slappendel, Geerte; van der Ende, Jan; Verhulst, Frank C.; van der Sijde, Ad; Greaves-Lord, Kirstin
2017-01-01
This paper provides an overview of the design and cohort characteristics of the Social Spectrum Study: a clinical cohort study that used a two-phase sampling design to identify children at risk for ASD. After screening 1281 children aged 2.5-10 years who had been consecutively referred to one of six mental health services in the Netherlands,…
Moerbeek, Mirjam
2018-01-01
Background: This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. One type of health professional delivers one treatment, and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum.
Methods: The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at lowest cost. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically, and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment.
Results: Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions.
Conclusion: This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts' opinions or findings in the literature. PMID:29316807
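For intuition, a minimal power calculation for one pairwise comparison under nesting might look like the following Python sketch. It uses the standard design-effect formula for two independent arms of k professionals with n patients each; the crossed component of the authors' design is ignored, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def pairwise_power(delta, sigma2, icc, k, n, alpha=0.05):
    """Approximate power for comparing two treatments, each delivered by
    its own set of k health professionals treating n patients each.
    Variance of the mean difference is inflated by the design effect
    1 + (n - 1) * icc induced by nesting patients in professionals."""
    var_diff = 2 * sigma2 * (1 + (n - 1) * icc) / (k * n)
    z = delta / np.sqrt(var_diff)
    return norm.cdf(z - norm.ppf(1 - alpha / 2))

print(pairwise_power(delta=0.4, sigma2=1.0, icc=0.05, k=10, n=15))
```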
Latent spatial models and sampling design for landscape genetics
Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
Opinions of Teachers and Preservice Teachers of Social Studies on Geo-Literacy
ERIC Educational Resources Information Center
Memisoglu, Hatice
2017-01-01
The purpose of this study is to investigate the opinions of teachers and preservice teachers of social studies on geo-literacy. The study used the qualitative research design of phenomenology to collect data. The study consisted of 20 teachers and 30 prospective teachers of social studies. The purposive sampling method of criterion sampling was…
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
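A minimal version of such an exact power computation, using the standard result that MSB/MSW in the one-way random-effects model is a scaled central F variate, might look like this Python sketch; the details of Shieh's procedures and the allocation and cost schemes are not reproduced here.

```python
from scipy.stats import f

def icc_test_power(k, n, rho0, rho1, alpha=0.05):
    """Exact power for testing H0: ICC <= rho0 against H1: ICC = rho1
    in the one-way random-effects model with k groups of size n.
    MSB/MSW ~ [(1 + (n-1)rho) / (1 - rho)] * F(k-1, k(n-1))."""
    df1, df2 = k - 1, k * (n - 1)
    scale0 = (1 + (n - 1) * rho0) / (1 - rho0)
    scale1 = (1 + (n - 1) * rho1) / (1 - rho1)
    crit = f.ppf(1 - alpha, df1, df2) * scale0   # critical value under H0
    return f.sf(crit / scale1, df1, df2)         # exceedance prob. under H1

print(icc_test_power(k=30, n=4, rho0=0.3, rho1=0.5))
```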
Mars, Phobos, and Deimos Sample Return Enabled by ARRM Alternative Trade Study Spacecraft
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew; Merrill, Raymond G.; Qu, Min; Naasz, Bo J.
2014-01-01
The Asteroid Robotic Redirect Mission (ARRM) has been the topic of many mission design studies since 2011. The reference ARRM spacecraft uses a powerful solar electric propulsion (SEP) system and a bag device to capture a small asteroid from an Earth-like orbit and redirect it to a distant retrograde orbit (DRO) around the moon. The ARRM Option B spacecraft uses the same propulsion system and a multi-Degree-of-Freedom (DoF) manipulator device to retrieve a very large sample (thousands of kilograms) from a farther-away Near Earth Asteroid (NEA) more than 100 meters in diameter. This study will demonstrate that the ARRM Option B spacecraft design can also be used to return samples from Mars and its moons - either by acquiring a large rock from the surface of Phobos or Deimos, and/or by rendezvousing with a sample-return spacecraft launched from the surface of Mars.
A new design of groundwater sampling device and its application.
Tsai, Yih-jin; Kuo, Ming-ching T
2005-01-01
Atmospheric compounds can contaminate samples of groundwater. An inexpensive and simple method for collecting groundwater samples is developed to prevent contamination when the background concentration of contaminants is high. This new design of groundwater sampling device involves a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low flow-rate peristaltic pump with Teflon tubing and was filled with water. No headspace was left in the sampling bottle. The sample bottle was then packed in a PVC bag to prevent the target component from infiltrating into the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. The analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all the groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere. The improved sampling method largely eliminated the problems of contamination, preservation and quantitative analysis of natural water.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
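A minimal computation of MultSE from a community matrix, following the paper's definition (MultSE = sqrt(V/n), with V = SS/(n-1) and SS the sum of squared interpoint dissimilarities divided by n), could look like the following Python sketch; the published implementation is in R, and the double resampling step for uncertainty is omitted here.

```python
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    """Pseudo multivariate dissimilarity-based standard error (MultSE)
    for an n-samples-by-p-taxa matrix under a chosen dissimilarity."""
    n = data.shape[0]
    d2 = pdist(data, metric=metric) ** 2   # squared pairwise dissimilarities
    ss = d2.sum() / n                      # PERMANOVA-style total sum of squares
    v = ss / (n - 1)                       # pseudo variance
    return np.sqrt(v / n)

rng = np.random.default_rng(7)
community = rng.poisson(3.0, size=(20, 12))   # 20 samples x 12 taxa (toy data)
print(mult_se(community))
```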
Espin‐Garcia, Osvaldo; Craiu, Radu V.
2017-01-01
We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS-SNP (where SNP is single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulation. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared to marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. PMID:29239496
Three Conceptual Replication Studies in Group Theory
ERIC Educational Resources Information Center
Melhuish, Kathleen
2018-01-01
Many studies in mathematics education research occur with a nonrepresentative sample and are never replicated. To challenge this paradigm, I designed a large-scale study evaluating student conceptions in group theory that surveyed a national, representative sample of students. By replicating questions previously used to build theory around student…
Appearance Investment and Everyday Interpersonal Functioning: An Experience Sampling Study
ERIC Educational Resources Information Center
Forand, Nicholas R.; Gunthert, Kathleen C.; German, Ramaris E.; Wenze, Susan J.
2010-01-01
Several studies have shown that body satisfaction affects interpersonal functioning. However, few have studied the specific interpersonal correlates of another important body image dimension, appearance investment--that is, the importance a woman places on appearance. We used an experience sampling design with PDA (personal digital assistant)…
NHEXAS PHASE I MARYLAND STUDY--STANDARD OPERATING PROCEDURE FOR DUPLICATE SAMPLING (F12)
This SOP is designed to provide both general and specific guidance for the collection of duplicate samples by Harvard School of Public Health/Emory University for the NHEXAS project. The purpose of the duplicate sampling was to ensure that the value obtained for a sample was ind...
Simulation methods to estimate design power: an overview for applied research.
Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E
2011-06-20
Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
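As a toy version of this approach for a cluster-randomized design, the following Python sketch estimates power by simulating trials and analyzing each with a t-test on cluster means (one simple, valid analysis); the article's own examples use regression models with code in R and Stata, and all parameter values here are illustrative.

```python
import numpy as np
from scipy import stats

def cluster_trial_power(n_clusters, m, icc, delta,
                        n_sim=500, alpha=0.05, seed=42):
    """Simulated power for a two-arm cluster-randomized trial with
    n_clusters clusters of m individuals per arm; total variance is 1,
    split into between- and within-cluster parts by the ICC."""
    rng = np.random.default_rng(seed)
    sig_b, sig_w = np.sqrt(icc), np.sqrt(1 - icc)
    hits = 0
    for _ in range(n_sim):
        means = []
        for arm in (0, 1):
            u = rng.normal(0, sig_b, n_clusters)                  # cluster effects
            y = arm * delta + u[:, None] + rng.normal(0, sig_w, (n_clusters, m))
            means.append(y.mean(axis=1))                          # cluster means
        _, p = stats.ttest_ind(means[1], means[0])
        hits += p < alpha
    return hits / n_sim

print(cluster_trial_power(n_clusters=12, m=30, icc=0.03, delta=0.35))
```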
Gerassi, Lara; Edmond, Tonya; Nichols, Andrea
2017-06-01
The study of sex trafficking, prostitution, sex work, and sexual exploitation is associated with many methodological issues and challenges. Researchers' study designs must consider the many safety issues related to this vulnerable and hidden population. Community advisory boards and key stakeholder involvement are essential to study design to increase safety of participants, usefulness of study aims, and meaningfulness of conclusions. Nonrandomized sampling strategies are most often utilized when studying exploited women and girls, which have the capacity to provide rich data and require complex sampling and recruitment methods. This article reviews the current methodological issues when studying this marginalized population as well as strategies to address challenges while working with the community in order to bring about social change. The authors also discuss their own experiences in collaborating with community organizations to conduct research in this field.
Model-based inference for small area estimation with sampling weights
Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.
2017-01-01
Obtaining reliable estimates about health outcomes for areas or domains where few or no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, with stratification and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves large reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860
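For contrast with the model-based approach, a direct design-weighted (Hajek-type) prevalence estimate for one area is a one-liner; the sketch below uses invented data and is not the authors' hierarchical Bayesian estimator.

```python
import numpy as np

def weighted_prevalence(y, w):
    """Hajek-type design-weighted prevalence of a binary outcome:
    weighted sum of responses divided by the sum of the weights."""
    return np.sum(w * y) / np.sum(w)

y = np.array([1, 0, 0, 1, 0, 1])            # sampled binary responses
w = np.array([120, 80, 95, 200, 60, 150])   # sampling weights (illustrative)
print(weighted_prevalence(y, w))
```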
Development study of the X-ray scattering properties of a group of optically polished flat samples
NASA Technical Reports Server (NTRS)
Froechtenigt, J. F.
1973-01-01
A group of twelve optically polished flat samples was used to study the scattering of X-rays. The X-ray beam reflected from the twelve optical flat samples was analyzed by means of a long vacuum system specially designed for these tests. The scattering measurements were made at 8.34 Å and a 0.92 deg angle of incidence. The results for ten of the samples are comparable; the two exceptions are the fire-polished samples.
Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme
2013-07-01
The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of the sequences. However, the latter is an important criterion for large-scale applications as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
NASA Technical Reports Server (NTRS)
Korram, S.
1977-01-01
The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares), Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery; topographic and geologic data; ground truth data; and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantification of all the required parameters. The physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use the information obtained from aerial and ground data through an appropriate statistical sampling design.
Study of Home Demonstration Units in a Sample of 27 Counties in New York State, Number 3.
ERIC Educational Resources Information Center
Alexander, Frank D.; Harshaw, Jean
An exploratory study examined characteristics of 1,128 home demonstration units to suggest hypotheses and scope for a more intensive study of a small sample of units, and to provide guidance in sampling. Data were obtained from a specially designed membership card used in 1962. Unit size averaged 23.6 members but the range was fairly great. A need…
ERIC Educational Resources Information Center
Price, Jay R.
This study sought information about selective service rejection in Delaware, specifically rejectee characteristics, reasons for rejection, and the high rejection rate in Delaware. The basic design was a modified case study method in which a sample of individual records was examined. Differences between this sample and national samples were tested…
Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda
2016-01-01
Background: Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics.
Methods: Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described.
Results: A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using cohort or unclear designs. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively.
Conclusion: Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980
ERIC Educational Resources Information Center
Salvucci, Sameena; And Others
This technical report provides the results of a study on the calculation and use of generalized variance functions (GVFs) and design effects for the 1990-91 Schools and Staffing Survey (SASS). The SASS is a periodic integrated system of sample surveys conducted by the National Center for Education Statistics (NCES) that produces sampling variances…
A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shine, G.
The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options.
Composite Sample Design Option (a): Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used.
Composite Sample Design Option (b): Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used.
Composite Sample Design Option (c): Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All 5 source samples from the floor would be used.
Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact on the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each composite sample using four or five source samples. Although the variance using 5 source samples per composite sample (Option (c)) was slightly less than the variance using 4 source samples per composite sample (Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b), which utilizes a low-elevation and a high-elevation mound source sample and two floor source samples for each composite sample. Utilizing the four-source-samples-per-composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
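In practice, such an a priori calculation maps the five components directly to a sample size. A minimal example in Python with statsmodels, where the effect size, alpha, and power targets are illustrative assumptions:

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample size for a two-group comparison of a continuous
# outcome: assumed standardized effect size d = 0.5, alpha = 0.05,
# target power = 0.80, equal group sizes.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, ratio=1.0,
                                          alternative='two-sided')
print(round(n_per_group))   # roughly 64 per group under these assumptions
```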
Sampling Lesbian, Gay, and Bisexual Populations
ERIC Educational Resources Information Center
Meyer, Ilan H.; Wilson, Patrick A.
2009-01-01
Sampling has been the single most influential component of conducting research with lesbian, gay, and bisexual (LGB) populations. Poor sampling designs can result in biased results that will mislead other researchers, policymakers, and practitioners. Investigators wishing to study LGB populations must therefore devote significant energy and…
ERIC Educational Resources Information Center
Brown, James Dean
Some issues in the design of classroom research on second language teaching are discussed, with the intention of helping the researcher avoid conceptual pitfalls that may cripple the study later in the process. This begins with an examination of concerns in sampling, including definition of a population to be studied, alternative sampling…
Modeling false positive detections in species occurrence data under different study designs.
Chambert, Thierry; Miller, David A W; Nichols, James D
2015-02-01
The occurrence of false positive detections in presence-absence data, even when they occur infrequently, can lead to severe bias when estimating species occupancy patterns. Building upon previous efforts to account for this source of observational error, we established a general framework to model false positives in occupancy studies and extend existing modeling approaches to encompass a broader range of sampling designs. Specifically, we identified three common sampling designs that are likely to cover most scenarios encountered by researchers. The different designs all included ambiguous detections, as well as some known-truth data, but their modeling differed in the level of the model hierarchy at which the known-truth information was incorporated (site level or observation level). For each model, we provide the likelihood, as well as R and BUGS code needed for implementation. We also establish a clear terminology and provide guidance to help choosing the most appropriate design and modeling approach.
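The core of such models is a mixture likelihood in which detections arise with one probability at occupied sites and another (false positive) probability at unoccupied sites. The following Python sketch fits the simplest such design by maximum likelihood on simulated data; the paper's designs additionally incorporate known-truth data (at the site or observation level) to help separate the two detection processes, which is omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, d, J):
    """Negative log-likelihood for single-season occupancy with false
    positives: sites occupied with probability psi; per-survey detection
    probability p11 at occupied sites, p10 at unoccupied sites.
    Without known-truth data this simplest form can suffer label
    switching, which the designs in the paper are meant to resolve."""
    psi, p11, p10 = 1 / (1 + np.exp(-theta))   # logit scale -> (0, 1)
    l_occ = p11 ** d * (1 - p11) ** (J - d)
    l_un = p10 ** d * (1 - p10) ** (J - d)
    return -np.sum(np.log(psi * l_occ + (1 - psi) * l_un))

rng = np.random.default_rng(3)
J, n_sites = 5, 200
z = rng.binomial(1, 0.4, n_sites)                   # latent true occupancy
d = rng.binomial(J, np.where(z == 1, 0.5, 0.05))    # detections per site
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(d, J))
print(1 / (1 + np.exp(-fit.x)))                     # psi, p11, p10 estimates
```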
On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.
Haneuse, Sebastien; Rivera-Rodriguez, Claudia
2018-01-01
In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.
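As a bare-bones illustration of the weighting idea, the sketch below fits an inverse probability-weighted logistic regression with a robust variance to data case-control sampled from a population (all cases, a fraction of controls); the sampling fractions and effect sizes are invented, and the paper's full approach uses weighted generalized estimating equations with estimated correlation parameters to handle clustering.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.8 * x))))   # outcome model
p_samp = np.where(y == 1, 1.0, 0.2)       # keep all cases, 20% of controls
keep = rng.uniform(size=n) < p_samp
w = 1 / p_samp[keep]                      # inverse probability weights
X = sm.add_constant(x[keep])
fit = sm.GLM(y[keep], X, family=sm.families.Binomial(),
             freq_weights=w).fit(cov_type="HC1")   # robust (sandwich) SEs
print(fit.params, fit.bse)                # slope near 0.8 despite sampling
```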
Navigating complex sample analysis using national survey data.
Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo
2012-01-01
The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. The goal of national surveys is to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and to provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides the necessary estimates of variability within those groups. Using weights without the complex samples procedures accurately estimates population means and frequencies from the sample after accounting for over- or undersampling of specific groups, but weighting alone leads to inappropriate population estimates of variability, because they are computed as if the measures came from the entire population rather than from a sample. The SPSS Complex Samples procedure allows inclusion of all sampling design elements: stratification, clusters, and weights. Use of national data sets makes extensive, expensive, and well-documented survey data available for exploratory questions but limits analysis to the variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, the availability of data in several waves of surveys, and complex sampling provide a representative sample but present unique challenges. With sophisticated data analysis techniques, the use of these data can be optimized.
Plot shape effects on plant species diversity measurements
Keeley, Jon E.; Fotheringham, C.J.
2005-01-01
Question: Do rectangular sample plots record more plant species than square plots as suggested by both empirical and theoretical studies?
Location: Grasslands, shrublands and forests in the Mediterranean-climate region of California, USA.
Methods: We compared three 0.1-ha sampling designs that differed in the shape and dispersion of 1-m2 and 100-m2 nested subplots. We duplicated an earlier study that compared the Whittaker sample design, which had square clustered subplots, with the modified Whittaker design, which had dispersed rectangular subplots. To sort out effects of dispersion from shape we used a third design that overlaid square subplots on the modified Whittaker design. Also, using data from published studies we extracted species richness values for 400-m2 subplots that were either square or 1:4 rectangles partially overlaid on each other from desert scrub in high and low rainfall years, chaparral, sage scrub, oak savanna and coniferous forests with and without fire.
Results: We found that earlier empirical reports of more than 30% greater richness with rectangles were due to the confusion of shape effects with spatial effects, coupled with the use of cumulative number of species as the metric for comparison. Average species richness was not significantly different between square and 1:4 rectangular sample plots at either 1 or 100 m2. Pairwise comparisons showed no significant difference between square and rectangular samples in all but one vegetation type, and that one exhibited significantly greater richness with squares. Our three intensive study sites appear to exhibit some level of self-similarity at the scale of 400 m2, but, contrary to theoretical expectations, we could not detect plot shape effects on species richness at this scale.
Conclusions: At the 0.1-ha scale or lower there is no evidence that plot shape has predictable effects on the number of species recorded from sample plots. We hypothesize that for the mediterranean-climate vegetation types studied here, the primary reason that 1:4 rectangles do not sample greater species richness than squares is that species turnover varies along complex environmental gradients that are both parallel and perpendicular to the long axis of rectangular plots. Reports in the literature of much greater species richness recorded for highly elongated rectangular strips than for squares of the same area are not likely to be fair comparisons because of the dramatically different periphery/area ratio, which includes a much greater proportion of species that are using both above- and below-ground niche space outside the sample area.
Teratology studies in the rat.
Leroy, Mariline; Allais, Linda
2013-01-01
The rat is the rodent species of choice for the regulatory safety testing of xenobiotics, such as medicinal products, food additives, and other chemicals. Many decades of experience and extensive data have accumulated for both general and developmental toxicology investigations in this species. The high fertility and large litter size of the rat are advantages for teratogenicity testing. The study designs are well defined in the regulatory guidelines and are relatively standardized between testing laboratories across the world. Teratology studies address maternal- and embryo-toxicity following exposure during the period of organogenesis. This chapter describes the design and conduct of a teratology study in the rat in compliance with the regulatory guidelines. The procedures for the handling and housing of the pregnant animals, the caesarean examinations and the sampling of fetuses for morphological examinations are described. The utility and design of preliminary studies and the inclusion of satellite animals in the main study for toxicokinetic sampling are discussed.
A DUST-SETTLING CHAMBER FOR SAMPLING-INSTRUMENT COMPARISON STUDIES
Introduction: Few methods exist that can evenly and reproducibly deposit dusts onto surfaces for surface-sampling methodological studies. A dust-deposition chamber was designed for that purpose.
Methods: A 1-m3 Rochester-type chamber was modified to produce high airborne d...
NASA Astrophysics Data System (ADS)
Xiao, Deli; Zhang, Chan; He, Jia; Zeng, Rong; Chen, Rong; He, Hua
2016-12-01
A simple, accurate and high-throughput pretreatment method would facilitate large-scale studies of trace analysis in complex samples. Magnetic mixed hemimicelles solid-phase extraction has the potential to become a key pretreatment method in biological, environmental and clinical research. However, a lack of experimental predictability and an unclear extraction mechanism limit the development of this promising method. Herein, this work seeks to establish theory-based experimental designs for the extraction of trace analytes from complex samples using magnetic mixed hemimicelles solid-phase extraction. We selected three categories and six sub-types of compounds for a systematic comparative study of the extraction mechanism, and comprehensively illustrated for the first time the roles of the different forces involved (hydrophobic interaction, π-π stacking, hydrogen bonding, and electrostatic interaction). Application guidelines for supporting materials, surfactants and sample matrices are also summarized. The extraction mechanism and platform established in this study make theory-based experimental design a promising route to predictable and efficient pretreatment of trace analytes from environmental, biological and clinical samples.
Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA).
Vasconcellos, Mauricio Teixeira Leite de; Silva, Pedro Luis do Nascimento; Szklo, Moyses; Kuschnir, Maria Cristina Caetano; Klein, Carlos Henrique; Abreu, Gabriela de Azevedo; Barufaldi, Laura Augusta; Bloch, Katia Vergetti
2015-05-01
The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets with other municipalities in each macro-region of the country) and a sample of 1,251 schools was selected with probability proportional to size. In each school three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated by the product of the reciprocals of the inclusion probabilities in each sampling stage, and were later calibrated considering the projections of the numbers of adolescents enrolled in schools located in the geographical strata by sex and age.
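The weight construction described above is simple to state in code. A minimal sketch with invented stage-wise inclusion probabilities (not ERICA's actual values), before the calibration step:

```python
# Design weight as the product of reciprocals of the stage-wise
# inclusion probabilities (school, class within school, student).
# The probabilities below are illustrative only.
p_school, p_class, p_student = 0.08, 0.25, 1.0
design_weight = 1 / (p_school * p_class * p_student)
print(design_weight)   # each sampled student represents ~50 students
```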
A Sample Handling System for Mars Sample Return - Design and Status
NASA Astrophysics Data System (ADS)
Allouis, E.; Renouf, I.; Deridder, M.; Vrancken, D.; Gelmi, R.; Re, E.
2009-04-01
A mission to return atmosphere and soil samples from Mars is highly desired by planetary scientists from around the world, and space agencies are starting preparation for the launch of a sample return mission in the 2020 timeframe. Such a mission would return approximately 500 grams of atmosphere, rock and soil samples to Earth by 2025. Development of a wide range of new technology will be critical to the successful implementation of such a challenging mission. Technical developments required to realise the mission include guided atmospheric entry, soft landing, sample handling robotics, biological sealing, Mars atmospheric ascent, sample rendezvous and capture, and Earth return. The European Space Agency has been performing system definition studies along with numerous technology development studies under the framework of the Aurora programme. Within the scope of these activities, Astrium has been responsible for defining an overall sample handling architecture in collaboration with European partners (sample acquisition and sample capture, Galileo Avionica; sample containment and automated bio-sealing, Verhaert). Our work has focused on the definition and development of the robotic systems required to move the sample through the transfer chain. This paper presents the Astrium team's high-level design for the surface transfer system and the orbiter transfer system. The surface transfer system is envisaged to use two robotic arms of different sizes to allow flexible operations and to enable sample transfer over relatively large distances (~2 to 3 metres): the first to deploy/retract the Drill Assembly used for sample collection, the second for the transfer of the Sample Container (the vessel containing all the collected samples) from the Drill Assembly to the Mars Ascent Vehicle (MAV). The sample transfer actuator also features a complex end-effector for handling the Sample Container. The orbiter transfer system will transfer the Sample Container from the capture mechanism through a bio-sealing system to the Earth Return Capsule (ERC) and has distinctly different requirements from the surface transfer system. The operations required to transfer the samples to the ERC are clearly defined and make use of mechanisms specifically designed for the job rather than robotic arms. Though mechanical rather than robotic, the orbiter transfer system design is very complex in comparison to most previous missions, in order to fulfil all the scientific and technological requirements. Further mechanisms will be required to lock the samples into the ERC and to close the door at the rear of the ERC through which the samples have been inserted. Having performed this overall definition study, Astrium is now leading the next step of the development of the MSR sample handling: the Mars Surface Sample Transfer and Manipulation project (MSSTM). Organised in two phases, the project will re-evaluate in phase 1 the output of the previous study in the light of new inputs (e.g. addition of a rover) and investigate further the architectures and systems involved in the sample transfer chain while identifying the critical technologies. The second phase of the project will concentrate on the prototyping of a number of these key technologies with the goal of providing an end-to-end validation of the surface sample transfer concept.
Wickremsinhe, Enaksha R; Perkins, Everett J
2015-03-01
Traditional pharmacokinetic analysis in nonclinical studies is based on the concentration of a test compound in plasma and requires approximately 100 to 200 μL blood collected per time point. However, the total blood volume of mice limits the number of samples that can be collected from an individual animal-often to a single collection per mouse-thus necessitating dosing multiple mice to generate a pharmacokinetic profile in a sparse-sampling design. Compared with traditional methods, dried blood spot (DBS) analysis requires smaller volumes of blood (15 to 20 μL), thus supporting serial blood sampling and the generation of a complete pharmacokinetic profile from a single mouse. Here we compare plasma-derived data with DBS-derived data, explain how to adopt DBS sampling to support discovery mouse studies, and describe how to generate pharmacokinetic and pharmacodynamic data from a single mouse. Executing novel study designs that use DBS enhances the ability to identify and streamline better drug candidates during drug discovery. Implementing DBS sampling can reduce the number of mice needed in a drug discovery program. In addition, the simplicity of DBS sampling and the smaller numbers of mice needed translate to decreased study costs. Overall, DBS sampling is consistent with 3Rs principles by achieving reductions in the number of animals used, decreased restraint-associated stress, improved data quality, direct comparison of interanimal variability, and the generation of multiple endpoints from a single study.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost of surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees that the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
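For readers who want to experiment with this idea, the following is a minimal, self-contained sketch (not the authors' implementation) of adaptive design for surrogate construction: candidates are scored by a hybrid of distance to the existing design (exploration) and disagreement between the surrogate and its first-order Taylor expansion around the nearest design point (exploitation). The test function, equal score weights, RBF surrogate, and fixed point budget are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_model(x):
    # Stand-in for an expensive simulator (e.g., a groundwater model).
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

def surrogate_grad(surr, x, h=1e-4):
    # Central-difference gradient of the surrogate at each row of x.
    g = np.zeros_like(x)
    for j in range(x.shape[1]):
        e = np.zeros(x.shape[1])
        e[j] = h
        g[:, j] = (surr(x + e) - surr(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (5, 2))        # small initial design
y = expensive_model(X)
cand = rng.uniform(0, 1, (2000, 2))  # dense candidate pool

for _ in range(40):                  # add 40 points adaptively
    surr = RBFInterpolator(X, y)
    dists = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
    nn = dists.argmin(axis=1)        # nearest design point per candidate
    d = dists.min(axis=1)            # exploration: distance to the design
    # Exploitation: mismatch between the surrogate and its first-order
    # Taylor expansion around the nearest design point.
    g = surrogate_grad(surr, X)
    taylor = y[nn] + np.einsum('ij,ij->i', g[nn], cand - X[nn])
    t = np.abs(surr(cand) - taylor)
    score = d / d.max() + t / (t.max() + 1e-12)  # hybrid score (equal weights assumed)
    x_new = cand[[score.argmax()]]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_model(x_new))

print(f"final design size: {len(X)}")
```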
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
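A rough simulation along the lines described above illustrates the mechanism; all parameter values below (the true effect, the original study's sample size, and a crude significance filter standing in for publication bias) are assumptions for illustration, not figures from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d = 0.3            # true standardized mean difference (assumed)
n_orig = 40             # per-group n of the (underpowered) original study
alpha, target = 0.05, 0.80
z = stats.norm.ppf
z_crit = z(1 - alpha / 2)

powers = []
for _ in range(20000):
    # Sampling error of the original effect size (SE of d is roughly sqrt(2/n)).
    d_obs = rng.normal(true_d, np.sqrt(2 / n_orig))
    # Crude publication-bias filter: only significant originals get replicated.
    if d_obs * np.sqrt(n_orig / 2) < z_crit:
        continue
    # Per-group n for the replication, planned from the observed (inflated) d.
    n_rep = 2 * ((z_crit + z(target)) / d_obs) ** 2
    # Actual power of that replication under the true effect.
    ncp = true_d * np.sqrt(n_rep / 2)
    powers.append(1 - stats.norm.cdf(z_crit - ncp))

print(f"intended power: {target:.2f}; mean actual power: {np.mean(powers):.2f}")
```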
Effects of School Design on Student Outcomes
ERIC Educational Resources Information Center
Tanner, C. Kenneth
2009-01-01
Purpose: The purpose of this study is to compare student achievement with three school design classifications: movement and circulation, day lighting, and views. Design/methodology/approach: From a sample of 71 schools, measures of these three school designs, taken with a ten-point Likert scale, are compared to students' outcomes defined by six…
Better cancer biomarker discovery through better study design.
Rundle, Andrew; Ahsan, Habibul; Vineis, Paolo
2012-12-01
High-throughput laboratory technologies coupled with sophisticated bioinformatics algorithms have tremendous potential for discovering novel biomarkers, or profiles of biomarkers, that could serve as predictors of disease risk, response to treatment or prognosis. We discuss methodological issues in wedding high-throughput approaches for biomarker discovery with the case-control study designs typically used in biomarker discovery studies, especially focusing on nested case-control designs. We review principles for nested case-control study design in relation to biomarker discovery studies and describe how the efficiency of biomarker discovery can be affected by study design choices. We develop a simulated prostate cancer cohort data set and a series of biomarker discovery case-control studies nested within the cohort to illustrate how study design choices can influence the biomarker discovery process. Common elements of nested case-control design, namely incidence density sampling and matching of controls to cases, are not typically factored correctly into biomarker discovery analyses, inducing bias in the discovery process. We illustrate how incidence density sampling and matching of controls to cases reduce the apparent specificity of truly valid biomarkers 'discovered' in a nested case-control study. We also propose and demonstrate a new case-control matching protocol, which we call 'antimatching', that improves the efficiency of biomarker discovery studies. For valid but as yet undiscovered biomarkers, disjunctions between correctly designed epidemiologic studies and the practice of biomarker discovery reduce the likelihood that true biomarkers will be discovered and increase the false-positive discovery rate. © 2012 The Authors. European Journal of Clinical Investigation © 2012 Stichting European Society for Clinical Investigation Journal Foundation.
Spittal, Matthew J; Carlin, John B; Currier, Dianne; Downes, Marnie; English, Dallas R; Gordon, Ian; Pirkis, Jane; Gurrin, Lyle
2016-10-31
The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. We describe the sampling scheme used in Ten to Men focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure-outcome association, our advice is to adopt an analysis that respects the sampling design.
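The following toy sketch (on simulated data, not Ten to Men data) shows the mechanics of the two adjustments discussed above: weighting a prevalence estimate and computing a cluster-robust standard error by Taylor linearization over clusters. The cluster counts, weights, and outcome model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_clusters, m = 100, 20
cluster = np.repeat(np.arange(n_clusters), m)       # cluster id per respondent
u = rng.normal(0, 0.5, n_clusters)                  # cluster-level random effect
p = 1 / (1 + np.exp(-(-1.6 + u[cluster])))          # smoking propensity
y = rng.binomial(1, p)                              # smoker yes/no
w = rng.uniform(0.5, 2.0, y.size)                   # notional sampling weights

unweighted = y.mean()
weighted = np.sum(w * y) / np.sum(w)

# Cluster-robust SE of the weighted prevalence via Taylor linearization:
# sum the weighted residuals within clusters, then combine across clusters.
resid = w * (y - weighted)
cluster_sums = np.bincount(cluster, weights=resid)
var_w = n_clusters / (n_clusters - 1) * np.sum(cluster_sums ** 2) / np.sum(w) ** 2
se_naive = np.sqrt(weighted * (1 - weighted) / y.size)  # ignores weights/clustering

print(f"unweighted prevalence {unweighted:.3f}, weighted {weighted:.3f}")
print(f"naive SE {se_naive:.4f}, cluster-robust SE {np.sqrt(var_w):.4f}")
```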
Whitehead, John; Valdés-Márquez, Elsa; Lissmats, Agneta
2009-01-01
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright 2008 John Wiley & Sons, Ltd.
RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING
Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...
NASA Technical Reports Server (NTRS)
Wilson, J. L.
1974-01-01
A user's guide to the Sampled Data Stability Analysis Program (SADSAP) is provided. This is a general-purpose sampled-data stability analysis program capable of providing frequency response and root locus data.
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field-based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs, and the choice of three binary models, on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9% prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to unsampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference between choices of neighborhood, although the adaptive king neighborhood was preferred when transects were randomly placed throughout the spatial domain.
Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders
2017-10-01
Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support the detection of temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies by estimating the sample size required to detect an annual change of 5% with a statistical power of 80% or 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above criteria than the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object (e.g. a lake) and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to the sample sizes required, depending on sampling design, for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
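A stripped-down version of this kind of power simulation is sketched below, estimating the power of a log-linear trend test for a 5% annual decline as a function of samples per year; the variance parameter and the simple one-site error model are assumptions, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(10)          # a 10-year monitoring window
trend = np.log(0.95)           # 5% annual decline on the log scale
sd_log = 0.4                   # SD of log-concentration (assumed)

def power(n_per_year, n_sim=2000, alpha=0.05):
    hits = 0
    for _ in range(n_sim):
        x = np.repeat(years, n_per_year)
        y = trend * x + rng.normal(0, sd_log, x.size)
        fit = stats.linregress(x, y)
        # Count a "detection" when the slope is significantly negative
        # (linregress reports a two-sided p-value).
        hits += (fit.pvalue < alpha) and (fit.slope < 0)
    return hits / n_sim

for n in (2, 4, 8, 16):
    print(f"{n} samples/year: power = {power(n):.2f}")
```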
This protocol describes how quality control samples should be handled in the field, and was designed as a quick reference source for the field staff. The protocol describes quality control samples for air-VOCs, air-particles, water samples, house dust, soil, urine, blood, hair, a...
Methods for measuring populations of small, diurnal forest birds.
D.A. Manuwal; A.B. Carey
1991-01-01
Before a bird population is measured, the objectives of the study should be clearly defined. Important factors to be considered in designing a study are study site selection, plot size or transect length, distance between sampling points, duration of counts, and frequency and timing of sampling. Qualified field personnel are especially important. Assumptions applying...
Performing skin microbiome research: A method to the madness
Kong, Heidi H.; Andersson, Björn; Clavel, Thomas; Common, John E.; Jackson, Scott A.; Olson, Nathan D.; Segre, Julia A.; Traidl-Hoffmann, Claudia
2017-01-01
Growing interest in microbial contributions to human health and disease has increasingly led investigators to examine the microbiome in both healthy skin and cutaneous disorders, including acne, psoriasis and atopic dermatitis. The need for a common language, effective study design, and validated methods is critical for high-quality, standardized research. Features unique to skin pose particular challenges when conducting microbiome research. This review discusses microbiome research standards and highlights important factors to consider, including clinical study design, skin sampling, sample processing, DNA sequencing, control inclusion, and data analysis. PMID:28063650
Biomarkers for Pulmonary Injury Following Deployment
2013-02-01
...methods, please refer to the 2012 Annual Report for this contract. Rat BALF samples were denatured, digested, and desalted prior to liquid chromatography...that were part of the dust instillation studies designed by USACHER and carried out by a team at NIOSH. These samples were obtained both from a first
Designing clinical trials for amblyopia
Holmes, Jonathan M.
2015-01-01
Randomized clinical trial (RCT) study design leads to one of the highest levels of evidence, and is a preferred study design over cohort studies, because randomization reduces bias and maximizes the chance that even unknown confounding factors will be balanced between treatment groups. Recent randomized clinical trials and observational studies in amblyopia can be taken together to formulate an evidence-based approach to amblyopia treatment, which is presented in this review. When designing future clinical studies of amblyopia treatment, issues such as regression to the mean, sample size and trial duration must be considered, since each may impact study results and conclusions. PMID:25752747
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., the variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.
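As a toy illustration of one of these performance measures, the sketch below compares the fraction of sampled units occupied by a rare species under SRS and under a crude two-phase scheme that directs half the effort to units with high auxiliary (community) values. The population model and allocation rule are invented and much simpler than the TSSAV design itself.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 10_000, 200
aux = rng.gamma(2.0, 1.0, N)                        # auxiliary: community abundance
occupied = rng.random(N) < 0.02 * aux / aux.mean()  # rare species tracks the auxiliary

top = np.argsort(aux)[-N // 4:]                     # top-quartile auxiliary units

srs_frac, adp_frac = [], []
for _ in range(2000):
    srs = rng.choice(N, n, replace=False)
    # Two-phase caricature: half the effort by SRS, half directed to units
    # with high auxiliary values (duplicates across phases are ignored here).
    adaptive = np.concatenate([rng.choice(N, n // 2, replace=False),
                               rng.choice(top, n // 2, replace=False)])
    srs_frac.append(occupied[srs].mean())
    adp_frac.append(occupied[adaptive].mean())

print(f"fraction of sampled units occupied: "
      f"SRS {np.mean(srs_frac):.4f} vs auxiliary-guided {np.mean(adp_frac):.4f}")
```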
Michael Arbaugh; Larry Bednar
1996-01-01
The sampling methods used to monitor ozone injury to ponderosa and Jeffrey pines depend on the objectives of the study, geographic and genetic composition of the forest, and the source and composition of air pollutant emissions. By using a standardized sampling methodology, it may be possible to compare conditions within local areas more accurately, and to apply the...
Chang, Xiaofeng; Bao, Xiaoying; Wang, Shiping; Zhu, Xiaoxue; Luo, Caiyun; Zhang, Zhenhua; Wilkes, Andreas
2016-05-15
The effects of climate change and human activities on grassland degradation and soil carbon stocks have become a focus of both research and policy. However, lack of research on appropriate sampling design prevents accurate assessment of soil carbon stocks and stock changes at community and regional scales. Here, we conducted an intensive survey with 1196 sampling sites over an area of 190 km² of degraded alpine meadow. Compared to lightly degraded meadow, soil organic carbon (SOC) stocks in moderately, heavily and extremely degraded meadow were reduced by 11.0%, 13.5% and 17.9%, respectively. Our field survey sampling design was more intensive than necessary to estimate SOC status within a tolerable uncertainty of 10%. Power analysis showed that the optimal sampling density to achieve the desired accuracy would be 2, 3, 5 and 7 sites per 10 km² for lightly, moderately, heavily and extremely degraded meadows, respectively. If a subsequent paired sampling design with the optimum sample size were performed, assuming stock change rates predicted by experimental and modeling results, we estimate that about 5-10 years would be necessary to detect expected trends in SOC in the top 20 cm soil layer. Our results highlight the utility of conducting preliminary surveys to estimate the appropriate sampling density, avoiding wasted resources due to over-sampling, and to estimate the sampling interval required to detect an expected sequestration rate. Future studies will be needed to evaluate spatial and temporal patterns of SOC variability. Copyright © 2016. Published by Elsevier Ltd.
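The sample-size logic used in such preliminary surveys can be sketched with the standard relative-error formula n ≈ (t * CV / r)^2, iterated because the t quantile depends on n. The CV values below are illustrative, not the study's estimates.

```python
import numpy as np
from scipy import stats

def required_n(cv, rel_error=0.10, conf=0.95, max_iter=50):
    # Sites needed so the half-width of a conf-level CI is within
    # rel_error of the mean, given a coefficient of variation cv.
    n = 2
    for _ in range(max_iter):
        t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
        n_new = max(int(np.ceil((t * cv / rel_error) ** 2)), 2)
        if abs(n_new - n) <= 1:   # close enough; take the larger value
            return max(n_new, n)
        n = n_new
    return n

for cv in (0.15, 0.25, 0.35):
    print(f"CV = {cv:.2f}: n = {required_n(cv)}")
```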
Network Sampling with Memory: A proposal for more efficient sampling from social networks.
Mouw, Ted; Verdery, Ashton M
2012-08-01
Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE): the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.
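The design effect itself is easy to estimate empirically for any sampling scheme: repeat the scheme many times and divide the variance of the resulting estimates by the variance under SRS of the same size. The sketch below does this for a crude random walk on a toy ring network (not the NSM procedure), which shows the kind of inflated DE that motivates the method.

```python
import numpy as np

rng = np.random.default_rng(5)
N, n = 2000, 200
# Toy network: ring lattice; the node attribute varies slowly along the ring,
# so a random walk tends to stay inside one homogeneous region.
attr = (np.sin(np.linspace(0, 8 * np.pi, N)) > 0).astype(float)
neighbors = {i: [(i - 1) % N, (i + 1) % N, (i + 7) % N, (i - 7) % N]
             for i in range(N)}

def random_walk_sample(start, steps):
    node, out = start, []
    for _ in range(steps):
        out.append(node)
        node = rng.choice(neighbors[node])
    return np.array(out)

rw_means = [attr[random_walk_sample(int(rng.integers(N)), n)].mean()
            for _ in range(1000)]
srs_means = [attr[rng.choice(N, n, replace=False)].mean() for _ in range(1000)]

de = np.var(rw_means) / np.var(srs_means)
print(f"design effect of the random walk vs SRS: {de:.1f}")
```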
Replication and contradiction of highly cited research papers in psychiatry: 10-year follow-up.
Tajika, Aran; Ogawa, Yusuke; Takeshima, Nozomi; Hayasaka, Yu; Furukawa, Toshi A
2015-10-01
Contradictions and initial overestimates are not unusual among highly cited studies. However, this issue has not been researched in psychiatry. The aim of this study was to assess how highly cited studies in psychiatry are replicated by subsequent studies. We selected highly cited studies claiming effective psychiatric treatments published in the years 2000 through 2002. For each of these studies we searched for subsequent studies with a better-controlled design, or with a similar design but a larger sample. Among 83 articles recommending effective interventions, 40 had not been subject to any attempt at replication, 16 were contradicted, 11 were found to have substantially smaller effects, and only 16 were replicated. The standardised mean differences of the initial studies were overestimated by 132%. Studies with a total sample size of 100 or more tended to produce replicable results. Caution is needed when a study with a small sample size reports a large effect. © The Royal College of Psychiatrists 2015.
Rast, Philippe; Hofer, Scott M.
2014-01-01
We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate the interplay among the number and spacing of occasions, total duration of the study, effect size, and error variance on power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to detect power ≥ .80 was non-linear, with rapidly decreasing sample sizes needed as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, between error variance and GRR, and due to parameter values that are largely out of bounds of actual study values. Power to detect change is generally low in the early phases (i.e., first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544
Understanding resilience in same-sex parented families: the work, love, play study
2010-01-01
Background While families headed by same-sex couples have achieved greater public visibility in recent years, there are still many challenges for these families in dealing with legal and community contexts that are not supportive of same-sex relationships. The Work, Love, Play study is a large longitudinal study of same-sex parents. It aims to investigate many facets of family life among this sample and examine how they change over time. The study focuses specifically on two key areas missing from the current literature: factors supporting resilience in same-sex parented families; and health and wellbeing outcomes for same-sex couples who undergo separation, including the negotiation of shared parenting arrangements post-separation. The current paper aims to provide a comprehensive overview of the design and methods of this longitudinal study and discuss its significance. Methods/Design The Work, Love, Play study is a mixed-design, three-wave, longitudinal cohort study of same-sex attracted parents. The sample includes lesbian, gay, bisexual and transgender parents in Australia and New Zealand (including single parents within these categories) caring for any children under the age of 18 years. The study will be conducted over six years from 2008 to 2014. Quantitative data are to be collected via three online surveys in 2008, 2010 and 2012 from the cohort of parents recruited in Wave 1. Qualitative data will be collected via interviews with purposively selected subsamples in 2012 and 2013. Data collection began in 2008 and 355 respondents to Wave 1 of the study have agreed to participate in future surveys. Work is currently underway to increase this sample size. The methods and survey instruments are described. Discussion This study will make an important contribution to the existing research on same-sex parented families. Strengths of the study design include the longitudinal method, which will allow understanding of changes over time within internal family relationships and social supports. Further, the mixed-method design enables triangulation of qualitative and quantitative data. A broad recruitment strategy has already enabled a large sample size with the inclusion of both gay men and lesbians. PMID:20211027
Detection of Bovine and Porcine Adenoviruses for Tracing the Source of Fecal Contamination
Maluquer de Motes, Carlos; Clemente-Casares, Pilar; Hundesa, Ayalkibet; Martín, Margarita; Girones, Rosina
2004-01-01
In this study, a molecular procedure for the detection of adenoviruses of animal origin was developed to evaluate the level of excretion of these viruses by swine and cattle and to design a test to facilitate the tracing of specific sources of environmental viral contamination. Two sets of oligonucleotides were designed, one to detect porcine adenoviruses and the other to detect bovine and ovine adenoviruses. The specificity of the assays was assessed in 31 fecal samples and 12 sewage samples that were collected monthly during a 1-year period. The data also provided information on the environmental prevalence of animal adenoviruses. Porcine adenoviruses were detected in 17 of 24 (70%) pools of swine samples studied, with most isolates being closely related to serotype 3. Bovine adenoviruses were present in 6 of 8 (75%) pools studied, with strains belonging to the genera Mastadenovirus and Atadenovirus and being similar to bovine adenoviruses of types 2, 4, and 7. These sets of primers produced negative results in nested PCR assays when human adenovirus controls and urban-sewage samples were tested. Likewise, the sets of primers previously designed for detection of human adenovirus also produced negative results with animal adenoviruses. These results indicate the importance of further studies to evaluate the usefulness of these tests to trace the source of fecal contamination in water and food and for environmental studies. PMID:15006765
[Saarland Growth Study: sampling design].
Danker-Hopfe, H; Zabransky, S
2000-01-01
The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in paediatric outpatient care. The construction of such references is based on the collection of extensive data. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large representative cross-sectional sample of the population; longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed-longitudinal references, which are a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts in use in paediatric practice are possibly no longer appropriate. The Saarland Growth Study therefore served two purposes: a) to create current regional reference data, and b) to create a database for future studies on secular trends in growth processes of children and adolescents from Saarland. The present contribution focuses on general remarks on the sampling design of (cross-sectional) growth surveys and its implications for the design of the present study.
ERIC Educational Resources Information Center
Plonsky, Luke
2013-01-01
This study assesses research and reporting practices in quantitative second language (L2) research. A sample of 606 primary studies, published from 1990 to 2010 in "Language Learning and Studies in Second Language Acquisition," was collected and coded for designs, statistical analyses, reporting practices, and outcomes (i.e., effect…
40 CFR 1065.1105 - Sampling system design.
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 1065.1105 Sampling system design. (a) General. We recommend that you design your SVOC batch... practical, adjust sampling times based on the emission rate of target analytes from the engine to obtain...
Klick, B; Nishiura, H; Leung, G M; Cowling, B J
2014-04-01
Both case-ascertained household studies, in which households are recruited after an 'index case' is identified, and household cohort studies, where a household is enrolled before the start of the epidemic, may be used to test and estimate the protective effect of interventions used to prevent influenza transmission. A simulation approach parameterized with empirical data from household studies was used to evaluate and compare the statistical power of four study designs: a cohort study with routine virological testing of household contacts of an infected index case, a cohort study where only household contacts with acute respiratory illness (ARI) are sampled for virological testing, a case-ascertained study with routine virological testing of household contacts, and a case-ascertained study where only household contacts with ARI are sampled for virological testing. We found that a case-ascertained study with ARI-triggered testing would be the most powerful design, while a cohort design testing only household contacts with ARI was the least powerful. Sensitivity analysis demonstrated that these conclusions varied with model parameters, including the serial interval and the risk of influenza virus infection from outside the household.
Watershed-based survey designs
Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.
2005-01-01
Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
Best practices for hybridization design in two-colour microarray analysis.
Knapen, Dries; Vergauwen, Lucia; Laukens, Kris; Blust, Ronny
2009-07-01
Two-colour microarrays are a popular platform of choice in gene expression studies. Because two different samples are hybridized on a single microarray, and several microarrays are usually needed in a given experiment, there are many possible ways to combine samples on different microarrays. The actual combination employed is commonly referred to as the 'hybridization design'. Different types of hybridization designs have been developed, all aimed at optimizing the experimental setup for the detection of differentially expressed genes while coping with technical noise. Here, we first provide an overview of the different classes of hybridization designs, discussing their advantages and limitations, and then we illustrate the current trends in the use of different hybridization design types in contemporary research.
Sampling and data handling methods for inhalable particulate sampling. Final report nov 78-dec 80
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, W.B.; Cushing, K.M.; Johnson, J.W.
1982-05-01
The report reviews the objectives of a research program on sampling and measuring particles in the inhalable particulate (IP) size range in emissions from stationary sources, and describes methods and equipment required. A computer technique was developed to analyze data on particle-size distributions of samples taken with cascade impactors from industrial process streams. Research in sampling systems for IP matter included concepts for maintaining isokinetic sampling conditions, necessary for representative sampling of the larger particles, while flowrates in the particle-sizing device were constant. Laboratory studies were conducted to develop suitable IP sampling systems with overall cut diameters of 15 micrometers and conforming to a specified collection efficiency curve. Collection efficiencies were similarly measured for a horizontal elutriator. Design parameters were calculated for horizontal elutriators to be used with impactors, the EPA SASS train, and the EPA FAS train. Two cyclone systems were designed and evaluated. Tests on an Andersen Size Selective Inlet, a 15-micrometer precollector for high-volume samplers, showed its performance to be within the proposed limits for IP samplers. A stack sampling system was designed in which the aerosol is diluted in flow patterns and with mixing times simulating those in stack plumes.
Razmi, Rasoul; Shahpari, Behrouz; Pourbasheer, Eslam; Boustanifar, Mohammad Hasan; Azari, Zhila; Ebadi, Amin
2016-11-01
A rapid and simple method for the extraction and preconcentration of ceftazidime in aqueous samples has been developed using dispersive liquid-liquid microextraction followed by high-performance liquid chromatography analysis. The extraction parameters, such as the volume of extraction solvent and disperser solvent, salt effect, sample volume, centrifuge rate, centrifuge time, extraction time, and temperature in the dispersive liquid-liquid microextraction process, were studied and optimized with experimental design methods. First, the Taguchi design was used for preliminary screening of the parameters; a fractional factorial design was then used to optimize the significant factors. At the optimum conditions, the calibration curves for ceftazidime showed good linearity over the range of 0.001-10 μg/mL with correlation coefficients higher than 0.98, and the limits of detection were 0.13 and 0.17 ng/mL for water and urine samples, respectively. The proposed method was successfully employed to determine ceftazidime in water and urine samples, and good agreement between the experimental data and predicted values was achieved. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J
2010-09-01
Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases could be of interest for sample size calculation and for providing the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and D was greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates as for a simple random sampling design. The report of ICC and D values is of great help in planning and designing two-stage sampling studies. 2010 Elsevier B.V. All rights reserved.
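These figures are consistent with the standard approximation D = 1 + (m_bar - 1) * ICC, where m_bar is the average cluster (herd) size; with 813 animals in 48 herds, m_bar is about 16.9, which reproduces the reported D values to within rounding. A small sketch:

```python
# Design effect and effective sample size from the ICC under two-stage
# sampling, using the standard approximation D = 1 + (m_bar - 1) * ICC.
# ICC values are from the abstract above; the rest follows from them.
def design_effect(icc, m_bar):
    return 1 + (m_bar - 1) * icc

def effective_sample_size(n_total, icc, m_bar):
    # An SRS of this size gives the same precision as the clustered sample.
    return n_total / design_effect(icc, m_bar)

n_total, n_herds = 813, 48
m_bar = n_total / n_herds  # ~16.9 animals per herd

for disease, icc in [("BVD", 0.31), ("IBR", 0.18),
                     ("leptospirosis", 0.22), ("neosporosis", 0.11)]:
    d = design_effect(icc, m_bar)
    eff = effective_sample_size(n_total, icc, m_bar)
    print(f"{disease}: D = {d:.2f}, effective n = {eff:.0f}")
```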
Lorber, M.; Johnson, Kevin; Kross, B.; Pinsky, P.; Burmeister, L.; Thurman, M.; Wilkins, A.; Hallberg, G.
1997-01-01
In 1988, the Iowa Department of Natural Resources, along with the University of Iowa, conducted the Statewide Rural Well Water Survey, commonly known as SWRL. A total of 686 private rural drinking water wells was selected by use of a probability sample and tested for pesticides and nitrates. Sixty-eight of these wells, the '10% repeat' wells, were additionally sampled in October 1990 and June 1991. Starting in November 1991, the University of Iowa, with sponsorship from the United States Environmental Protection Agency, revisited these wells to begin a study of the temporal variability of atrazine and nitrates in wells. Other wells, which had originally tested positive for atrazine in SWRL but were not in the 10% repeat population, were added to the study population. Temporal sampling for a year-long period began in February of 1992 and concluded in January of 1993. All wells were sampled monthly, one subset was sampled weekly, and a second subset was sampled for 14-day consecutive periods. Two unique aspects of this study were the use of an immunoassay technique to screen for triazines before gas chromatography/mass spectrometry (GC/MS) analysis and quantification of atrazine, and the use of well owners to sample the wells. A total of 1771 samples from 83 wells are in the final database for this study. This paper reviews the study design, the analytical methodologies, and the development of the database. A companion paper discusses the analysis of the data from this survey.
DOT National Transportation Integrated Search
1989-02-01
This report presents a study undertaken to determine the life cycle, load characteristics, and associated costs of a representative sample of the oldest rigid and flexible pavements designed in Louisiana (1963-1967) using the AASHO Guide for Design...
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to achieve both simultaneously. However, the randomness in a CS under-sampling trajectory designed using the traditional variable-density (VD) scheme may translate into uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually requires multiple adjustments of the probability density function (PDF) parameters, and multiple reconstructions even with a fixed PDF, which is impractical for DCE-MRI. In this paper, an under-sampling trajectory design that is robust both to changes in the PDF parameters and to the randomness of a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
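A minimal sketch of such a segmented trajectory for a 1-D (phase-encode) mask is given below: a fully sampled low-frequency core plus variable-density random selection of the remaining lines. The matrix size, reduction factor, core width, and 1/|ky| density are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
ny, accel, core_frac = 256, 4, 0.08      # phase-encode lines, reduction factor, core size

ky = np.arange(ny) - ny // 2             # centered k-space coordinates
core = np.abs(ky) <= core_frac * ny / 2  # always-acquired low-frequency lines

n_total = ny // accel                    # total lines we may acquire
n_random = n_total - core.sum()          # lines left for random selection

# Variable-density PDF over the remaining (high-frequency) lines: ~1/|ky|.
pdf = 1.0 / (np.abs(ky) + 1.0)
pdf[core] = 0
pdf /= pdf.sum()

picked = rng.choice(ny, size=n_random, replace=False, p=pdf)
mask = core.copy()
mask[picked] = True
print(f"acquired {mask.sum()} of {ny} lines (R = {ny / mask.sum():.1f})")
```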
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
Chen, Zhijian; Craiu, Radu V; Bull, Shelley B
2014-11-01
In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.
National accident sampling system sample design, phases 2 and 3 : executive summary
DOT National Transportation Integrated Search
1979-11-01
This report describes the Phase 2 and 3 sample design for the : National Accident Sampling System (NASS). It recommends a procedure : for the first-stage selection of Primary Sampling Units (PSU's) and : the second-stage design for the selection of a...
Designing an Elective Course on Gelotology
ERIC Educational Resources Information Center
Haynes, Gene C.
2016-01-01
The purpose of this capstone project was to design a course description on gelotology, the study of laughter, at a XYZ Institute. The course provides a detailed analysis of the background of gelotology, how the course was designed and how to put the course into application at the capstone site. The course was designed using a sample curriculum as…
Using Monte Carlo Simulations to Determine Power and Sample Size for Planned Missing Designs
ERIC Educational Resources Information Center
Schoemann, Alexander M.; Miller, Patrick; Pornprasertmanit, Sunthud; Wu, Wei
2014-01-01
Planned missing data designs allow researchers to increase the amount and quality of data collected in a single study. Unfortunately, the effect of planned missing data designs on power is not straightforward. Under certain conditions using a planned missing design will increase power, whereas in other situations using a planned missing design…
A Study Investigating Indian Middle School Students' Ideas of Design and Designers
ERIC Educational Resources Information Center
Ara, Farhat; Chunawala, Sugra; Natarajan, Chitra
2011-01-01
This paper reports on an investigation into middle school students' naive ideas about, and attitudes towards design and designers. The sample for the survey consisted of students from Classes 7 to 9 from a school located in Mumbai. The data were analysed qualitatively and quantitatively to look for trends in students' responses. Results show that…
Order of stimulus presentation influences children's acquisition in receptive identification tasks.
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-03-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.
77 FR 15092 - U.S. Energy Information Administration; Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... conducted under this clearance will generally be methodological studies of 500 cases or less. The samples... the methodological design, sampling procedures (where possible) and questionnaires of the full scale...
Does Marital Status Influence the Parenting Styles Employed by Parents?
ERIC Educational Resources Information Center
Ashiono, Benard Litali; Mwoma, Teresa B.
2015-01-01
The current study sought to establish whether parents' marital status influences their use of specific parenting styles in Kisauni District, Kenya. A correlational research design was employed to carry out this study. A stratified sampling technique was used to select preschools, while a purposive sampling technique was used to select preschool…
The Role of Flushing Dental Waterlines for the Removal of Microbial Contaminants - MCEARD
Objectives. This study was designed to determine the role of flushing dental water lines for the removal of heterotrophic plate count bacteria, Legionella spp., and free-living protozoa. Methods. Forty dental offices were surveyed in the study. An initial sample and a sample tak...
A network of stream-sampling sites was developed for the Mid-Atlantic Coastal Plain (New Jersey through North Carolina) in a collaborative study between the U.S. Environmental Protection Agency and the U.S. Geological Survey. A stratified random sampling with unequal weighting was u...
Resource Utilisation and Curriculum Implementation in Community Colleges in Kenya
ERIC Educational Resources Information Center
Kigwilu, Peter Changilwa; Akala, Winston Jumba
2017-01-01
The study investigated how Catholic-sponsored community colleges in Nairobi utilise the existing physical facilities and teaching and learning resources for effective implementation of Artisan and Craft curricula. The study adopted a mixed methods research design. Proportional stratified random sampling was used to sample 172 students and 18…
A Retrospective Look at Website Accessibility over Time
ERIC Educational Resources Information Center
Hackett, Stephanie; Parmanto, Bambang; Zeng, Xiaoming
2005-01-01
Websites were retrospectively analysed to study the effects that technological advances in web design have had on accessibility for persons with disabilities. A random sample of general websites and a convenience sample of US government websites were studied and compared for the years 1997-2002. Web accessibility barrier (WAB) and complexity…
Water quality monitoring of Sweetwater and Loveland reservoirs--Phase one results 1998-1999
Majewski, Michael S.; Sidhu, Jagdeep S.; Mendez, Gregory O.
2002-01-01
In 1998, the U.S. Geological Survey began a study to assess the overall health of the watershed feeding the Sweetwater Reservoir in southern San Diego County, California. The study focussed on monitoring for organic chemical contamination and the effects of construction and operation of State Route 125 on water quality. Three environmental compartments (air, water, and bed sediments) are being sampled regularly for chemical contaminants, including volatile organic compounds, polynuclear aromatic hydrocarbons, polychlorinated biphenyls, pesticides, and major and trace elements. The study is divided into two phases. Phase I sampling is designed to establish baseline conditions for target compounds in terms of detection frequency and concentration in air, water, and bed sediments. Phase II sampling will continue at the established monitoring sites during and after construction of State Route 125 to assess chemical impact on water quality in the reservoir resulting from land-use changes and development in the watershed. This report describes the study design, the sampling and analytical methods, and presents the data results for the first year of the study, September 1998 to September 1999.
Chemical pump study for Pioneer Venus program
NASA Technical Reports Server (NTRS)
Rotheram, M.
1973-01-01
Two chemical pumps were designed for the Pioneer Venus large probe mass spectrometer. Factors involved in the design selection are reviewed. One pump is designed to process a sample of the Venus atmosphere to remove the major component, carbon dioxide, so that the minor, inert components may be measured with greater sensitivity. The other pump is designed to promote flow of atmospheric gas through a pressure reduction inlet system. This pump, located downstream from the mass spectrometer sampling point, provides the pressure differential required for flow through the inlet system. Both pumps utilize the reaction of carbon dioxide with lithium hydroxide. The available data for this reaction were reviewed with respect to the proposed applications, and certain deficiencies in reaction rate data at higher carbon dioxide pressures were noted. The chemical pump designed for the inert gas experiment has an estimated volume of 30 cu cm and a weight of 80 grams, exclusive of the four valves required for its operation. The chemical pump for the pressure reduction inlet system is designed for a total sample of 0.3 bar liter during the Venus descent.
van Breukelen, Gerard J P; Candel, Math J J M
2018-06-10
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It is still highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
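The maximin logic is straightforward to illustrate by brute force. The sketch below is not the paper's closed-form MMD: it grid-searches cluster counts and cluster sizes under a budget with treatment-dependent costs and keeps the allocation whose worst-case variance over an assumed treatment-to-control variance ratio range is smallest. All costs, the budget, and the ICC are invented.

```python
# Brute-force maximin allocation for a two-arm cluster randomized trial.
import itertools
import numpy as np

budget = 10_000.0
cost_cluster = {"T": 120.0, "C": 80.0}       # cost per cluster, by arm
cost_person = {"T": 12.0, "C": 8.0}          # cost per person, by arm
var_ratio_range = np.linspace(0.5, 2.0, 16)  # assumed sigma2_T / sigma2_C
icc = 0.05                                   # intraclass correlation

def arm_var(sigma2, n, k):
    # Variance of one arm's mean: between-cluster plus within-cluster part.
    return (icc * sigma2 + (1.0 - icc) * sigma2 / n) / k

best = None
for n_t, n_c in itertools.product(range(2, 41), repeat=2):
    for k_t in range(2, 200):
        spent = k_t * (cost_cluster["T"] + cost_person["T"] * n_t)
        if spent >= budget:
            break
        k_c = int((budget - spent) //
                  (cost_cluster["C"] + cost_person["C"] * n_c))
        if k_c < 2:
            continue
        # Worst-case variance of the effect estimate over the ratio range.
        worst = max(arm_var(r, n_t, k_t) + arm_var(1.0, n_c, k_c)
                    for r in var_ratio_range)
        if best is None or worst < best[0]:
            best = (round(worst, 5), n_t, k_t, n_c, k_c)

print("maximin choice (worst-case var, n_T, k_T, n_C, k_C):", best)
```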
Critique of a practice-based pilot study in chiropractic practices in Western Australia.
Amorin-Woods, Lyndon G; Parkin-Smith, Gregory F; Nedkoff, Lee; Fisher, Colleen
2016-01-01
Practice-based data collection can offer insight into the nature of chiropractic practice and contribute to resolving the conundrum of the chiropractic profession's role in contemporary healthcare, subsequently informing care service policy. However, there is little formal data available about chiropractic practice to inform decision-makers about the nature and role of chiropractic within a modern multidisciplinary healthcare context in Australia, particularly at the local and regional level. This was a mixed-methods, data-transformation (qualitative to quantitative) pilot study whose purpose was to provide a critique of the research design and to collect data from a selected sample of chiropractic practices in Western Australia, with a view to offering recommendations related to the design, feasibility and implementation of a future confirmatory study. A narrative critique of the research methods of this pilot study is offered in this paper, covering: (a) practice and patient recruitment, (b) enrollment of patients, (c) data collection methods, (d) acceptability of the study methods, (e) sample size calculations, and (f) design critique. This critique provides a sensible sample size estimate and recommendations as to the design and implementation of a future confirmatory study. Furthermore, we believe that a confirmatory study is not only feasible but indeed necessary, with a view to offering meaningful insight into chiropractic practice in Western Australia. ACTRN12616000434493, Australian New Zealand Clinical Trials Registry (ANZCTR). Registered 5 April 2016. First participant enrolled 1 July 2014; retrospectively registered.
Electrokinetic focusing injection methods on microfluidic devices.
Fu, Lung-Ming; Yang, Ruey-Jen; Lee, Gwo-Bin
2003-04-15
This paper presents an experimental and numerical investigation into electrokinetic focusing injection on microfluidic chips. The valving characteristics on microfluidic devices are controlled through appropriate manipulation of the electric potential strengths during the sample loading and dispensing steps. The present study also addresses the design and testing of various injection systems used to deliver a sample plug. A novel double-cross injection microfluidic chip is fabricated, which employs electrokinetic focusing to deliver sample plugs of variable volume. The proposed design combines several functions of traditional sample plug injection systems on a single microfluidic chip. The injection technique uses a unique sequence of loading steps with different electric potential distributions and magnitudes within the various channels to effectuate a virtual valve.
Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.
Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian
2014-01-01
In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
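The greedy incremental solution mentioned in the abstract can be sketched in a few lines: grow the direction set by repeatedly taking the candidate farthest in angle, modulo antipodal symmetry, from the points chosen so far. This shows only the greedy variant; the MILP and gradient-descent formulations are not reproduced here.

```python
# Greedy incremental spherical-code sketch for gradient direction design.
import numpy as np

rng = np.random.default_rng(0)
pool = rng.normal(size=(5000, 3))                 # dense candidate pool
pool /= np.linalg.norm(pool, axis=1, keepdims=True)

def greedy_spherical_code(pool, k):
    chosen = [0]                                  # arbitrary starting point
    # |dot| treats antipodal directions as identical; smaller = wider angle.
    max_abs_dot = np.abs(pool @ pool[0])
    for _ in range(k - 1):
        nxt = int(np.argmin(max_abs_dot))         # farthest-from-set candidate
        chosen.append(nxt)
        max_abs_dot = np.maximum(max_abs_dot, np.abs(pool @ pool[nxt]))
    return pool[chosen]

dirs = greedy_spherical_code(pool, 30)
gram = np.abs(dirs @ dirs.T)
np.fill_diagonal(gram, 0.0)
print("min angular separation:",
      round(float(np.degrees(np.arccos(gram.max()))), 2), "deg")
```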
Szerkus, O; Jacyna, J; Wiczling, P; Gibas, A; Sieczkowski, M; Siluk, D; Matuszewski, M; Kaliszan, R; Markuszewski, M J
2016-09-01
Fluoroquinolones are considered the gold standard for the prevention of bacterial infections after transrectal ultrasound-guided prostate biopsy. However, recent studies reported that fluoroquinolone-resistant bacterial strains are responsible for a gradually increasing number of infections after transrectal prostate biopsy. In daily clinical practice, antibacterial efficacy is evaluated only in vitro, by measuring the reaction of bacteria with an antimicrobial agent in culture media (i.e., calculation of the minimal inhibitory concentration). Such an approach, however, has no relation to the characteristics of the treated tissue and might be highly misleading. Thus, the objective of this study was to develop, using a Design of Experiments approach, a reliable, specific and sensitive ultra-high-performance liquid chromatography-diode array detection method for the quantitative analysis of levofloxacin in plasma and prostate tissue samples obtained from patients undergoing prostate biopsy. Moreover, a correlation study between concentrations observed in plasma samples and prostatic tissue samples was performed, resulting in better understanding, evaluation and optimization of fluoroquinolone-based antimicrobial prophylaxis during transrectal ultrasound-guided prostate biopsy. A Box-Behnken design was employed to optimize the chromatographic conditions of the isocratic elution program in order to obtain desirable retention time, peak symmetry and resolution of the levofloxacin and ciprofloxacin (internal standard) peaks. A fractional factorial design 2^(4-1) with four center points was used for screening of significant factors affecting levofloxacin extraction from the prostatic tissue. Due to the limited number of tissue samples, the prostatic sample preparation procedure was further optimized using a central composite design. The Design of Experiments approach was also utilized for evaluation of parameter robustness. The method was found linear over the range of 0.030-10 μg/mL for human plasma and 0.300-30 μg/g for human prostate tissue samples. The intra-day and inter-day variability for levofloxacin from both plasma and prostate samples were less than 10%, with accuracies between 93 and 108% of the nominal values. The limit of detection and the limit of quantification for human plasma were 0.01 μg/mL and 0.03 μg/mL, respectively. For the prostate tissue, the limit of detection and the limit of quantification were 0.1 μg/g and 0.3 μg/g, respectively. The average recoveries of levofloxacin were in the range from 99 to 106%. The method also fulfills robustness requirements, as determined and proved by the Design of Experiments approach. The developed method was successfully applied to examine prostate tissue and plasma samples from 140 hospitalized patients enrolled in the clinical study, 12 h after oral administration of LVF at a dose of 500 mg. The mean (±SD) LVF concentration was 6.22±3.52 μg/g in prostate and 2.54±1.14 μg/mL in plasma. Due to the simplicity of the method and the relatively small amount of sample needed for the assay, the method can be applied in clinical practice for monitoring of LVF concentrations in plasma and the prostate gland. Copyright © 2016 Elsevier B.V. All rights reserved.
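For readers unfamiliar with Box-Behnken designs, the design matrix is simple to construct by hand: every pair of factors is run at the +/-1 corners while the remaining factors sit at the center, plus replicated center points. The factor names below are illustrative placeholders, not the authors' actual chromatographic settings.

```python
# Hand-rolled three-factor Box-Behnken design matrix (coded levels).
import itertools
import numpy as np

def box_behnken(n_factors=3, n_center=3):
    runs = []
    # Edge midpoints: each factor pair at the +/-1 corners, others at 0.
    for i, j in itertools.combinations(range(n_factors), 2):
        for a, b in itertools.product((-1.0, 1.0), repeat=2):
            row = [0.0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0.0] * n_factors] * n_center)   # replicated center points
    return np.array(runs)

design = box_behnken()
factors = ["mobile phase B (initial %)", "flow rate", "column temperature"]
print(len(design), "runs")                        # 12 edge runs + 3 centers
for row in design:
    print(dict(zip(factors, row.tolist())))
```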
Stratified sampling design based on data mining.
Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung
2013-09-01
The objective was to explore classification rules, derived with data mining methodologies, to be used in defining strata for stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single-specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
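The two-step stratification described here maps naturally onto standard tooling. A hedged sketch with invented provider profiles: cluster on standardized variables with k-means, then fit a shallow decision tree to the cluster labels so the strata become simple threshold rules of the kind reported in the paper.

```python
# Sketch of stratification rules via k-means plus a decision tree.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Invented provider profiles: inpatients/specialist, beds, pop. density.
X = np.column_stack([
    rng.lognormal(3.0, 0.6, 500),
    rng.integers(0, 200, 500).astype(float),
    rng.lognormal(7.0, 1.0, 500),
])

# Step 1: group similar providers.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))

# Step 2: turn cluster membership into explicit stratification rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=[
    "inpatients_per_specialist", "beds", "pop_density"]))
```

Fitting the tree on the raw, unstandardized variables keeps the resulting thresholds in interpretable units.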
Holmes, Thomas D; Guilmette, Raymond A; Cheng, Yung Sung; Parkhurst, Mary Ann; Hoover, Mark D
2009-03-01
The Capstone Depleted Uranium (DU) Aerosol Study was undertaken to obtain aerosol samples resulting from a large-caliber DU penetrator striking an Abrams or Bradley test vehicle. The sampling strategy was designed to (1) optimize the performance of the samplers and maintain their integrity in the extreme environment created during perforation of an armored vehicle by a DU penetrator, (2) collect aerosols as a function of time post perforation, and (3) obtain size-classified samples for analysis of chemical composition, particle morphology, and solubility in lung fluid. This paper describes the experimental setup and sampling methodologies used to achieve these objectives. Custom-designed arrays of sampling heads were secured to the inside of the target in locations approximating the breathing zones of the crew locations in the test vehicles. Each array was designed to support nine filter cassettes and nine cascade impactors mounted with quick-disconnect fittings. Shielding and sampler placement strategies were used to minimize sampler loss caused by the penetrator impact and the resulting fragments of eroded penetrator and perforated armor. A cyclone train was used to collect larger quantities of DU aerosol for measurement of chemical composition and solubility. A moving filter sample was used to obtain semicontinuous samples for DU concentration determination. Control for the air samplers was provided by five remotely located valve control and pressure monitoring units located inside and around the test vehicle. These units were connected to a computer interface chassis and controlled using a customized LabVIEW engineering computer control program. The aerosol sampling arrays and control systems for the Capstone study provided the needed aerosol samples for physicochemical analysis, and the resultant data were used for risk assessment of exposure to DU aerosol.
Study design in medical research: part 2 of a series on the evaluation of scientific publications.
Röhrig, Bernd; du Prel, Jean-Baptist; Blettner, Maria
2009-03-01
The scientific value and informativeness of a medical study are determined to a major extent by the study design. Errors in study design cannot be corrected afterwards. Various aspects of study design are discussed in this article. Six essential considerations in the planning and evaluation of medical research studies are presented and discussed in the light of selected scientific articles from the international literature as well as the authors' own scientific expertise with regard to study design. The six main considerations for study design are the question to be answered, the study population, the unit of analysis, the type of study, the measuring technique, and the calculation of sample size. This article is intended to give the reader guidance in evaluating the design of studies in medical research. This should enable the reader to categorize medical studies better and to assess their scientific quality more accurately.
Blood Sampling and Preparation Procedures for Proteomic Biomarker Studies of Psychiatric Disorders.
Guest, Paul C; Rahmoune, Hassan
2017-01-01
A major challenge in proteomic biomarker discovery and validation for psychiatric diseases is the inherent biological complexity underlying these conditions. There are also many technical issues which hinder this process such as the lack of standardization in sampling, processing and storage of bio-samples in preclinical and clinical settings. This chapter describes a reproducible procedure for sampling blood serum and plasma that is specifically designed for maximizing data quality output in two-dimensional gel electrophoresis, multiplex immunoassay and mass spectrometry profiling studies.
NASA Astrophysics Data System (ADS)
Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.
2017-12-01
Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems or how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was gained that would be impossible or unsatisfactory with monolithic systems and how changes in design and context variables affected the overall mission value. Each study serves as a blueprint for how to conduct a Pre-Phase A study using these methods to learn more about the tradespace of a particular mission.
30 CFR 70.208 - Bimonthly sampling; designated areas.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Bimonthly sampling; designated areas. 70.208... SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures § 70.208 Bimonthly sampling; designated areas. (a) Each operator shall take one valid respirable dust sample from...
30 CFR 70.208 - Bimonthly sampling; designated areas.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Bimonthly sampling; designated areas. 70.208... SAFETY AND HEALTH MANDATORY HEALTH STANDARDS-UNDERGROUND COAL MINES Sampling Procedures § 70.208 Bimonthly sampling; designated areas. (a) Each operator shall take one valid respirable dust sample from...
Valder, Joshua F.; Delzer, Gregory C.; Price, Curtis V.; Sandstrom, Mark W.
2008-01-01
The National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey (USGS) began implementing Source Water-Quality Assessments (SWQAs) in 2002 that focus on characterizing the quality of source water and finished water of aquifers and major rivers used by some of the larger community water systems in the United States. As used for SWQA studies, source water is the raw (ambient) water collected at the supply well prior to water treatment (for ground water) or the raw (ambient) water collected from the river near the intake (for surface water). Finished water is the water that is treated, which typically involves, in part, the addition of chlorine or other disinfection chemicals to remove pathogens, and is ready to be delivered to consumers. Finished water is collected before the water enters the distribution system. This report describes the study design and percent recoveries of anthropogenic organic compounds (AOCs) with and without the addition of ascorbic acid to preserve water samples containing free chlorine. The percent recoveries were determined by using analytical results from a laboratory study conducted in 2004 by the USGS's National Water Quality Laboratory (NWQL) and from data collected during 2004-06 for a field study currently (2008) being conducted by the USGS's NAWQA Program. The laboratory study was designed to determine if preserving samples with ascorbic acid (quenching samples) adversely affects analytical performance under controlled conditions. During the laboratory study, eight samples of reagent water were spiked for each of five analytical schedules evaluated. Percent recoveries from these samples were then compared in two ways: (1) four quenched reagent spiked samples analyzed on day 0 were compared with four quenched reagent spiked samples analyzed on day 7 or 14, and (2) the combined eight quenched reagent spiked samples analyzed on day 0, 7, or 14 were compared with eight laboratory reagent spikes (LRSs). Percent recoveries from the quenched reagent spiked samples that were analyzed at two different times (day 0 and day 7 or 14) can be used to determine the stability of the quenched samples held for an amount of time representative of the normal amount of time between sample collection and analysis. The comparison between the quenched reagent spiked samples and the LRSs can be used to determine if quenching samples adversely affects the analytical performance under controlled conditions. The field study began in 2004 and is continuing today (February 2008) to characterize the effect of quenching on field-matrix spike recoveries and to better understand the potential oxidation and transformation of 277 AOCs. Three types of samples were collected from 11 NAWQA Study Units across the Nation: (1) quenched finished-water samples (not spiked), (2) quenched finished-water spiked samples, and (3) nonquenched finished-water spiked samples. Percent recoveries of AOCs in quenched and nonquenched finished-water spiked samples collected during 2004-06 are presented. Comparisons of percent recoveries between quenched and nonquenched spiked samples can be used to show how quenching affects finished-water samples. A maximum of 6 surface-water and 7 ground-water quenched finished-water spiked samples paired with nonquenched finished-water spiked samples were analyzed. 
Analytical results for the field study are presented in two ways: (1) by surface-water supplies or ground-water supplies, and (2) by use (or source) group category for surface-water and ground-water supplies. Graphical representations of percent recoveries for the quenched and nonquenched finished-water spiked samples also are presented.
ERIC Educational Resources Information Center
Riccobono, John A.; Cominole, Melissa B.; Siegel, Peter H.; Gabel, Tim J.; Link, Michael W.; Berkner, Lutz K.
This report describes the methods and procedures used for the 2000 National Postsecondary Student Aid Study (NPSAS:2000). NPSAS:2000 included notable changes from previous NPSAS surveys (conducted in 1987, 1990, 1993, and 1996) in its sample design and collection of data. For example, this study is the first to restrict institutional sampling to…
Shao, Jingyuan; Cao, Wen; Qu, Haibin; Pan, Jianyang; Gong, Xingchu
2018-01-01
The aim of this study was to present a novel analytical quality by design (AQbD) approach for developing an HPLC method to analyze herbal extracts. In this approach, critical method attributes (CMAs) and critical method parameters (CMPs) of the analytical method were determined using the same data collected from screening experiments. The HPLC-ELSD method for separation and quantification of sugars in Codonopsis Radix extract (CRE) samples and Astragali Radix extract (ARE) samples was developed as an example with the novel AQbD approach. Potential CMAs and potential CMPs were identified from the Analytical Target Profile. After the screening experiments, the retention time of the D-glucose peak of CRE samples, the signal-to-noise ratio of the D-glucose peak of CRE samples, and the retention time of the sucrose peak in ARE samples were considered CMAs. The initial and final composition of the mobile phase, the flow rate, and the column temperature were found to be CMPs using a standard partial regression coefficient method. The probability-based design space was calculated using a Monte-Carlo simulation method and verified by experiments. The optimized method was validated to be accurate and precise, and it was then applied in the analysis of CRE and ARE samples. The present AQbD approach is efficient and suitable for analytical targets with complex compositions.
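The probability-based design space calculation can be illustrated schematically: draw model coefficients from their estimated uncertainty, predict the CMA at each candidate CMP setting, and report the probability that specifications are met. The linear model, coefficient values, and acceptance limits below are invented for illustration.

```python
# Schematic Monte-Carlo probability-based design space calculation.
import numpy as np

rng = np.random.default_rng(7)
coef_mean = np.array([8.0, -2.0, 1.5])   # intercept, flow, temperature
coef_cov = np.diag([0.2, 0.05, 0.05])    # coefficient uncertainty (made up)

def prob_meeting_spec(flow, temp, lo=5.0, hi=9.0, n_draws=5000):
    # Propagate coefficient uncertainty to the predicted CMA
    # (here: a retention time) and count how often it stays in-spec.
    draws = rng.multivariate_normal(coef_mean, coef_cov, n_draws)
    rt = draws @ np.array([1.0, flow, temp])
    return float(np.mean((rt > lo) & (rt < hi)))

for flow in (0.8, 1.0, 1.2):
    for temp in (0.0, 0.5, 1.0):          # coded temperature levels
        p = prob_meeting_spec(flow, temp)
        flag = "<- design space" if p >= 0.90 else ""
        print(f"flow={flow:.1f} temp={temp:.1f} P(in spec)={p:.2f} {flag}")
```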
1979 Reserve Force Studies Surveys: Survey Design, Sample Design and Administrative Procedures,
1981-08-01
three factors: the need for a statistically significant number of usable questionnaires from different groups within the random samples and from...Because of the multipurpose nature of these surveys and the large number of questions needed to fully address some of the topics covered, we...varies. Collection of data at the unit level is needed to accurately estimate actual reserve compensation and benefits and their possible role in both
Q-Sample Construction: A Critical Step for a Q-Methodological Study.
Paige, Jane B; Morin, Karen H
2016-01-01
Q-sample construction is a critical step in Q-methodological studies. Prior to conducting Q-studies, researchers start with a population of opinion statements (the concourse) on a particular topic of interest, from which a sample is drawn. These sampled statements are known as the Q-sample. Although literature exists on methodological processes for conducting Q-methodological studies, limited guidance exists on the practical steps to reduce the population of statements to a Q-sample. A case exemplar illustrates the steps to construct a Q-sample in preparation for a study that explored the perspectives nurse educators and nursing students hold about simulation design. Experts in simulation and Q-methodology evaluated the Q-sample for readability, clarity, and representativeness of the opinions contained within the concourse. The Q-sample was piloted, and feedback resulted in statement refinement. Researchers, especially those undertaking Q-method studies for the first time, may benefit from the practical considerations for constructing a Q-sample offered in this article. © The Author(s) 2014.
A comparison of two sampling designs for fish assemblage assessment in a large river
Kiraly, Ian A.; Coghlan, Stephen M.; Zydlewski, Joseph D.; Hayes, Daniel
2014-01-01
We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.
The importance of replication in wildlife research
Johnson, D.H.
2002-01-01
Wildlife ecology and management studies have been widely criticized for deficiencies in design or analysis. Manipulative experiments--with controls, randomization, and replication in space and time--provide powerful ways of learning about natural systems and establishing causal relationships, but such studies are rare in our field. Observational studies and sample surveys are more common; they also require appropriate design and analysis. More important than the design and analysis of individual studies is metareplication: replication of entire studies. Similar conclusions obtained from studies of the same phenomenon conducted under widely differing conditions will give us greater confidence in the generality of those findings than would any single study, however well designed and executed.
Espin-Garcia, Osvaldo; Craiu, Radu V; Bull, Shelley B
2018-02-01
We evaluate two-phase designs to follow up findings from a genome-wide association study (GWAS) when the cost of regional sequencing in the entire cohort is prohibitive. We develop novel expectation-maximization-based inference under a semiparametric maximum likelihood formulation tailored for post-GWAS inference. A GWAS SNP (single nucleotide polymorphism) serves as a surrogate covariate in inferring association between a sequence variant and a normally distributed quantitative trait (QT). We assess test validity and quantify efficiency and power of joint QT-SNP-dependent sampling and analysis under alternative sample allocations by simulations. Joint allocation balanced on SNP genotype and extreme-QT strata yields significant power improvements compared to marginal QT- or SNP-based allocations. We illustrate the proposed method and evaluate the sensitivity of sample allocation to sampling variation using data from a sequencing study of systolic blood pressure. © 2017 The Authors. Genetic Epidemiology Published by Wiley Periodicals, Inc.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities.
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. © 2014 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
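A compact sketch of the MultSE quantity as the abstract describes it (centroid variability in dissimilarity space for a given sample size) follows. The formula is my reading of the paper's sums-of-squared-dissimilarities definition, so verify against the published R functions before relying on it.

```python
# Pseudo multivariate standard error (MultSE) sketch for community data.
import numpy as np
from scipy.spatial.distance import pdist

def mult_se(data, metric="braycurtis"):
    n = data.shape[0]
    d = pdist(data, metric=metric)       # pairwise dissimilarities
    ss = (d ** 2).sum() / n              # pseudo sum of squares
    v = ss / (n - 1)                     # pseudo multivariate variance
    return float(np.sqrt(v / n))

rng = np.random.default_rng(3)
community = rng.poisson(4.0, size=(20, 50))   # 20 samples x 50 species
for n in (5, 10, 20):
    print(n, "samples -> MultSE =", round(mult_se(community[:n]), 4))
```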
Information Processing in Auditory-Visual Conflict.
ERIC Educational Resources Information Center
Henker, Barbara A.; Whalen, Carol K.
1972-01-01
The present study used a set of bimodal (auditory-visual) conflict tasks designed specifically for the preschool child. The basic component was a match-to-sample sequence designed to reduce the often-found contaminating factors in studies with young children: failure to understand or remember instructions, inability to perform the indicator response, or…
40 CFR 79.68 - Salmonella typhimurium reverse mutation assay.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Potency of Extracts of Diesel and Related Environmental Emissions: Study Design, Sample Generation... the present time, TA1535, TA1537, TA98, and TA100 are designated as tester strains. The fifth strain... this study shall be in accordance with good laboratory practice provisions under § 79.60. (1) Direct...
40 CFR 79.68 - Salmonella typhimurium reverse mutation assay.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Potency of Extracts of Diesel and Related Environmental Emissions: Study Design, Sample Generation... the present time, TA1535, TA1537, TA98, and TA100 are designated as tester strains. The fifth strain... this study shall be in accordance with good laboratory practice provisions under § 79.60. (1) Direct...
40 CFR 79.68 - Salmonella typhimurium reverse mutation assay.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Potency of Extracts of Diesel and Related Environmental Emissions: Study Design, Sample Generation... the present time, TA1535, TA1537, TA98, and TA100 are designated as tester strains. The fifth strain... this study shall be in accordance with good laboratory practice provisions under § 79.60. (1) Direct...
40 CFR 79.68 - Salmonella typhimurium reverse mutation assay.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Potency of Extracts of Diesel and Related Environmental Emissions: Study Design, Sample Generation... the present time, TA1535, TA1537, TA98, and TA100 are designated as tester strains. The fifth strain... this study shall be in accordance with good laboratory practice provisions under § 79.60. (1) Direct...
40 CFR 79.68 - Salmonella typhimurium reverse mutation assay.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Potency of Extracts of Diesel and Related Environmental Emissions: Study Design, Sample Generation... the present time, TA1535, TA1537, TA98, and TA100 are designated as tester strains. The fifth strain... this study shall be in accordance with good laboratory practice provisions under § 79.60. (1) Direct...
A Population-Based Study of Preschoolers' Food Neophobia and Its Associations with Food Preferences
ERIC Educational Resources Information Center
Russell, Catherine Georgina; Worsley, Anthony
2008-01-01
Objective: This cross-sectional study was designed to investigate the relationships between food preferences, food neophobia, and children's characteristics among a population-based sample of preschoolers. Design: A parent-report questionnaire. Setting: Child-care centers, kindergartens, playgroups, day nurseries, and swimming centers. Subjects:…
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
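The small-sample pathology being corrected can be seen in a toy SCCS simulation: with a common risk-window fraction e, each case's events fall in the window with probability p = e*rho / (e*rho + 1 - e), and the pooled conditional MLE is rho_hat = X(1-e)/(Ye). The sketch below only exhibits the bias and the degenerate (X or Y = 0) samples; it does not implement the Firth or Cordeiro-McCullagh corrections.

```python
# Toy simulation of small-sample behavior in a simplified SCCS setting.
import numpy as np

rng = np.random.default_rng(11)
rho, e = 2.0, 0.1          # true rate ratio; short risk-window fraction

def mean_log_rr(n_cases, n_sims=20000):
    p = e * rho / (e * rho + 1.0 - e)    # P(event falls in risk window)
    logs = []
    for _ in range(n_sims):
        events = rng.poisson(1.0, n_cases) + 1   # every case has >= 1 event
        x = rng.binomial(events, p).sum()        # events inside risk windows
        y = events.sum() - x                     # events outside
        if x == 0 or y == 0:
            continue                             # MLE is infinite; dropped
        logs.append(np.log(x * (1.0 - e) / (y * e)))
    return np.mean(logs)

for n in (5, 20, 100):
    print(f"{n:4d} cases: mean log RR = {mean_log_rr(n):.3f} "
          f"(true log RR = {np.log(rho):.3f})")
```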
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Tataranni, Mariella; Lardicci, Claudio
2010-01-01
The aim of this study was to analyse the variability of four different benthic biotic indices (AMBI, BENTIX, H', M-AMBI) in two marine coastal areas of the North-Western Mediterranean Sea. In each coastal area, 36 replicates were randomly selected according to a hierarchical sampling design, which allowed estimation of the variance components of the indices associated with four different spatial scales (ranging from metres to kilometres). All the analyses were performed at two different sampling periods in order to evaluate whether the observed trends were consistent over time. The variance components of the four indices revealed complex trends and different patterns in the two sampling periods. These results highlighted that, independently of the employed index, a rigorous and appropriate sampling design taking into account different scales should always be used in order to avoid erroneous classifications and to develop effective monitoring programs.
A Comparison of Three Online Recruitment Strategies for Engaging Parents
Dworkin, Jodi; Hessel, Heather; Gliske, Kate; Rudi, Jessie H.
2017-01-01
Family scientists can face the challenge of effectively and efficiently recruiting normative samples of parents and families. Utilizing the Internet to recruit parents is a strategic way to find participants where they already are, enabling researchers to overcome many of the barriers to in-person recruitment. The present study was designed to compare three online recruitment strategies for recruiting parents: e-mail Listservs, Facebook, and Amazon Mechanical Turk (MTurk). Analyses revealed differences in the effectiveness and efficiency of data collection. In particular, MTurk resulted in the most demographically diverse sample, in a short period of time, with little cost. Listservs reached a large number of participants and resulted in a comparatively homogeneous sample. Facebook was not successful in recruiting a general sample of parents. Findings provide information that can help family researchers and practitioners be intentional about recruitment strategies and study design. PMID:28804184
Nationwide Drinking Water Sampling Campaign for Exposure Assessments in Denmark
Voutchkova, Denitza Dimitrova; Hansen, Birgitte; Ernstsen, Vibeke; Kristiansen, Søren Munch
2018-01-01
Nationwide sampling campaign of treated drinking water of groundwater origin was designed and implemented in Denmark in 2013. The main purpose of the sampling was to obtain data on the spatial variation of iodine concentration and speciation in treated drinking water, which was supplied to the majority of the Danish population. This data was to be used in future exposure and epidemiologic studies. The water supply sector (83 companies, owning 144 waterworks throughout Denmark) was involved actively in the planning and implementation process, which reduced significantly the cost and duration of data collection. The dataset resulting from this collaboration covers not only iodine species (I−, IO3−, TI), but also major elements and parameters (pH, electrical conductivity, DOC, TC, TN, F−, Cl−, NO3−, SO42−, Ca2+, Mg2+, K+, Na+) and a long list of trace elements (n = 66). The water samples represent 144 waterworks abstracting about 45% of the annual Danish groundwater abstraction for drinking water purposes, which supply about 2.5 million Danes (45% of all Danish residents). This technical note presents the design, implementation, and limitations of such a sampling design in detail in order (1) to facilitate the future use of this dataset, (2) to inform future replication studies, or (3) to provide an example for other researchers. PMID:29518987
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
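The adaptive loop can be caricatured as follows. This is a loose sketch of the general TEAD shape, not the authors' exact score function or stopping rule: candidates are scored by a hybrid of distance to the current design (exploration) and the discrepancy between the surrogate and its first-order Taylor expansion from the nearest sample (a curvature proxy), and the top-scoring candidate is simulated and added.

```python
# Loose sketch of a TEAD-style adaptive surrogate-training loop.
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_model(x):
    # Stand-in for a costly groundwater simulation.
    return np.sin(3.0 * x[:, 0]) * np.cos(2.0 * x[:, 1])

rng = np.random.default_rng(5)
X = rng.uniform(-1.0, 1.0, (8, 2))            # small initial design
y = expensive_model(X)
cands = rng.uniform(-1.0, 1.0, (2000, 2))     # candidate pool

for _ in range(20):                           # add 20 points adaptively
    surr = RBFInterpolator(X, y)
    dists = np.linalg.norm(cands[:, None, :] - X[None, :, :], axis=2)
    near = dists.argmin(axis=1)
    d_near = dists[np.arange(len(cands)), near]
    # Finite-difference gradient of the surrogate at each nearest sample.
    eps = 1e-4
    grad = np.stack(
        [(surr(X[near] + eps * np.eye(2)[k]) - surr(X[near])) / eps
         for k in range(2)], axis=1)
    taylor = y[near] + np.einsum("ij,ij->i", grad, cands - X[near])
    resid = np.abs(surr(cands) - taylor)      # first-order Taylor mismatch
    score = (d_near / d_near.max()
             + resid / (resid.max() + 1e-12))  # exploration + curvature
    pick = int(score.argmax())
    X = np.vstack([X, cands[pick]])
    y = np.append(y, expensive_model(cands[pick:pick + 1]))

print("final design size:", len(X))
```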
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... purpose of the proposed methodological study is to continue the Vanguard phase of the National Children's... health and development. In combination, these studies will be used to inform the design of the Main Study... and to inform the design of the Main Study. Methods Provider Based Sampling We will compile a list of...
ERIC Educational Resources Information Center
Ragusa, G.
2009-01-01
This study documents the home literacy experiences of children born with very low birth weight (VLBW). The study's design was modelled after Purcell-Gates' study of social domains mediated by print as home literacy experiences. A design combining purposeful sampling, semi-structured data collection and descriptive case study analysis was employed…
Preliminary design polymeric materials experiment. [for space shuttles and Spacelab missions
NASA Technical Reports Server (NTRS)
Mattingly, S. G.; Rude, E. T.; Marshner, R. L.
1975-01-01
A typical Advanced Technology Laboratory mission flight plan was developed and used as a guideline for the identification of a number of experiment considerations. The experiment logistics, beginning with sample preparation and ending with sample analysis, are then overlaid on the mission in order to have a complete picture of the design requirements. The results of this preliminary design study fall into two categories: first, specific preliminary designs of experiment hardware that are adaptable to a variety of mission requirements; second, identification of those mission considerations which affect hardware design and will require further definition prior to final design. Finally, a program plan is presented which will provide the necessary experiment hardware in a realistic time period to match the planned shuttle flights. A bibliography of all material reviewed and consulted but not specifically referenced is provided.
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
23 CFR Appendix A to Part 1340 - Sample Design
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false Sample Design A Appendix A to Part 1340 Highways... OBSERVATIONAL SURVEYS OF SEAT BELT USE Pt. 1340, App. A Appendix A to Part 1340—Sample Design Following is a description of a sample design that meets the final survey guidelines and, based upon NHTSA's experience in...
Kamlapure, Anand; Saraswat, Garima; Ganguli, Somesh Chandra; Bagwe, Vivas; Raychaudhuri, Pratap; Pai, Subash P
2013-12-01
We report the construction and performance of a low-temperature, high-field scanning tunneling microscope (STM) operating down to 350 mK and in magnetic fields up to 9 T, with thin film deposition and in situ single crystal cleaving capabilities. The main focus lies on the simple design of the STM head and a sample holder design that allows us to obtain spectroscopic data on superconducting thin films grown in situ on insulating substrates. Other design details on sample transport, the sample preparation chamber, and vibration isolation schemes are also described. We demonstrate the capability of our instrument through atomic resolution imaging and spectroscopy on a NbSe2 single crystal and spectroscopic maps obtained on a homogeneously disordered NbN thin film.
Empirical Validation of a Procedure to Correct Position and Stimulus Biases in Matching-to-Sample
ERIC Educational Resources Information Center
Kangas, Brian D.; Branch, Marc N.
2008-01-01
The development of position and stimulus biases often occurs during initial training on matching-to-sample tasks. Furthermore, without intervention, these biases can be maintained via intermittent reinforcement provided by matching-to-sample contingencies. The present study evaluated the effectiveness of a correction procedure designed to…
Effects of Concept Mapping Instruction Approach on Students' Achievement in Basic Science
ERIC Educational Resources Information Center
Ogonnaya, Ukpai Patricia; Okafor, Gabriel; Abonyi, Okechukwu S.; Ugama, J. O.
2016-01-01
The study investigated the effects of concept mapping on students' achievement in basic science. The study was carried out in Ebonyi State of Nigeria. The study employed a quasi-experimental design. Specifically the pretest posttest non-equivalent control group research design was used. The sample was 122 students selected from two secondary…
Factors Affecting Cheating-Behavior at Undergraduate-Engineering
ERIC Educational Resources Information Center
Starovoytova, Diana; Namango, Saul
2016-01-01
This study is part of a larger research project on cheating in exams at the School of Engineering (SOE). The study design used a descriptive survey approach and a document analysis. A purpose-designed confidential self-report questionnaire was the main instrument, administered to a sample of 100 subjects, with a response rate of 95%. The…
Sampling design by the core-food approach for the Taiwan total diet study on veterinary drugs.
Chen, Chien-Chih; Tsai, Ching-Lun; Chang, Chia-Chin; Ni, Shih-Pei; Chen, Yi-Tzu; Chiang, Chow-Feng
2017-06-01
The core-food (CF) approach, first adopted in the United States in the 1980s, has been widely used by many countries to assess exposure to dietary hazards at a population level. However, the reliability of exposure estimates (C × CR) depends critically on sampling methods designed so that the detected chemical concentrations (C) of each CF match the corresponding consumption rate (CR) estimated from the surveyed intake data. In order to reduce the uncertainty of food matching, this study presents a sampling design scheme, namely the subsample method, for the 2016 Taiwan total diet study (TDS) on veterinary drugs. We first combined the four sets of national dietary recall data that covered the entire age range (1-65+ years), and aggregated them into 307 CFs by their similarity in nutritional values, manufacturing and cooking methods. The 40 CFs pertinent to veterinary drug residues were selected for this study, and 16 subsamples for each CF were designed by weighting their quantities in CR, product brands, manufacturing, processing and cooking methods. The calculated food matching rates of each CF from this study were 84.3-97.3%, which were higher than those obtained from many previous studies using the representative food (RF) method (53.1-57.8%). The subsample method not only considers the variety of food processing and cooking methods, but also provides better food matching and reduces the uncertainty of exposure assessment.
Rahmani, Mashaallah; Kaykhaii, Massoud; Sasani, Mojtaba
2018-01-05
This study investigated the efficiency of 3A zeolite as a novel adsorbent for removal of Rhodamine B and Malachite green dyes from water samples. To increase the removal efficiency, the parameters affecting the adsorption process were investigated and optimized using the Taguchi design-of-experiments approach. The percentage contribution of each parameter to the removal of Rhodamine B and Malachite green was determined using ANOVA, which showed that the most effective parameters in the removal of RhB and MG by 3A zeolite are the initial dye concentration and pH, respectively. Under optimized conditions, the removal predicted by the Taguchi method and the value obtained experimentally agreed closely (more than 94.86%). The good adsorption efficiency obtained for the proposed method indicates that 3A zeolite is capable of removing significant amounts of Rhodamine B and Malachite green from environmental water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Adam, Tijjani; Hashim, U.
2017-03-01
Achieving optimum flow in a microchannel for sensing purposes is challenging. In this study, the fluid sample flows are optimized through the design and characterization of novel microfluidic architectures to achieve the optimal flow rate in the microchannels. The biocompatibility of the polydimethylsiloxane (Sylgard 184 silicone elastomer) polymer used to fabricate the device allows it to serve as a universal fluidic delivery system for biomolecule sensing in various biomedical applications. The study takes the following methodological approach: designing novel microfluidic architectures by integrating the devices on a single 4-inch silicon substrate, fabricating the designed microfluidic devices using a low-cost soft-lithography technique, and characterizing and validating the flow throughput of urine samples in the microchannels by generating pressure gradients through the devices' inlets. Characterization of the urine sample flows showed constant flow throughout the devices.
Lee, Kathy E.; Langer, Susan K.; Barber, Larry B.; Writer, Jeff H.; Ferrey, Mark L.; Schoenfuss, Heiko L.; Furlong, Edward T.; Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Martinovic, Dalma; Woodruff, Olivia R.; Keefe, Steffanie H.; Brown, Greg K.; Taylor, Howard E.; Ferrer, Imma; Thurman, E. Michael
2011-01-01
This report presents the study design, environmental data, and quality-assurance data for an integrated chemical and biological study of selected streams or lakes that receive wastewater-treatment plant effluent in Minnesota. This study was a cooperative effort of the U.S. Geological Survey, the Minnesota Pollution Control Agency, St. Cloud State University, the University of St. Thomas, and the University of Colorado. The objective of the study was to identify distribution patterns of endocrine-active chemicals, pharmaceuticals, and other organic and inorganic chemicals of concern indicative of wastewater effluent, and to identify biological characteristics of estrogenicity and fish responses in the same streams. The U.S. Geological Survey collected and analyzed water, bed-sediment, and quality-assurance samples, and measured or recorded streamflow once at each sampling location from September through November 2009. Sampling locations included surface water and wastewater-treatment plant effluent. Twenty-five wastewater-treatment plants were selected to include continuous flow and periodic release facilities with differing processing steps (activated sludge or trickling filters) and plant design flows ranging from 0.002 to 10.9 cubic meters per second (0.04 to 251 million gallons per day) throughout Minnesota in varying land-use settings. Water samples were collected from the treated effluent of the 25 wastewater-treatment plants and at one point upstream from and one point downstream from wastewater-treatment plant effluent discharges. Bed-sediment samples also were collected at each of the stream or lake locations. Water samples were analyzed for major ions, nutrients, trace elements, pharmaceuticals, phytoestrogens and pharmaceuticals, alkylphenols and other neutral organic chemicals, carboxylic acids, and steroidal hormones. A subset (25 samples) of the bed-sediment samples was analyzed for carbon, wastewater-indicator chemicals, and steroidal hormones; the remaining samples were archived. Biological characteristics were determined by using an in-vitro bioassay to determine total estrogenicity in water samples and a caged fish study to determine characteristics of fish from experiments that exposed fish to wastewater effluent in 2009. St. Cloud State University deployed and processed caged fathead minnows at 13 stream sites during September 2009 for the caged fish study. Measured fish data included length, weight, body condition factor, and vitellogenin concentrations.
NASA Astrophysics Data System (ADS)
Dasher, D. H.; Lomax, T. J.; Bethe, A.; Jewett, S.; Hoberg, M.
2016-02-01
A regional probabilistic survey of 20 randomly selected stations, where water and sediments were sampled, was conducted over an area of Simpson Lagoon and Gwydyr Bay in the Beaufort Sea adjacent to Prudhoe Bay, Alaska, in 2014. Sampling parameters included water-column temperature, salinity, dissolved oxygen, chlorophyll a, and nutrients, and sediment macroinvertebrates, chemistry (i.e., trace metals and hydrocarbons), and grain size. The 2014 probabilistic survey design allows inferences to be made about environmental status, for instance the spatial or areal distribution of sediment trace metals within the design area sampled. Historically, since the 1970s, a number of monitoring studies have been conducted in this estuary area using a targeted rather than a regional probabilistic design. Targeted non-random designs were utilized to assess specific points of interest and cannot be used to make inferences about distributions of environmental parameters. Because the environmental monitoring objectives of probabilistic and targeted designs differ, there has been limited assessment of whether benefits exist to combining the two approaches. This study evaluates whether a combined approach using the 2014 probabilistic survey sediment trace metal and macroinvertebrate results and historical targeted monitoring data can provide a new perspective on better understanding the environmental status of these estuaries.
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD), a new design of randomized clinical trials (RCTs), compared with the traditional parallel groups design, assuming various response time distributions. In the RPPD, at some point, all subjects receive the experimental therapy, and the exposure to placebo is for only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios, where the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size requirements to achieve the same level of power differed under different response-time distributions. The scenario where the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel groups RCT had higher power compared with the RPPD. The sample size requirement varies depending on the underlying hazard distribution. The RPPD requires more subjects to achieve a similar power to the parallel groups design. Copyright © 2011 Elsevier Inc. All rights reserved.
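As a concrete illustration of this kind of power simulation, the hedged Python sketch below simulates a two-arm parallel-group trial with exponential response times matched to the medians quoted above (355 days placebo, 42 days drug). It ignores censoring and the placebo-phase mechanics of the RPPD, and the rank test stands in for whatever analysis the authors implemented in R, so it is a toy, not a reconstruction of their program.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def exp_times(median, n):
    # Exponential response times with the given median: rate = ln(2) / median
    return rng.exponential(median / np.log(2), size=n)

def parallel_trial_power(n_per_arm, median_placebo=355, median_drug=42,
                         alpha=0.05, reps=2000):
    """Simulated power of a two-arm parallel trial comparing response times,
    using a rank test and ignoring censoring (a deliberate simplification)."""
    hits = 0
    for _ in range(reps):
        t_placebo = exp_times(median_placebo, n_per_arm)
        t_drug = exp_times(median_drug, n_per_arm)
        # Drug responses should be stochastically shorter
        p = stats.mannwhitneyu(t_drug, t_placebo, alternative="less").pvalue
        hits += p < alpha
    return hits / reps

for n in (5, 10, 20):
    print(n, parallel_trial_power(n))
```

With the medians this far apart, simulated power saturates quickly even at small sample sizes; the interest in the paper's richer setting comes from the placebo-phase structure and the shape of the response-time distribution.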
NASA Astrophysics Data System (ADS)
Borges de Sousa, P.; Morrone, M.; Hovenga, N.; Garion, C.; van Weelderen, R.; Koettig, T.; Bremer, J.
2017-12-01
The High-Luminosity upgrade of the Large Hadron Collider (HL-LHC) will increase the accelerator's luminosity by a factor of 10 beyond its original design value, giving rise to more collisions and generating an intense flow of debris. A new beam screen has been designed for the inner triplets that incorporates tungsten alloy blocks to shield the superconducting magnets and the 1.9 K superfluid helium bath from incoming radiation. These screens will operate between 60 K and 80 K and are designed to sustain a nominal heat load of 15 W/m, over 10 times the nominal heat load for the original LHC design. Their overall new and more complex design requires them and their constituent parts to be characterised from a thermal-performance standpoint. In this paper we describe the experimental parametric study carried out on two principal thermal components: a representative sample of the beam screen with a tungsten-based alloy block and thermal link, and the supporting structure composed of an assembly of ceramic spheres and titanium springs. Results from both studies are shown and discussed regarding their impact on the baseline considerations for the thermal design of the beam screens.
Mars Aerocapture Systems Study
NASA Technical Reports Server (NTRS)
Wright, Henry S.; Oh, David Y.; Westhelle, Carlos H.; Fisher, Jody L.; Dyke, R. Eric; Edquist, Karl T.; Brown, James L.; Justh, Hilary L.; Munk, Michelle M.
2006-01-01
Mars Aerocapture Systems Study (MASS) is a detailed study of the application of aerocapture to a large Mars robotic orbiter to assess and identify key technology gaps. The study addressed the use of an Opposition-class return segment in the Mars Sample Return architecture, covering mission architecture issues as well as system design. Key trade studies focused on the design of the aerocapture aeroshell, spacecraft design and packaging, guidance, navigation and control with simulation, computational fluid dynamics, and thermal protection system sizing. Detailed master equipment lists are included, as well as a cursory cost assessment.
Breck, Andrew; Goodman, Ken; Dunn, Lillian; Stephens, Robert L; Dawkins, Nicola; Dixon, Beth; Jernigan, Jan; Kakietek, Jakub; Lesesne, Catherine; Lessard, Laura; Nonas, Cathy; O'Dell, Sarah Abood; Osuji, Thearis A; Bronson, Bernice; Xu, Ye; Kettel Khan, Laura
2014-10-16
This article describes the multi-method cross-sectional design used to evaluate New York City Department of Health and Mental Hygiene's regulations of nutrition, physical activity, and screen time for children aged 3 years or older in licensed group child care centers. The Center Evaluation Component collected data from a stratified random sample of 176 licensed group child care centers in New York City. Compliance with the regulations was measured through a review of center records, a facility inventory, and interviews of center directors, lead teachers, and food service staff. The Classroom Evaluation Component included an observational and biometric study of a sample of approximately 1,400 children aged 3 or 4 years attending 110 child care centers and was designed to complement the center component at the classroom and child level. The study methodology detailed in this paper may aid researchers in designing policy evaluation studies that can inform other jurisdictions considering similar policies.
Gao, Jingjing; Nangia, Narinder; Jia, Jia; Bolognese, James; Bhattacharyya, Jaydeep; Patel, Nitin
2017-06-01
In this paper, we propose an adaptive randomization design for Phase 2 dose-finding trials to optimize Net Present Value (NPV) for an experimental drug. We replace the traditional fixed sample size design (Patel et al., 2012) with this new design to see if the NPV from the original paper can be improved. Comparison of the proposed design to the previous design is made via simulations using a hypothetical example based on a Diabetic Neuropathic Pain Study. Copyright © 2017 Elsevier Inc. All rights reserved.
Integrated flight/propulsion control system design based on a decentralized, hierarchical approach
NASA Technical Reports Server (NTRS)
Mattern, Duane; Garg, Sanjay; Bullard, Randy
1989-01-01
A sample integrated flight/propulsion control system design is presented for the piloted longitudinal landing task with a modern, statically unstable fighter aircraft. The design procedure is summarized. The vehicle model used in the sample study is described, and the procedure for partitioning the integrated system is presented along with a description of the subsystems. The high-level airframe performance specifications and control design are presented and the control performance is evaluated. The generation of the low-level (engine) subsystem specifications from the airframe requirements is discussed, and the engine performance specifications are presented along with the subsystem control design. A compensator to accommodate the influence of airframe outputs on the engine subsystem is also considered. Finally, the entire closed-loop system performance and stability characteristics are examined.
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and over the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
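To make the selection mechanic concrete, here is a hedged Python sketch of a single adaptive web-style sample: with probability d the next unit is traced through a link out of the active set, otherwise it is drawn conventionally from the remaining population. The network, the seed node, and the rule that every sampled node stays active are all hypothetical simplifications of the class of designs described above.

```python
import random

random.seed(3)

# Hypothetical undirected network: node -> set of neighbours
links = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1, 4}, 4: {3, 5}, 5: {4},
         6: set(), 7: {8}, 8: {7}, 9: set()}

def adaptive_web_sample(n, d=0.8, seed_node=0):
    """Sequentially select n nodes. With probability d, trace a link out
    of the current active set; otherwise draw at random from the rest.
    The parameter d controls the proportion of adaptive selections."""
    sample, active = [seed_node], {seed_node}
    while len(sample) < n:
        out_links = {j for i in active for j in links[i]} - set(sample)
        if out_links and random.random() < d:
            nxt = random.choice(sorted(out_links))      # adaptive: trace a link
        else:
            rest = set(links) - set(sample)
            nxt = random.choice(sorted(rest))           # conventional draw
        sample.append(nxt)
        active.add(nxt)
    return sample

print(adaptive_web_sample(6))
```

In Thompson's class of designs the active set and the mixture weights can be defined in several ways; the fixed rule above is only one simple choice, and d directly controls the fraction of effort allocated to adaptive selections.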
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
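A minimal grid-based sketch of one of these criteria (the average length criterion) for a single imperfect test with known sensitivity and specificity is given below in Python. The prior, test characteristics, and target width are hypothetical, and the paper's setting of multiple test combinations and three criteria is deliberately reduced to the simplest case.

```python
import numpy as np

rng = np.random.default_rng(11)
grid = np.linspace(0.001, 0.999, 999)   # grid over true prevalence p

def avg_ci_length(n, se=0.80, sp=0.95, prior_a=2, prior_b=18, reps=500):
    """Average length criterion for a prevalence study with one imperfect
    test (known sensitivity/specificity), via grid-based posteriors."""
    prior = grid**(prior_a - 1) * (1 - grid)**(prior_b - 1)   # Beta prior
    lengths = []
    for _ in range(reps):
        p = rng.beta(prior_a, prior_b)            # prior predictive draw
        q = p * se + (1 - p) * (1 - sp)           # apparent prevalence
        y = rng.binomial(n, q)
        q_grid = grid * se + (1 - grid) * (1 - sp)
        post = prior * q_grid**y * (1 - q_grid)**(n - y)
        post /= post.sum()
        cdf = np.cumsum(post)
        lo = grid[np.searchsorted(cdf, 0.025)]
        hi = grid[np.searchsorted(cdf, 0.975)]
        lengths.append(hi - lo)
    return np.mean(lengths)

# Average 95% credible interval length shrinks as the survey grows
for n in (100, 200, 400, 800):
    print(n, round(avg_ci_length(n), 3))
```

Scanning n upward until the average interval length drops below a target gives the required sample size for this single-test design; comparing such curves across test combinations, with per-test costs attached, is the essence of the cost minimization described above.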
Single-cell transcriptome conservation in cryopreserved cells and tissues.
Guillaumet-Adkins, Amy; Rodríguez-Esteban, Gustavo; Mereu, Elisabetta; Mendez-Lago, Maria; Jaitin, Diego A; Villanueva, Alberto; Vidal, August; Martinez-Marti, Alex; Felip, Enriqueta; Vivancos, Ana; Keren-Shaul, Hadas; Heath, Simon; Gut, Marta; Amit, Ido; Gut, Ivo; Heyn, Holger
2017-03-01
A variety of single-cell RNA preparation procedures have been described. So far, protocols require fresh material, which hinders complex study designs. We describe a sample preservation method that maintains transcripts in viable single cells, allowing one to disconnect time and place of sampling from subsequent processing steps. We sequence single-cell transcriptomes from >1000 fresh and cryopreserved cells using 3'-end and full-length RNA preparation methods. Our results confirm that the conservation process did not alter transcriptional profiles. This substantially broadens the scope of applications in single-cell transcriptomics and could lead to a paradigm shift in future study designs.
Badali, D. S.; Gengler, R. Y. N.; Miller, R. J. D.
2016-01-01
A compact electron source specifically designed for time-resolved diffraction studies of free-standing thin films and monolayers is presented here. The sensitivity to thin samples is achieved by extending the established technique of ultrafast electron diffraction to the “medium” energy regime (1–10 kV). An extremely compact design, in combination with low bunch charges, allows for high quality diffraction in a lensless geometry. The measured and simulated characteristics of the experimental system reveal sub-picosecond temporal resolution, while demonstrating the ability to produce high quality diffraction patterns from atomically thin samples. PMID:27226978
Cryogenic Liquid Sample Acquisition System for Remote Space Applications
NASA Technical Reports Server (NTRS)
Mahaffy, Paul; Trainer, Melissa; Wegel, Don; Hawk, Douglas; Melek, Tony; Johnson, Christopher; Amato, Michael; Galloway, John
2013-01-01
There is a need to autonomously acquire cryogenic hydrocarbon liquid samples from remote planetary locations, such as the lakes of Titan, for instruments such as mass spectrometers. Several problems had to be solved relative to collecting the right amount of cryogenic liquid sample into a warmer spacecraft, such as not allowing the sample to boil off or fractionate too early; controlling the intermediate and final pressures within carefully designed volumes; designing for various particulates and viscosities; designing to thermal, mass, and power-limited spacecraft interfaces; and reducing risk. Prior art inlets for similar instruments in spaceflight were designed primarily for atmospheric gas sampling and are not useful for this front-end application. These cryogenic liquid sample acquisition system designs for remote space applications allow for remote, autonomous, controlled sample collection of a range of challenging cryogenic sample types. The design can control the size of the sample, prevent fractionation, control pressures at various stages, and allow for various liquid sample levels. It is capable of collecting repeated samples autonomously in the difficult low-temperature conditions often found in planetary missions. It is capable of collecting samples for use by instruments from difficult sample types such as cryogenic hydrocarbon (methane, ethane, and propane) mixtures with solid particulates such as found on Titan. The design with a warm actuated valve is compatible with various spacecraft thermal and structural interfaces. The design uses controlled volumes, heaters, inlet and vent tubes, a cryogenic valve seat, inlet screens, temperature and cryogenic liquid sensors, seals, and vents to accomplish its task.
Chaemfa, Chakra; Wild, Edward; Davison, Brian; Barber, Jonathan L; Jones, Kevin C
2009-06-01
Polyurethane foam disks are a cheap and versatile tool for sampling persistent organic pollutants (POPs) from the air in ambient, occupational and indoor settings. This study provides important background information on the ways in which the performance of these commonly used passive air samplers may be influenced by the key environmental variables of wind speed and aerosol entrapment. Studies were performed in the field, in a wind tunnel and with microscopy techniques, to investigate how deployment conditions and foam density influence gas-phase sampling rates (not obtained in this study) and aerosol trapping. The study showed: wind speed inside the sampler is greater on the upper side of the sampling disk than the lower side, and tethered samplers have higher wind speeds across the upper and lower surfaces of the foam disk at wind speeds ≥4 m/s; particles are trapped on the foam surface and within the body of the foam disk; and fine (<1 μm) particles can form clusters of larger size inside the foam matrix. Whilst primarily designed to sample gas-phase POPs, entrapment of particles ensures some 'sampling' of particle-bound POP species, such as higher-molecular-weight PAHs and PCDD/Fs. Further work is required to investigate how quantitative such entrapment or 'sampling' is under different ambient conditions, and with different aerosol sizes and types.
Improving the design of maintenance studies for bipolar disorder.
Gitlin, Michael J; Abulseoud, Osama; Frye, Mark A
2010-08-01
In contrast to the trial design of acute mania studies, there is no standard design for bipolar maintenance studies. Over the past 15 years, the design of monotherapy maintenance studies in bipolar disorder has evolved significantly, but recent study designs continue to differ in important ways. We reviewed the design of recent controlled bipolar maintenance studies, using PubMed, from August 2006 to August 2009, examining the strengths and weaknesses of different study design features. Design differences are sufficiently important that the disparate results across maintenance studies may reflect either true differences in medication efficacy or the effects of these design differences on outcome. Design elements such as recent episode polarity, stabilization criteria, using enriched versus nonenriched samples, length of stabilization before randomization, length of experimental phase, and recurrence outcome criteria are critical factors that differ widely across studies and likely play a role in study outcome. As consensus for trial designs for bipolar maintenance therapy is developed, it will be easier to develop algorithms for maintenance treatment based on results from studies as opposed to clinical opinions.
Optimal sampling for radiotelemetry studies of spotted owl habitat and home range.
Andrew B. Carey; Scott P. Horton; Janice A. Reid
1989-01-01
Radiotelemetry studies of spotted owl (Strix occidentalis) ranges and habitat-use must be designed efficiently to estimate parameters needed for a sample of individuals sufficient to describe the population. Independent data are required by analytical methods and provide the greatest return of information per effort. We examined time series of...
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Factors Associated with Job Content Plateauing among Older Workers
ERIC Educational Resources Information Center
Armstrong-Stassen, Marjorie
2008-01-01
Purpose: The purpose of this paper is to identify personal and work environment factors associated with the experience of job content plateauing among older workers. Design/methodology/approach: Two cross-sectional studies, each including two samples, were conducted. In each study, one sample consisted of a diverse group of older workers and the…
Christopher W. Woodall; Bruce Leutscher
2005-01-01
The sampling design for the Forest Inventory and Analysis (FIA) program of the U.S. Department of Agriculture Forest Service allows intensification of fuel inventory sampling in areas of 'special interest' and implementation of fuel sampling protocol by non-FIA personnel. The objective of this study is to evaluate the contribution of sampling intensification/extension...
Xiao, Deli; Zhang, Chan; He, Jia; Zeng, Rong; Chen, Rong; He, Hua
2016-01-01
A simple, accurate and high-throughput pretreatment method would facilitate large-scale studies of trace analysis in complex samples. Magnetic mixed hemimicelles solid-phase extraction has the potential to become a key pretreatment method in biological, environmental and clinical research. However, a lack of experimental predictability and an unclear extraction mechanism limit the development of this promising method. Herein, this work tries to establish theoretically based experimental designs for extraction of trace analytes from complex samples using magnetic mixed hemimicelles solid-phase extraction. We selected three categories and six sub-types of compounds for a systematic comparative study of the extraction mechanism, and comprehensively illustrated the roles of the different forces (hydrophobic interaction, π-π stacking interactions, hydrogen-bonding interaction, electrostatic interaction) for the first time. What's more, application guidelines for supporting materials, surfactants and sample matrix were also summarized. The extraction mechanism and platform established in the study render it promising for predictable and efficient pretreatment, under theoretically based experimental design, of trace analytes from environmental, biological and clinical samples. PMID:27924944
Gu, C; Rao, D C
2001-01-01
Because simplistic designs will lead to prohibitively large sample sizes, the optimization of genetic study designs is critical for successfully mapping genes for complex diseases. Creative designs are necessary for detecting and amplifying the usually weak signals for complex traits. Two important outcomes of a study design, power and resolution, are implicitly tied together by the principle of uncertainty. Overemphasis on either one may lead to suboptimal designs. To achieve optimality for a particular study, therefore, practical measures such as cost-effectiveness must be used to strike a balance between power and resolution. In this light, the myriad of factors involved in study design can be checked for their effects on the ultimate outcomes, and the popular existing designs can be sorted into building blocks that may be useful for particular situations. It is hoped that imaginative construction of novel designs using such building blocks will lead to enhanced efficiency in finding genes for complex human traits.
MIGRATION OF HAZARDOUS SUBSTANCES THROUGH SOIL
Factorially designed column and batch leaching studies were conducted on samples of various industrial wastes, flue gas desulfurization sludges, and coal fly ash to determine the effect of leaching solution composition on release of hazardous substances from waste samples, and t...
SAMPLING LARGE RIVERS FOR ALGAE, BENTHIC MACROINVERTEBRATES AND FISH
Multiple projects are currently underway to increase our understanding of the effects of different sampling methods and designs used for the biological assessment and monitoring of large (boatable) rivers. Studies include methods used to assess fish, benthic macroinvertebrates, ...
A molecular identification system for grasses: a novel technology for forensic botany.
Ward, J; Peakall, R; Gilmore, S R; Robertson, J
2005-09-10
Our present inability to rapidly, accurately and cost-effectively identify trace botanical evidence remains the major impediment to the routine application of forensic botany. Grasses are amongst the most likely plant species encountered as forensic trace evidence and have the potential to provide links between crime scenes and individuals or other vital crime scene information. We are designing a molecular DNA-based identification system for grasses consisting of several PCR assays that, like a traditional morphological taxonomic key, provide criteria that progressively identify an unknown grass sample to a given taxonomic rank. In a prior study of DNA sequences across 20 phylogenetically representative grass species, we identified a series of potentially informative indels in the grass mitochondrial genome. In this study we designed and tested five PCR assays spanning these indels and assessed the feasibility of these assays to aid identification of unknown grass samples. We confirmed that for our control set of 20 samples, on which the design of the PCR assays was based, the five primer combinations produced the expected results. Using these PCR assays in a 'blind test', we were able to identify 25 unknown grass samples with some restrictions. Species belonging to genera represented in our control set were all correctly identified to genus, with one exception. Similarly, genera belonging to tribes in the control set were correctly identified to the tribal level. Finally, for those samples for which neither tribal nor genus-specific PCR assays were designed, we could confidently exclude these samples from belonging to certain tribes and genera. The results confirmed the utility of the PCR assays and the feasibility of developing a robust full-scale usable grass identification system for forensic purposes.
Freeman, Christine M; Crudgington, Sean; Stolberg, Valerie R; Brown, Jeanette P; Sonstein, Joanne; Alexis, Neil E; Doerschuk, Claire M; Basta, Patricia V; Carretta, Elizabeth E; Couper, David J; Hastie, Annette T; Kaner, Robert J; O'Neal, Wanda K; Paine, Robert; Rennard, Stephen I; Shimbo, Daichi; Woodruff, Prescott G; Zeidler, Michelle; Curtis, Jeffrey L
2015-01-27
Subpopulations and Intermediate Outcomes in COPD Study (SPIROMICS) is a multi-center longitudinal, observational study to identify novel phenotypes and biomarkers of chronic obstructive pulmonary disease (COPD). In a subset of 300 subjects enrolled at six clinical centers, we are performing flow cytometric analyses of leukocytes from induced sputum, bronchoalveolar lavage (BAL) and peripheral blood. To minimize several sources of variability, we use a "just-in-time" design that permits immediate staining without pre-fixation of samples, followed by centralized analysis on a single instrument. The Immunophenotyping Core prepares 12-color antibody panels, which are shipped to the six Clinical Centers shortly before study visits. Sputum induction occurs at least two weeks before a bronchoscopy visit, at which time peripheral blood and bronchoalveolar lavage are collected. Immunostaining is performed at each clinical site on the day that the samples are collected. Samples are fixed and express shipped to the Immunophenotyping Core for data acquisition on a single modified LSR II flow cytometer. Results are analyzed using FACSDiva and FlowJo software and cross-checked by Core scientists who are blinded to subject data. Thus far, a total of 152 sputum samples and 117 samples of blood and BAL have been returned to the Immunophenotyping Core. Initial quality checks indicate useable data from 126 sputum samples (83%), 106 blood samples (91%) and 91 BAL samples (78%). In all three sample types, we are able to identify and characterize the activation state or subset of multiple leukocyte cell populations (including CD4+ and CD8+ T cells, B cells, monocytes, macrophages, neutrophils and eosinophils), thereby demonstrating the validity of the antibody panel. Our study design, which relies on bi-directional communication between clinical centers and the Core according to a pre-specified protocol, appears to reduce several sources of variability often seen in flow cytometric studies involving multiple clinical sites. Because leukocytes contribute to lung pathology in COPD, these analyses will help achieve SPIROMICS aims of identifying subgroups of patients with specific COPD phenotypes. Future analyses will correlate cell-surface markers on a given cell type with smoking history, spirometry, airway measurements, and other parameters. This study was registered with ClinicalTrials.gov as NCT01969344.
Rozej-Bielicka, Wioletta; Masny, Aleksander; Golab, Elzbieta
2017-10-01
The goal of the study was to design a single-tube PCR test for detection and differentiation of Babesia species in DNA samples obtained from diverse biological materials. A multiplex, single-tube PCR test was designed for amplification of an approximately 400-bp region of the Babesia 18S rRNA gene. Universal primers were designed to match DNA of multiple Babesia spp. and to have low levels of similarity to DNA sequences of other intracellular protozoa and Babesia hosts. The PCR products amplified from Babesia DNA isolated from human, dog, rodent, deer, and tick samples were subjected to high-resolution melting analysis for Babesia species identification. The designed test allowed detection and differentiation of four Babesia species, three zoonotic (B. microti, B. divergens, B. venatorum) and one that is generally not considered zoonotic, Babesia canis. Both detection and identification of all four species were possible based on the HRM curves of the PCR products in samples obtained from humans, dogs, rodents, and ticks. No cross-reactivity with DNA of Babesia hosts or Plasmodium falciparum and Toxoplasma gondii was observed. The lack of cross-reactivity with P. falciparum DNA might allow using the assay in endemic malaria areas. The designed assay is the first PCR-based test for detection and differentiation of several Babesia spp. of medical and veterinary importance in a single-tube reaction. The results of the study show that the designed assay for Babesia detection and identification could be a practical and inexpensive tool for diagnostics and screening studies of diverse biological materials.
Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.
2015-01-01
A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data could result in better informed management decisions and assist in guidance for more effective estuarine restoration projects.
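The tagging-allocation trade-off described above can be illustrated with a small hedged simulation. The Python sketch below uses one hypothetical survival pattern (two mortality pulses), perfect detection, and a uniform daily run size, so the numbers are illustrative only, and the estimator is the naive proportion surviving rather than a telemetry survival model.

```python
import numpy as np

rng = np.random.default_rng(5)
DAYS = 120

# Daily survival probability: systematic "two pulses" pattern plus noise
days = np.arange(DAYS)
base = 0.75 - 0.25 * (np.exp(-((days - 30) / 8) ** 2)
                      + np.exp(-((days - 80) / 8) ** 2))
true_surv = np.clip(base + rng.normal(0, 0.03, DAYS), 0, 1)

def estimate(tags_per_day):
    """Tag fish on a daily schedule, apply that day's survival
    probability (perfect detection assumed), return estimated survival."""
    survived = rng.binomial(tags_per_day, true_surv).sum()
    return survived / tags_per_day.sum()

n_tags = 600
spread = np.full(DAYS, n_tags // DAYS)        # effort spread across the season
early = np.zeros(DAYS, dtype=int)
early[:10] = n_tags // 10                     # front-loaded schedule

reps = 2000
est_spread = [estimate(spread) for _ in range(reps)]
est_early = [estimate(early) for _ in range(reps)]
truth = true_surv.mean()   # season-average survival, assuming a uniform run
print("truth      :", round(truth, 3))
print("spread tags:", round(np.mean(est_spread), 3), "+/-", round(np.std(est_spread), 3))
print("early tags :", round(np.mean(est_early), 3), "+/-", round(np.std(est_early), 3))
```

The front-loaded schedule estimates survival during its tagged window quite precisely but misses the mortality pulses, so it is biased for season-wide survival; the spread schedule tracks the seasonal average, echoing the finding above that spreading tagging effort across the season gives more accurate estimates.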
Comolli, Luis R; Duarte, Robert; Baum, Dennis; Luef, Birgit; Downing, Kenneth H; Larson, David M; Csencsits, Roseann; Banfield, Jillian F
2012-06-01
We present a modern, light portable device specifically designed for environmental samples for cryogenic transmission-electron microscopy (cryo-TEM) by on-site cryo-plunging. The power of cryo-TEM comes from preparation of artifact-free samples. However, in many studies, the samples must be collected at remote field locations, and the time involved in transporting samples back to the laboratory for cryogenic preservation can lead to severe degradation artifacts. Thus, going back to the basics, we developed a simple mechanical device that is light and easy to transport on foot yet effective. With the system design presented here we are able to obtain cryo-samples of microbes and microbial communities not possible to culture, in their near-intact environmental conditions as well as in routine laboratory work, and in real time. This methodology thus enables us to bring the power of cryo-TEM to microbial ecology. Copyright © 2011 Wiley Periodicals, Inc.
Cutaway line drawing of STS-34 middeck experiment Polymer Morphology (PM)
NASA Technical Reports Server (NTRS)
1989-01-01
Cutaway line drawing shows components of STS-34 middeck experiment Polymer Morphology (PM). Components include the EAC, heat exchanger, sample cell control (SCC), sample cells, source, interferometer, electronics, carousel drive, infrared (IR) beam, and carousel. PM, a 3M-developed organic materials processing experiment, is designed to explore the effects of microgravity on polymeric materials as they are processed in space. The samples of polymeric materials being studied in the PM experiment are thin films (25 microns or less) approximately 25mm in diameter. The samples are mounted between two infrared transparent windows in a specially designed infrared cell that provides the capability of thermally processing the samples to 200 degrees Celsius with a high degree of thermal control. The samples are mounted on a carousel that allows them to be positioned, one at a time, in the infrared beam where spectra may be acquired. The Generic Electronics Module (GEM) provides all carousel and
Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.
Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R
2018-05-26
Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
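A hedged toy version of the weighted pseudo-likelihood idea is sketched below in Python: a single-season occupancy model with constant occupancy and detection probabilities, fitted by weighting each site's log-likelihood contribution by the inverse of its (known, hypothetical) selection probability. It is a minimal sketch of design-weighted estimation, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)

# Simulated detection histories: n sites x J revisits, with unequal
# selection probabilities pi (hypothetical informative design in which
# occupied sites are oversampled, e.g. legacy sites with known colonies).
n, J, psi_true, p_true = 300, 4, 0.4, 0.3
z = rng.binomial(1, psi_true, n)               # latent occupancy state
y = rng.binomial(J, p_true * z)                # detections per site
pi = np.where(z == 1, 0.8, 0.2)                # design selection probabilities
keep = rng.random(n) < pi
y, pi = y[keep], pi[keep]
w = 1.0 / pi                                   # design weights

def neg_pll(theta, weighted=True):
    """Weighted negative log pseudo-likelihood for a constant-psi,
    constant-p site-occupancy model (binomial constant dropped)."""
    psi, p = expit(theta)
    lik = np.where(
        y > 0,
        psi * p**y * (1 - p)**(J - y),         # detected at least once
        psi * (1 - p)**J + (1 - psi),          # never detected
    )
    lik = np.clip(lik, 1e-12, None)
    ww = w if weighted else np.ones_like(w)
    return -np.sum(ww * np.log(lik))

for weighted in (False, True):
    fit = minimize(neg_pll, x0=[0.0, 0.0], args=(weighted,))
    psi_hat, p_hat = expit(fit.x)
    print("weighted" if weighted else "unweighted",
          "psi =", round(psi_hat, 3), "p =", round(p_hat, 3))
```

In this toy, the informative design oversamples occupied sites, so the unweighted fit overstates occupancy while the weighted fit recovers it; with few sites or extreme weights, the same standard-error instability noted above would be expected.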
Student Errors in Fractions and Possible Causes of These Errors
ERIC Educational Resources Information Center
Aksoy, Nuri Can; Yazlik, Derya Ozlem
2017-01-01
In this study, it was aimed to determine the errors and misunderstandings of 5th and 6th grade middle school students in fractions and operations with fractions. For this purpose, the case study model, which is a qualitative research design, was used in the research. In the study, maximum diversity sampling, which is a purposeful sampling method,…
Efficient Bayesian experimental design for contaminant source identification
NASA Astrophysics Data System (ADS)
Zhang, J.; Zeng, L.
2013-12-01
In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown source parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observing location at the maximum; posterior marginal probability densities of the unknown parameters for the designed location compared with seven randomly chosen locations, true values marked by vertical lines, showing that the unknown parameters are estimated better with the designed location.]
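For intuition, the relative-entropy design criterion can be written down exactly in a linear-Gaussian toy problem, where the expected information gain from a measurement has a closed form; the Python sketch below picks the sampling location that maximizes it. The plume sensitivity function, prior, and noise level are hypothetical, and the paper's MCMC and sparse-grid surrogate machinery is not needed in this reduced setting.

```python
import numpy as np

# Toy linear-Gaussian design problem: unknown source strength theta with
# prior N(0, s0^2); a sensor at location x measures y = a(x)*theta + noise,
# noise ~ N(0, se^2). For this model the expected KL divergence from prior
# to posterior is 0.5 * log(s0^2 / sn^2), with posterior variance
# sn^2 = 1 / (1/s0^2 + a(x)^2 / se^2), so it is maximized where |a(x)| is.
s0, se, source = 1.0, 0.5, 3.0

def sensitivity(x):
    # Hypothetical plume sensitivity decaying away from the source
    return np.exp(-0.5 * (x - source) ** 2)

def expected_info_gain(x):
    sn2 = 1.0 / (1.0 / s0**2 + sensitivity(x) ** 2 / se**2)
    return 0.5 * np.log(s0**2 / sn2)

candidates = np.linspace(0.0, 10.0, 201)
gains = expected_info_gain(candidates)
best = candidates[np.argmax(gains)]
print(f"optimal sampling location ~ {best:.2f}, gain {gains.max():.3f} nats")
```

In the nonlinear transport setting of the paper no such closed form exists, which is why the expected relative entropy must instead be estimated by Monte Carlo over a surrogate model.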
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between logistical constraints and additional sampling performance should be carefully evaluated.
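The sub-sampling comparison at the heart of this design question can be mocked up with a few lines of Python. The sketch below builds synthetic capture records with hypothetical early-active and late-active species and compares the species richness recovered by three netting schedules, echoing (but not reproducing) the resampling analyses described above.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic capture records for one site: (hour-of-night, species id).
# Species 0-14 are active early; species 15-19 mostly late (hypothetical).
records = []
for night in range(30):
    for _ in range(rng.poisson(20)):
        if rng.random() < 0.8:
            records.append((rng.uniform(0, 6), rng.integers(0, 15)))
        else:
            records.append((rng.uniform(6, 12), rng.integers(15, 20)))
records = np.array(records)

def richness(mask):
    # Number of distinct species among the retained capture records
    return len(np.unique(records[mask, 1]))

hours = records[:, 0]
print("all night            :", richness(slice(None)))
print("first 6 h only       :", richness(hours < 6))
print("3 h start + 3 h end  :", richness((hours < 3) | (hours >= 9)))
```

In this toy the first-6-h schedule misses the late-active species entirely, while splitting the same six hours between the start and end of the night recovers nearly the full assemblage, which parallels the alternative strategy proposed above.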
Design-based and model-based inference in surveys of freshwater mollusks
Dorazio, R.M.
1999-01-01
Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.
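As a concrete instance of design-based inference, the sketch below computes the Horvitz-Thompson estimator of a population total under unequal (but known) inclusion probabilities, in Python; the quadrat population and design probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse, clustered mollusk counts over 1000 quadrats (toy population)
N = 1000
y = np.zeros(N)
hot = rng.choice(N, 50, replace=False)
y[hot] = rng.poisson(20, 50)

# Unequal-probability design: inclusion probability pi_i known by design
pi = np.full(N, 0.05)
pi[:200] = 0.20          # e.g., near-shore quadrats sampled more heavily
sampled = rng.random(N) < pi   # Poisson sampling

# Design-based (Horvitz-Thompson) estimator of the population total:
# sum of y_i / pi_i over the sample; unbiased whatever the y-values are.
t_hat = np.sum(y[sampled] / pi[sampled])
print("HT estimate:", round(t_hat, 1), " true total:", y.sum())
```

The estimator is design-unbiased regardless of how y is distributed, which is the appeal of design-based inference; a model-based analysis would instead posit a model for the clustering in y and can be more precise when that model is right, as the mollusk example above suggests.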
Planned Missing Data Designs with Small Sample Sizes: How Small Is Too Small?
ERIC Educational Resources Information Center
Jia, Fan; Moore, E. Whitney G.; Kinai, Richard; Crowe, Kelly S.; Schoemann, Alexander M.; Little, Todd D.
2014-01-01
Utilizing planned missing data (PMD) designs (e.g., 3-form surveys) enables researchers to ask participants fewer questions during the data collection process. An important question, however, is just how few participants are needed to effectively employ planned missing data designs in research studies. This article explores this question by using…
Teaching Improvement Model Designed with DEA Method and Management Matrix
ERIC Educational Resources Information Center
Montoneri, Bernard
2014-01-01
This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…
Herzog, Sereina A; Blaizot, Stéphanie; Hens, Niel
2017-12-18
Mathematical models offer the possibility to investigate infectious disease dynamics over time and may help in informing the design of studies. A systematic review was performed in order to determine to what extent mathematical models have been incorporated into the process of planning studies, and hence inform study design, for infectious diseases transmitted between humans and/or animals. We searched Ovid Medline and two trial registry platforms (Cochrane, WHO) using search terms related to infection, mathematical model, and study design from the earliest dates to October 2016. Eligible publications and registered trials included mathematical models (compartmental, individual-based, or Markov) that were described and used to inform the design of infectious disease studies. We extracted information about the investigated infection, population, model characteristics, and study design. We identified 28 unique publications but no registered trials. Focusing on compartmental and individual-based models, we found 12 observational/surveillance studies and 11 clinical trials. The infections studied were evenly split between animal and human infectious diseases for the observational/surveillance studies, while all but one of the clinical trials concerned infections transmitted between humans. The mathematical models were used to inform, amongst other things, the required sample size (n = 16), the statistical power (n = 9), the frequency at which samples should be taken (n = 6), and from whom (n = 6). Despite the fact that mathematical models have been advocated for use at the planning stage of studies or surveillance systems, they are scarcely used. With only one exception, the publications described theoretical studies; the models were therefore not utilised in real studies.
Trends in study design and the statistical methods employed in a leading general medicine journal.
Gosho, M; Sato, Y; Nagashima, K; Takahashi, S
2018-02-01
Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. A comprehensive study of the details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Under the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (e.g., the Kaplan-Meier estimator and the Cox regression model) were most frequently applied, the Gray test and the Fine-Gray proportional hazards model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. In light of these findings, model-based approaches for handling missing data should replace single imputation methods for primary analyses. Use of adaptive designs with interim analyses has been increasing since the presentation of the FDA guidance for adaptive design. © 2017 John Wiley & Sons Ltd.
Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R
2017-09-14
While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
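The core arithmetic of the multiplier method lends itself to a quick precision check before fielding a survey. The sketch below (not the authors' code; all numbers are hypothetical) applies the delta method to N = M/P, inflating the binomial variance of P by an assumed design effect to mimic respondent-driven sampling, and shows how the interval widens as P shrinks.

```python
import math

def multiplier_ci(M, p_hat, n, deff=1.0, z=1.96):
    """Approximate 95% CI for N = M / P via the delta method.

    M     -- count of unique objects distributed (treated as fixed)
    p_hat -- proportion in the RDS survey reporting receipt
    n     -- survey sample size
    deff  -- assumed design effect of the RDS survey
    """
    N_hat = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n        # inflated binomial variance
    se_N = (M / p_hat**2) * math.sqrt(var_p)      # delta-method SE of M/P
    return N_hat, (N_hat - z * se_N, N_hat + z * se_N)

# Illustration: precision degrades sharply when P is small.
for p in (0.05, 0.20, 0.50):
    est, (lo, hi) = multiplier_ci(M=1000, p_hat=p, n=400, deff=2.0)
    print(f"P={p:.2f}: N-hat={est:,.0f}, 95% CI ({lo:,.0f}, {hi:,.0f})")
```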
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
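The optimization the abstract describes, choosing a sample size and an acceptance number, can be illustrated with the classical single-sampling search that underlies such plans. The sketch below is a generic AQL/LTPD plan search under the Poisson model for the larger lot-size case, not the TRASP algorithm itself; the risk levels are assumed.

```python
from scipy.stats import poisson

def find_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=5000):
    """Smallest single-sampling plan (n, c) with
    P(accept | AQL) >= 1 - alpha (producer's risk) and
    P(accept | LTPD) <= beta (consumer's risk),
    using the Poisson approximation to the nonconformity count."""
    for n in range(1, n_max):
        for c in range(0, n):
            if poisson.cdf(c, n * aql) >= 1 - alpha:
                if poisson.cdf(c, n * ltpd) <= beta:
                    return n, c
                break  # a larger c only raises consumer's risk; try larger n
    return None

# Inspect enough features so that 1% nonconforming lots usually pass
# while 5% nonconforming lots usually fail.
print(find_plan(aql=0.01, ltpd=0.05))
```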
Survey methods for assessing land cover map accuracy
Nusser, S.M.; Klaas, E.E.
2003-01-01
The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but also on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
Trottier, Helen; Mayrand, Marie-Hélène; Coutlée, François; Monnier, Patricia; Laporte, Louise; Niyibizi, Joseph; Carceller, Ana-Maria; Fraser, William D; Brassard, Paul; Lacroix, Jacques; Francoeur, Diane; Bédard, Marie-Josée; Girard, Isabelle; Audibert, François
2016-12-01
Perinatal transmission of human papillomavirus (HPV) has been demonstrated in several small studies. We designed a large prospective cohort study (HERITAGE) to better understand perinatal HPV. The objective of this article is to present the study design and preliminary data. In the first phase of the study, we recruited 167 women in Montreal, Canada, during the first trimester of pregnancy. An additional 850 are currently being recruited in the ongoing phase. Cervicovaginal samples were obtained from mothers in the first trimester and tested for HPV DNA from 36 mucosal genotypes (and repeated in the third trimester for HPV-positive mothers). Placental samples were also taken for HPV DNA testing. Conjunctival, oral, pharyngeal and genital samples were collected for HPV DNA testing in children of HPV-positive mothers every 3-6 months from birth until 2 years of age. Blood samples were collected from mothers and children for HPV serology testing. We found a high prevalence of HPV in pregnant women (45% [95%CI: 37-53%]) and in placentas (14% [8-21%]). The proportion of HPV positivity (any site) among children at birth/3-months was 11% [5-22%]. HPV was detected in children at multiple sites including the conjunctiva (5%[10-14%]). The ongoing HERITAGE cohort will help provide a better understanding of perinatal HPV. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
1981-01-01
The purpose of the Orbiting Quarantine Facility is to provide maximum protection of the terrestrial biosphere by ensuring that the returned Martian samples are safe to bring to Earth. The protocol designed to detect the presence of biologically active agents in the Martian soil is described. The protocol determines one of two things about the sample: (1) that it is free from nonterrestrial life forms and can be sent to a terrestrial containment facility where extensive chemical, biochemical, geological, and physical investigations can be conducted; or (2) that it exhibits "biological effects" of the type that dictate second order testing. The quarantine protocol is designed to be conducted on a small portion of the returned sample, leaving the bulk of the sample undisturbed for study on Earth.
Osborne, N J; Koplin, J J; Martin, P E; Gurrin, L C; Thiele, L; Tang, M L; Ponsonby, A-L; Dharmage, S C; Allen, K J
2010-10-01
The incidence of hospital admissions for food allergy-related anaphylaxis in Australia has increased, in line with worldwide trends. However, a valid measure of food allergy prevalence and risk factor data from a population-based study is still lacking. To describe the study design and methods used to recruit infants from a population for skin prick testing and oral food challenges, and the use of preliminary data to investigate the extent to which the study sample is representative of the target population. The study sampling frame comprises 12-month-old infants presenting for routine scheduled vaccination at immunization clinics in Melbourne, Australia. We compared demographic features of participating families to population summary statistics from the Victorian Perinatal census database, and administered a survey to those non-responders who chose not to participate in the study. The study design proved acceptable to the community with good uptake (response rate 73.4%), with 2171 participants recruited. Demographic information on the study population mirrored the Victorian population, with most of the population parameters measured falling within our confidence intervals (CI). Use of a non-responder questionnaire revealed that a higher proportion of infants who declined to participate (non-responders) were already eating and tolerating peanuts than those agreeing to participate (54.4%; 95% CI 50.8, 58.0 vs. 27.4%; 95% CI 25.5, 29.3 among participants). A high proportion of individuals approached in a community setting participated in a food allergy study. The study population differed from the eligible sample in relation to family history of allergy and prior peanut consumption and tolerance, providing some insights into the internal validity of the sample. The study exhibited external validity on general demographics to all births in Victoria. © 2010 Blackwell Publishing Ltd.
Abdulra'uf, Lukman Bola; Tan, Guan Huat
2013-12-15
Solid-phase microextraction (SPME) is a solvent-less sample preparation method which combines sample preparation, isolation, concentration and enrichment into one step. In this study, multivariate strategy was used to determine the significance of the factors affecting the solid phase microextraction of pesticide residues (fenobucarb, diazinon, chlorothalonil and chlorpyrifos) using a randomised factorial design. The interactions and effects of temperature, time and salt addition on the efficiency of the extraction of the pesticide residues were evaluated using 2(3) factorial designs. The analytes were extracted with 100 μm PDMS fibres according to the factorial design matrix and desorbed into a gas chromatography-mass spectrometry detector. The developed method was applied for the analysis of apple samples and the limits of detection were between 0.01 and 0.2 μg kg(-)(1), which were lower than the MRLs for apples. The relative standard deviations (RSD) were between 0.1% and 13.37% with average recovery of 80-105%. The linearity ranges from 0.5-50 μg kg(-)(1) with correlation coefficient greater than 0.99. Copyright © 2013 Elsevier Ltd. All rights reserved.
Abdullah, Kawsari; Thorpe, Kevin E; Mamak, Eva; Maguire, Jonathon L; Birken, Catherine S; Fehlings, Darcy; Hanley, Anthony J; Macarthur, Colin; Zlotkin, Stanley H; Parkin, Patricia C
2015-07-14
The OptEC trial aims to evaluate the effectiveness of oral iron in young children with non-anemic iron deficiency (NAID). The initial sample size calculated for the OptEC trial ranged from 112-198 subjects. Given the uncertainty regarding the parameters used to calculate the sample, an internal pilot study was conducted. The objectives of this internal pilot study were to obtain reliable estimate of parameters (standard deviation and design factor) to recalculate the sample size and to assess the adherence rate and reasons for non-adherence in children enrolled in the pilot study. The first 30 subjects enrolled into the OptEC trial constituted the internal pilot study. The primary outcome of the OptEC trial is the Early Learning Composite (ELC). For estimation of the SD of the ELC, descriptive statistics of the 4 month follow-up ELC scores were assessed within each intervention group. The observed SD within each group was then pooled to obtain an estimated SD (S2) of the ELC. Correlation (ρ) between the ELC measured at baseline and follow-up was assessed. Recalculation of the sample size was performed using analysis of covariance (ANCOVA) method which uses the design factor (1- ρ(2)). Adherence rate was calculated using a parent reported rate of missed doses of the study intervention. The new estimate of the SD of the ELC was found to be 17.40 (S2). The design factor was (1- ρ2) = 0.21. Using a significance level of 5%, power of 80%, S2 = 17.40 and effect estimate (Δ) ranging from 6-8 points, the new sample size based on ANCOVA method ranged from 32-56 subjects (16-28 per group). Adherence ranged between 14% and 100% with 44% of the children having an adherence rate ≥ 86%. Information generated from our internal pilot study was used to update the design of the full and definitive trial, including recalculation of sample size, determination of the adequacy of adherence, and application of strategies to improve adherence. ClinicalTrials.gov Identifier: NCT01481766 (date of registration: November 22, 2011).
Chambaere, Kenneth; Bilsen, Johan; Cohen, Joachim; Pousset, Geert; Onwuteaka-Philipsen, Bregje; Mortier, Freddy; Deliens, Luc
2008-01-01
Background Reliable studies of the incidence and characteristics of medical end-of-life decisions with a certain or possible life shortening effect (ELDs) are indispensable for an evidence-based medical and societal debate on this issue. This article presents the protocol drafted for the 2007 ELD Study in Flanders, Belgium, and outlines how the main aims and challenges of the study (i.e. making reliable incidence estimates of end-of-life decisions, even rare ones, and describing their characteristics; allowing comparability with past ELD studies; guaranteeing strict anonymity given the sensitive nature of the research topic; and attaining a sufficient response rate) are addressed in a post-mortem survey using a representative sample of death certificates. Study design Reliable incidence estimates are achievable by using large at random samples of death certificates of deceased persons in Flanders (aged one year or older). This entails the cooperation of the appropriate administrative authorities. To further ensure the reliability of the estimates and descriptions, especially of less prevalent end-of-life decisions (e.g. euthanasia), a stratified sample is drawn. A questionnaire is sent out to the certifying physician of each death sampled. The questionnaire, tested thoroughly and avoiding emotionally charged terms is based largely on questions that have been validated in previous national and European ELD studies. Anonymity of both patient and physician is guaranteed through a rigorous procedure, involving a lawyer as intermediary between responding physicians and researchers. To increase response we follow the Total Design Method (TDM) with a maximum of three follow-up mailings. Also, a non-response survey is conducted to gain insight into the reasons for lack of response. Discussion The protocol of the 2007 ELD Study in Flanders, Belgium, is appropriate for achieving the objectives of the study; as past studies in Belgium, the Netherlands, and other European countries have shown, strictly anonymous and thorough surveys among physicians using a large, stratified, and representative death certificate sample are most suitable in nationwide studies of incidence and characteristics of end-of-life decisions. There are however also some limitations to the study design. PMID:18752659
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Design Difficulties in Stand Density Studies
Frank A. Bennett
1969-01-01
Designing unbiased stand density studies is difficult. An acceptable sample requires stratification of the plots of age, site, and density. When basal area, percent stocking, or Reineke's stand density index is used as the density measure, this stratification forces a high negative correlation between site and number of trees per acre. Mortality in trees per acre...
ERIC Educational Resources Information Center
Lahey, Benjamin B.; D'Onofrio, Brian M.; Waldman, Irwin D.
2009-01-01
Epidemiology uses strong sampling methods and study designs to test refutable hypotheses regarding the causes of important health, mental health, and social outcomes. Epidemiologic methods are increasingly being used to move developmental psychopathology from studies that catalogue correlates of child and adolescent mental health to designs that…
Entrepreneurial Intentions of University Students: A Study of Design Undergraduates in Spain
ERIC Educational Resources Information Center
Ubierna, Francisco; Arranz, Nieves; Fdez de Arroyabe, J. C.
2014-01-01
This paper presents an analysis of the entrepreneurial intentions of university undergraduate students, with particular regard to those studying design. Attitudinal, social and capabilities variables are analysed in order to determine the profile of an entrepreneur. Using a sample of 521 undergraduate students, the findings show that design…
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by point-count method, and it is often impractical to sample entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large-sample units or many small sample-units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider accuracy and precision of abundance estimates during the sample design process with study goals and objectives fully recognized, although and with consequence, consideration of accuracy and precision of abundance estimates is often an afterthought that occurs during the data analysis process.
NASA Astrophysics Data System (ADS)
Liu, Yongfang; Zhao, Yu; Chen, Guanrong
2016-11-01
This paper studies the distributed consensus and containment problems for a group of harmonic oscillators with a directed communication topology. First, for consensus without a leader, a class of distributed consensus protocols is designed by using motion planning and Pontryagin's principle. The proposed protocol only requires relative information measurements at the sampling instants, without requiring information exchange over the sampled interval. By using stability theory and the properties of stochastic matrices, it is proved that the distributed consensus problem can be solved in the motion planning framework. Second, for the case with multiple leaders, a class of distributed containment protocols is developed for followers such that their positions and velocities can ultimately converge to the convex hull formed by those of the leaders. Compared with the existing consensus algorithms, a remarkable advantage of the proposed sampled-data-based protocols is that the sampling periods, communication topologies and control gains are all decoupled and can be separately designed, which relaxes many restrictions in controllers design. Finally, some numerical examples are given to illustrate the effectiveness of the analytical results.
Comparing population size estimators for plethodontid salamanders
Bailey, L.L.; Simons, T.R.; Pollock, K.H.
2004-01-01
Despite concern over amphibian declines, few studies estimate absolute abundances because of logistic and economic constraints and previously poor estimator performance. Two estimation approaches recommended for amphibian studies are mark-recapture and depletion (or removal) sampling. We compared abundance estimation via various mark-recapture and depletion methods, using data from a three-year study of terrestrial salamanders in Great Smoky Mountains National Park. Our results indicate that short-term closed-population, robust design, and depletion methods estimate surface population of salamanders (i.e., those near the surface and available for capture during a given sampling occasion). In longer duration studies, temporary emigration violates assumptions of both open- and closed-population mark-recapture estimation models. However, if the temporary emigration is completely random, these models should yield unbiased estimates of the total population (superpopulation) of salamanders in the sampled area. We recommend using Pollock's robust design in mark-recapture studies because of its flexibility to incorporate variation in capture probabilities and to estimate temporary emigration probabilities.
Using GIS to generate spatially balanced random survey designs for natural resource applications.
Theobald, David M; Stevens, Don L; White, Denis; Urquhart, N Scott; Olsen, Anthony R; Norman, John B
2007-07-01
Sampling of a population is frequently required to understand trends and patterns in natural resource management because financial and time constraints preclude a complete census. A rigorous probability-based survey design specifies where to sample so that inferences from the sample apply to the entire population. Probability survey designs should be used in natural resource and environmental management situations because they provide the mathematical foundation for statistical inference. Development of long-term monitoring designs demand survey designs that achieve statistical rigor and are efficient but remain flexible to inevitable logistical or practical constraints during field data collection. Here we describe an approach to probability-based survey design, called the Reversed Randomized Quadrant-Recursive Raster, based on the concept of spatially balanced sampling and implemented in a geographic information system. This provides environmental managers a practical tool to generate flexible and efficient survey designs for natural resource applications. Factors commonly used to modify sampling intensity, such as categories, gradients, or accessibility, can be readily incorporated into the spatially balanced sample design.
Curriculum Design Orientations Preference Scale of Teachers: Validity and Reliability Study
ERIC Educational Resources Information Center
Bas, Gokhan
2013-01-01
The purpose of this study was to develop a valid and reliable scale for preferences of teachers in regard of their curriculum design orientations. Because there was no scale development study similar to this one in Turkey, it was considered as an urgent need to develop such a scale in the study. The sample of the research consisted of 300…
Moser, Barry Kurt; Halabi, Susan
2013-01-01
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
NASA Astrophysics Data System (ADS)
Saeed, O.; Duru, L.; Yulin, D.
2018-05-01
A proposed microfluidic design has been fabricated and simulated using COMSOL Multiphysics software, based on two physical models included in this design. The device’s ability to create a narrow stream of the core sample by controlling the sheath flow rates Qs1 and Qs2 in both peripheral channels was investigated. The main target of this paper is to study the possibility of combing the hydrodynamic and magnetic techniques, in order to achieve a high rate of cancer cells separation from a cell mixture and/or buffer sample. The study has been conducted in two stages, firstly, the effects of the sheath flow rates (Qs1 and Qs2) on the sample stream focusing were studied, to find the proposed device effectiveness optimal conditions and its capability in cell focusing, and then the magnetic mechanism has been utilized to finalize the pre-labelled cells separation process.
Assessment of sampling stability in ecological applications of discriminant analysis
Williams, B.K.; Titus, K.
1988-01-01
A simulation study was undertaken to assess the sampling stability of the variable loadings in linear discriminant function analysis. A factorial design was used for the factors of multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. A review of 60 published studies and 142 individual analyses indicated that sample sizes in ecological studies often have met that requirement. However, individual group sample sizes frequently were very unequal, and checks of assumptions usually were not reported. The authors recommend that ecologists obtain group sample sizes that are at least three times as large as the number of variables measured.
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali
2013-01-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering. PMID:23510128
Oskooei, Ali; Kaigala, Govind V
2017-06-01
We present a method for nonintrusive localization and reagent delivery on immersed biological samples with topographical variation on the order of hundreds of micrometers. Our technique, which we refer to as the deep-reaching hydrodynamic flow confinement (DR-HFC), is simple and passive: it relies on a deep-reaching hydrodynamic confinement delivered through a simple microfluidic probe design to perform localized microscale alterations on substrates as deep as 600 μm. Designed to scan centimeter-scale areas of biological substrates, our method passively prevents sample intrusion by maintaining a large gap between the probe and the substrate. The gap prevents collision of the probe and the substrate and reduces the shear stress experienced by the sample. We present two probe designs: linear and annular DR-HFC. Both designs comprise a reagent-injection aperture and aspiration apertures that serve to confine the reagent. We identify the design parameters affecting reagent localization and depth by DR-HFC and study their individual influence on the operation of DR-HFC numerically. Using DR-HFC, we demonstrate localized binding of antihuman immunoglobulin G (IgG) onto an activated substrate at various depths from 50 to 600 μm. DR-HFC provides a readily implementable approach for noninvasive processing of biological samples applicable to the next generation of diagnostic and bioanalytical devices.
Factorial experimental design intended for the optimization of the alumina purification conditions
NASA Astrophysics Data System (ADS)
Brahmi, Mounaouer; Ba, Mohamedou; Hidri, Yassine; Hassen, Abdennaceur
2018-04-01
The objective of this study was to determine the optimal conditions by using the experimental design methodology for the removal of some impurities associated with the alumina. So, three alumina qualities of different origins were investigated under the same conditions. The application of full-factorial designs on the samples of different qualities of alumina has followed the removal rates of the sodium oxide. However, a factorial experimental design was developed to describe the elimination of sodium oxide associated with the alumina. The experimental results showed that chemical analyze followed by XRF prior treatment of the samples, provided a primary idea concerning these prevailing impurities. Therefore, it appeared that the sodium oxide constituted the largest amount among all impurities. After the application of experimental design, analysis of the effectors different factors and their interactions showed that to have a better result, we should reduce the alumina quantity investigated and by against increase the stirring time for the first two samples, whereas, it was necessary to increase the alumina quantity in the case of the third sample. To expand and improve this research, we should take into account all existing impurities, since we found during this investigation that the levels of partial impurities increased after the treatment.
Stratified Sampling Design Based on Data Mining
Kim, Yeonkook J.; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon
2013-01-01
Objectives To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. Methods We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Results Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. Conclusions This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea. PMID:24175117
Mérida-López, Sergio; Extremera, Natalio; Rey, Lourdes
2018-01-01
Objective: In the last decades, increasing attention has been paid to examining psychological resources that might contribute to our understanding of suicide risk. Although Emotional Intelligence (EI) is one dimension that has been linked with decreased suicidal ideation and behaviors, we detected several gaps in the literature in this area regarding the research designs and samples involved. In this research, we aimed to test a mediator model considering self-report EI, psychological distress and suicide risk across samples adopting both cross-sectional and prospective designs in two independent studies. Method: In Study 1, our purpose was to examine the potential role of psychological distress as a mediator in the relationship between self-report EI and suicide risk in a community sample comprised of 438 adults (270 women; mean age: 33.21 years). In Study 2, we sought to examine the proposed mediator model considering a 2-month prospective design in a sample of college students (n = 330 in T1; n = 311 in T2; 264 women; mean age: 22.22 years). Results: In Study 1, we found that psychological distress partially mediated the effect of self-report EI on suicide risk. More interestingly, findings from Study 2 showed that psychological distress fully mediated the relationship between self-report EI and suicide risk at Time 2. Conclusion: These results point out the role of psychological distress as a mediator in the association between self-report EI and suicide risk. These findings suggest an underlying process by which self-report EI may act as a protective factor against suicidal ideation and behaviors. In line with the limitations of our work, plausible avenues for future research and interventions are discussed. PMID:29867607
Witnessing of Cheating-in-Exams Behavior and Factors Sustaining Integrity
ERIC Educational Resources Information Center
Starovoytova, Diana; Arimi, Milton
2017-01-01
This study is a fraction of a larger research on cheating, at the School of Engineering (SOE). The study design used a descriptive survey approach and a document analysis. A designed confidential self-report questioner was used as the main instrument, for this study, with the sample size of 100 subjects and response rate of 95%. The tool was…
An Assessment on Awareness and Acceptability of Child Adoption in Edo State
ERIC Educational Resources Information Center
Aluyor, P.; Salami, L. I.
2017-01-01
The study examines the awareness and acceptability of child adoption in Edo State. The design used for the study was survey design. The population for the study is made up of adults male and female in Esan West Local Government Area. One hundred respondents were randomly selected using random sampling techniques. The validity was ascertained by…
ERIC Educational Resources Information Center
Asteris, Mark M., Jr.
2012-01-01
This study was designed to investigate the differences in Motivational Interviewing (MI) skill acquisition and retention among probation officers. This study had a randomized, experimental, pretest-posttest control group design using the MITI 3.1.1 and the VASE-R to measure MI skill acquisition and retention. A random sample (n = 24) of probation…
Overview of the Mars Sample Return Earth Entry Vehicle
NASA Technical Reports Server (NTRS)
Dillman, Robert; Corliss, James
2008-01-01
NASA's Mars Sample Return (MSR) project will bring Mars surface and atmosphere samples back to Earth for detailed examination. Langley Research Center's MSR Earth Entry Vehicle (EEV) is a core part of the mission, protecting the sample container during atmospheric entry, descent, and landing. Planetary protection requirements demand a higher reliability from the EEV than for any previous planetary entry vehicle. An overview of the EEV design and preliminary analysis is presented, with a follow-on discussion of recommended future design trade studies to be performed over the next several years in support of an MSR launch in 2018 or 2020. Planned topics include vehicle size for impact protection of a range of sample container sizes, outer mold line changes to achieve surface sterilization during re-entry, micrometeoroid protection, aerodynamic stability, thermal protection, and structural materials selection.
Assessing the Alcohol-BMI Relationship in a US National Sample of College Students
ERIC Educational Resources Information Center
Barry, Adam E.; Piazza-Gardner, Anna K.; Holton, M. Kim
2015-01-01
Objective: This study sought to assess the body mass index (BMI)-alcohol relationship among a US national sample of college students. Design: Secondary data analysis using the Fall 2011 National College Health Assessment (NCHA). Setting: A total of 44 US higher education institutions. Methods: Participants included a national sample of college…
NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR TRACKING SYSTEM (UA-D-28.0)
The NHEXAS Arizona project designed a system that tracks what occurs to a sample and provides the status of that sample at any given time. In essence, the tracking system provides an electronic chain of custody record for each sample as it moves through the project. This is ach...
Kalkhan, M.A.; Stafford, E.J.; Stohlgren, T.J.
2007-01-01
Geospatial statistical modelling and thematic maps have recently emerged as effective tools for the management of natural areas at the landscape scale. Traditional methods for the collection of field data pertaining to questions of landscape were developed without consideration for the parameters of these applications. We introduce an alternative field sampling design based on smaller unbiased random plot and subplot locations called the pixel nested plot (PNP). We demonstrate the applicability of the PNP design of 15 m x 15 m to assess patterns of plant diversity and species richness across the landscape at Rocky Mountain National Park (RMNP), Colorado, USA in a time (cost)-efficient manner for field data collection. Our results produced comparable results to a previous study in the Beaver Meadow study (BMS) area within RMNP, where there was a demonstrated focus of plant diversity. Our study used the smaller PNP sampling design for field data collection which could be linked to geospatial information data and could be used for landscape-scale analyses and assessment applications. In 2003, we established 61 PNP in the eastern region of RMNP. We present a comparison between this approach using a sub-sample of 19 PNP from this data set and 20 of Modified Whittaker nested plots (MWNP) of 20 m x 50 m that were collected in the BMS area. The PNP captured 266 unique plant species while the MWNP captured 275 unique species. Based on a comparison of PNP and MWNP in the Beaver Meadows area, RMNP, the PNP required less time and area sampled to achieve a similar number of species sampled. Using the PNP approach for data collection can facilitate the ecological monitoring of these vulnerable areas at the landscape scale in a time- and therefore cost-effective manner. ?? 2007 The Authors.
Bayes factor design analysis: Planning for compelling evidence.
Schönbrodt, Felix D; Wagenmakers, Eric-Jan
2018-02-01
A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either [Formula: see text] or [Formula: see text], and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
GEOSTATISTICAL SAMPLING DESIGNS FOR HAZARDOUS WASTE SITES
This chapter discusses field sampling design for environmental sites and hazardous waste sites with respect to random variable sampling theory, Gy's sampling theory, and geostatistical (kriging) sampling theory. The literature often presents these sampling methods as an adversari...
Incorporating the sampling design in weighting adjustments for panel attrition
Chen, Qixuan; Gelman, Andrew; Tracy, Melissa; Norris, Fran H.; Galea, Sandro
2015-01-01
We review weighting adjustment methods for panel attrition and suggest approaches for incorporating design variables, such as strata, clusters and baseline sample weights. Design information can typically be included in attrition analysis using multilevel models or decision tree methods such as the CHAID algorithm. We use simulation to show that these weighting approaches can effectively reduce bias in the survey estimates that would occur from omitting the effect of design factors on attrition while keeping the resulted weights stable. We provide a step-by-step illustration on creating weighting adjustments for panel attrition in the Galveston Bay Recovery Study, a survey of residents in a community following a disaster, and provide suggestions to analysts in decision making about weighting approaches. PMID:26239405
Crude incidence in two-phase designs in the presence of competing risks.
Rebora, Paola; Antolini, Laura; Glidden, David V; Valsecchi, Maria Grazia
2016-01-11
In many studies, some information might not be available for the whole cohort, some covariates, or even the outcome, might be ascertained in selected subsamples. These studies are part of a broad category termed two-phase studies. Common examples include the nested case-control and the case-cohort designs. For two-phase studies, appropriate weighted survival estimates have been derived; however, no estimator of cumulative incidence accounting for competing events has been proposed. This is relevant in the presence of multiple types of events, where estimation of event type specific quantities are needed for evaluating outcome. We develop a non parametric estimator of the cumulative incidence function of events accounting for possible competing events. It handles a general sampling design by weights derived from the sampling probabilities. The variance is derived from the influence function of the subdistribution hazard. The proposed method shows good performance in simulations. It is applied to estimate the crude incidence of relapse in childhood acute lymphoblastic leukemia in groups defined by a genotype not available for everyone in a cohort of nearly 2000 patients, where death due to toxicity acted as a competing event. In a second example the aim was to estimate engagement in care of a cohort of HIV patients in resource limited setting, where for some patients the outcome itself was missing due to lost to follow-up. A sampling based approach was used to identify outcome in a subsample of lost patients and to obtain a valid estimate of connection to care. A valid estimator for cumulative incidence of events accounting for competing risks under a general sampling design from an infinite target population is derived.
Lavender, S A; Marras, W S; Sabol, R J
2002-01-01
As more and more manufacturing is moved to Mexico, the need for anthropometric data describing the Mexican working population becomes more pronounced. The purpose of this study was to obtain data on 21 anthropometric measures that could readily be used to design workplaces in light manufacturing operations. Eighty-seven females, representing 26% of the plant's employees, were sampled. Measurements were made with the shoes on. The mean stature (height) and elbow heights of this sample were 156 cm and 97 cm. Another recently published survey of female factory workers near the U.S. border included 12 anthropometric dimensions. Five of the dimensions were measured in both studies. Hand lengths were nearly identical; however, the 2 to 3 cm differences in the heights measured in the current study are consistent with the incorporation of the footwear in the current measurements. Thus, this study adds to the growing database that can be used when designing these light manufacturing jobs in Mexico.
Strategies for Improving Power in School-Randomized Studies of Professional Development.
Kelcey, Ben; Phelps, Geoffrey
2013-12-01
Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when and the extent to which there is evidence that covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power. Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment. The analyses suggested that teachers' knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies. © The Author(s) 2014.
Consideration of sample return and the exploration strategy for Mars
NASA Technical Reports Server (NTRS)
Bogard, D. C.; Duke, M. B.; Gibson, E. K.; Minear, J. W.; Nyquist, L. E.; Phinney, W. C.
1979-01-01
The scientific rationale and requirements for a Mars surface sample return were examined and the experience gained from the analysis and study of the returned lunar samples were incorporated into the science requirements and engineering design for the Mars sample return mission. The necessary data sets for characterizing Mars are presented. If further analyses of surface samples are to be made, the best available method is for the analysis to be conducted in terrestrial laboratories.
Visual Sample Plan Version 7.0 User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzke, Brett D.; Newburn, Lisa LN; Hathaway, John E.
2014-03-01
User's guide for VSP 7.0 This user's guide describes Visual Sample Plan (VSP) Version 7.0 and provides instructions for using the software. VSP selects the appropriate number and location of environmental samples to ensure that the results of statistical tests performed to provide input to risk decisions have the required confidence and performance. VSP Version 7.0 provides sample-size equations or algorithms needed by specific statistical tests appropriate for specific environmental sampling objectives. It also provides data quality assessment and statistical analysis functions to support evaluation of the data and determine whether the data support decisions regarding sites suspected of contamination.more » The easy-to-use program is highly visual and graphic. VSP runs on personal computers with Microsoft Windows operating systems (XP, Vista, Windows 7, and Windows 8). Designed primarily for project managers and users without expertise in statistics, VSP is applicable to two- and three-dimensional populations to be sampled (e.g., rooms and buildings, surface soil, a defined layer of subsurface soil, water bodies, and other similar applications) for studies of environmental quality. VSP is also applicable for designing sampling plans for assessing chem/rad/bio threat and hazard identification within rooms and buildings, and for designing geophysical surveys for unexploded ordnance (UXO) identification.« less
Design and optimization of reverse-transcription quantitative PCR experiments.
Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael
2009-10-01
Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates preferentially to any other replicates when working with solid tissue, cell cultures, and single cells, and we recommend the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget. .
Methodological Rigor in Preclinical Cardiovascular Studies
Ramirez, F. Daniel; Motazedian, Pouya; Jung, Richard G.; Di Santo, Pietro; MacDonald, Zachary D.; Moreland, Robert; Simard, Trevor; Clancy, Aisling A.; Russo, Juan J.; Welch, Vivian A.; Wells, George A.
2017-01-01
Rationale: Methodological sources of bias and suboptimal reporting contribute to irreproducibility in preclinical science and may negatively affect research translation. Randomization, blinding, sample size estimation, and considering sex as a biological variable are deemed crucial study design elements to maximize the quality and predictive value of preclinical experiments. Objective: To examine the prevalence and temporal patterns of recommended study design element implementation in preclinical cardiovascular research. Methods and Results: All articles published over a 10-year period in 5 leading cardiovascular journals were reviewed. Reports of in vivo experiments in nonhuman mammals describing pathophysiology, genetics, or therapeutic interventions relevant to specific cardiovascular disorders were identified. Data on study design and animal model use were collected. Citations at 60 months were additionally examined as a surrogate measure of research impact in a prespecified subset of studies, stratified by individual and cumulative study design elements. Of 28 636 articles screened, 3396 met inclusion criteria. Randomization was reported in 21.8%, blinding in 32.7%, and sample size estimation in 2.3%. Temporal and disease-specific analyses show that the implementation of these study design elements has overall not appreciably increased over the past decade, except in preclinical stroke research, which has uniquely demonstrated significant improvements in methodological rigor. In a subset of 1681 preclinical studies, randomization, blinding, sample size estimation, and inclusion of both sexes were not associated with increased citations at 60 months. Conclusions: Methodological shortcomings are prevalent in preclinical cardiovascular research, have not substantially improved over the past 10 years, and may be overlooked when basing subsequent studies. Resultant risks of bias and threats to study validity have the potential to hinder progress in cardiovascular medicine as preclinical research often precedes and informs clinical trials. Stroke research quality has uniquely improved in recent years, warranting a closer examination for interventions to model in other cardiovascular fields. PMID:28373349
A Bayesian model for estimating population means using a link-tracing sampling design.
St Clair, Katherine; O'Connell, Daniel
2012-03-01
Link-tracing sampling designs can be used to study human populations that contain "hidden" groups who tend to be linked together by a common social trait. These links can be used to increase the sampling intensity of a hidden domain by tracing links from individuals selected in an initial wave of sampling to additional domain members. Chow and Thompson (2003, Survey Methodology 29, 197-205) derived a Bayesian model to estimate the size or proportion of individuals in the hidden population for certain link-tracing designs. We propose an addition to their model that will allow for the modeling of a quantitative response. We assess properties of our model using a constructed population and a real population of at-risk individuals, both of which contain two domains of hidden and nonhidden individuals. Our results show that our model can produce good point and interval estimates of the population mean and domain means when our population assumptions are satisfied. © 2011, The International Biometric Society.
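The estimation problem such models address can be seen in a toy simulation: tracing links from an initial probability sample inflates the hidden domain's share of the sample, so the naive mean is biased and a design- or model-based correction is needed. Everything below (population sizes, domain means, referral behavior) is an assumption for illustration; this is not Chow and Thompson's model.

```python
# Toy simulation: link-tracing over-samples the hidden domain, biasing the
# naive sample mean and motivating model-based estimation. All numbers invented.
import random

random.seed(1)
N = 10_000
hidden = [True] * 1_000 + [False] * 9_000        # 10% hidden domain
y = [random.gauss(5.0 if h else 2.0, 1.0) for h in hidden]
units = list(range(N))
hidden_units = [i for i in units if hidden[i]]

initial = random.sample(units, 300)
# Assume each sampled hidden member refers two further hidden members.
traced = [v for u in initial if hidden[u]
          for v in random.sample(hidden_units, 2)]
sample = initial + traced

true_mean = sum(y) / N
naive_mean = sum(y[i] for i in sample) / len(sample)
print(f"true mean {true_mean:.2f} vs naive link-tracing mean {naive_mean:.2f}")
```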
Practical characteristics of adaptive design in phase 2 and 3 clinical trials.
Sato, A; Shimura, M; Gosho, M
2018-04-01
Adaptive design methods are expected to be ethical, reflect real medical practice, increase the likelihood of research and development success, and reduce the allocation of patients to ineffective treatment groups through the early termination of clinical trials. However, it remains unclear in which types of clinical trials adaptive designs are actually used. We examined the practical characteristics of adaptive designs used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of adaptive designs and the disease areas employed in the adaptive design trials. Our survey identified 245 adaptive design clinical trials. The number of trials by publication year increased from 2012 to 2013 and did not change greatly afterwards. The most frequently used adaptive design was the group sequential design (n = 222, 90.6%), especially for neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. Adaptive randomization (n = 8, 3.3%) and adaptive seamless designs (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of the nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two of eight trials (25%). Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where the study phase was not specified, whereas the other types of adaptive designs were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among these 202 trials, 132 (54% of the 245 trials) had fewer randomized patients than initially planned. This result supports the rationale for using adaptive designs to shorten study durations and enroll fewer subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions. The applications were most frequently reported in neoplasm or cardiovascular clinical trials. Adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.
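As a rough illustration of why so many of these trials stopped early with fewer patients than planned, the simulation below runs a two-look group sequential trial with a Pocock-type boundary and reports average enrollment against the corresponding fixed design. The effect size and look schedule are assumptions, not values from the surveyed trials.

```python
# Sketch of a two-look group sequential trial: stop at the interim if the
# z-statistic crosses +/- 2.178 (Pocock critical value for 2 looks at
# two-sided alpha = 0.05). Effect size and look sizes are illustrative.
import random, math

random.seed(2)
BOUND = 2.178
n_look = 100       # patients per arm accrued by each look
effect = 0.4       # assumed standardized treatment effect

def run_trial():
    a = [random.gauss(effect, 1) for _ in range(2 * n_look)]
    b = [random.gauss(0.0, 1) for _ in range(2 * n_look)]
    for look in (1, 2):
        n = look * n_look
        z = (sum(a[:n]) - sum(b[:n])) / n / math.sqrt(2.0 / n)
        if abs(z) > BOUND or look == 2:
            return 2 * n   # total patients randomized when the trial stops

sizes = [run_trial() for _ in range(2000)]
print(f"average enrollment {sum(sizes) / len(sizes):.0f} vs fixed design {4 * n_look}")
```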
Review of research on the health of caregiving grandparents.
Grinstead, Linda Nicholson; Leder, Sharon; Jensen, Susan; Bond, Linda
2003-11-01
To provide a critical review of research literature on the health of grandparents raising grandchildren, and identify directions for future research. Approaches used to access the research studies for this review included a comprehensive search using relevant electronic databases and a thorough examination of the references in each published study. All studies but one were published after 1990. Samples consisted primarily of African-American and Caucasian grandmothers in the United States of America. Earlier studies tended to describe health and other related concepts while more recent studies began to examine relationships between concepts. Most of the studies had a cross-sectional design and only one evaluated interventions. Inconsistencies in the results of these studies were prevalent. Evaluation studies, longitudinal designs, and more varied study samples including cross-cultural comparisons are needed to advance knowledge about grandparent caregivers' health.
USDA-ARS?s Scientific Manuscript database
A study was conducted to compare nutrient flows determined by a reticular sampling technique with those obtained by sampling digesta from the omasal canal. Six lactating dairy cows fitted with ruminal cannulas were used in a design with a 3 x 2 factorial arrangement of treatments and 4 periods. Trea...
Review of atrazine sampling by polar organic chemical integrative samplers and Chemcatcher.
Booij, Kees; Chen, Sunmao
2018-04-24
A key success factor for the performance of passive samplers is the proper calibration of sampling rates. Sampling rates for a wide range of polar organic compounds are available for Chemcatchers and polar organic chemical integrative samplers (POCIS), but the mechanistic models that are needed to understand the effects of exposure conditions on sampling rates need improvement. Literature data on atrazine sampling rates by these samplers were reviewed with the aim of assessing what can be learned from literature reports of this well-studied compound and identifying knowledge gaps related to the effects of flow and temperature. The flow dependency of sampling rates could be described by a mass transfer resistance model with 1 (POCIS) or 2 (Chemcatcher) adjustable parameters. Literature data were insufficient to evaluate the temperature effect on the sampling rates. An evaluation of reported sampler configurations showed that standardization of sampler design can be improved: for POCIS with respect to surface area and sorbent mass, and for Chemcatcher with respect to housing design. Several reports on atrazine sampling could not be used because the experimental setups were insufficiently described with respect to flow conditions. Recommendations are made for standardization of sampler layout and documentation of flow conditions in calibration studies. Environ Toxicol Chem 2018;9999:1-13. © 2018 SETAC.
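For readers unfamiliar with this model class, the sketch below shows one plausible resistance-in-series form in which a flow-dependent boundary layer conductance is combined with a flow-independent membrane resistance. The functional form and parameter values are assumptions for illustration, not the fitted models from the review.

```python
# One plausible form: 1/Rs = 1/k_w(flow) + R_m, where the water boundary layer
# conductance k_w grows with flow. With R_m = 0 this is a 1-parameter
# (POCIS-like) fit; R_m > 0 gives a 2-parameter (Chemcatcher-like) fit.
# Parameter values are illustrative, not taken from the review.
import math

def sampling_rate(flow, a, r_m=0.0):
    k_w = a * math.sqrt(flow)        # assumed boundary-layer scaling with flow
    return 1.0 / (1.0 / k_w + r_m)   # overall rate from resistances in series

for v in (1.0, 5.0, 20.0):           # illustrative flow speeds
    print(f"flow {v:5.1f}: 1-param Rs {sampling_rate(v, a=0.15):.3f}, "
          f"2-param Rs {sampling_rate(v, a=0.15, r_m=2.0):.3f}")
```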
Evaluation of 1H NMR metabolic profiling using biofluid mixture design.
Athersuch, Toby J; Malik, Shahid; Weljie, Aalim; Newton, Jack; Keun, Hector C
2013-07-16
A strategy for evaluating the performance of quantitative spectral analysis tools in conditions that better approximate background variation in a metabonomics experiment is presented. Three different urine samples were mixed in known proportions according to a {3, 3} simplex lattice experimental design and analyzed in triplicate by 1D 1H NMR spectroscopy. Fifty-four urinary metabolites were subsequently quantified from the sample spectra using two methods common in metabolic profiling studies: (1) targeted spectral fitting and (2) targeted spectral integration. Multivariate analysis using partial least-squares (PLS) regression showed the latent structure of the spectral set recapitulated the experimental mixture design. The goodness-of-prediction statistic (Q2) of each metabolite variable in a PLS model was calculated as a metric for the reliability of measurement, across the sample compositional space. Several metabolites were observed to have low Q2 values, largely as a consequence of their spectral resonances having low s/n or strong overlap with other sample components. This strategy has the potential to allow evaluation of spectral features obtained from metabolic profiling platforms in the context of the compositional background found in real biological sample sets, which may be subject to considerable variation. We suggest that it be incorporated into metabolic profiling studies to improve the estimation of matrix effects that confound accurate metabolite measurement. This novel method provides a rational basis for exploiting information from several samples in an efficient manner and avoids the use of multiple spike-in authentic standards, which may be difficult to obtain.
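The {3, 3} simplex lattice mentioned above is easy to reproduce; for reference, the sketch below enumerates the ten mixture proportions (multiples of 1/3 summing to 1) that such a design implies.

```python
# Enumerate the {3, 3} simplex lattice for a three-component mixture design:
# every blend whose component proportions are multiples of 1/3 and sum to 1.
from itertools import product
from fractions import Fraction

m, q = 3, 3   # m = lattice degree, q = number of mixture components
lattice = [tuple(Fraction(k, m) for k in ks)
           for ks in product(range(m + 1), repeat=q) if sum(ks) == m]
for point in lattice:
    print([str(p) for p in point])
print(len(lattice), "mixtures")   # 10 design points
```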
30 CFR 71.208 - Bimonthly sampling; designated work positions.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 30, Mineral Resources; Coal Mine Safety and Health; Mandatory Health Standards, Surface Coal Mines and Surface Work Areas of Underground Coal Mines; Sampling Procedures; § 71.208 Bimonthly sampling; designated work positions. (a) Each...
Designing Studies That Would Address the Multilayered Nature of Health Care
Pennell, Michael; Rhoda, Dale; Hade, Erinn M.; Paskett, Electra D.
2010-01-01
We review design and analytic methods available for multilevel interventions in cancer research with particular attention to study design, sample size requirements, and potential to provide statistical evidence for causal inference. The most appropriate methods will depend on the stage of development of the research and whether randomization is possible. Early on, fractional factorial designs may be used to screen intervention components, particularly when randomization of individuals is possible. Quasi-experimental designs, including time-series and multiple baseline designs, can be useful once the intervention is designed because they require few sites and can provide the preliminary evidence to plan efficacy studies. In efficacy and effectiveness studies, group-randomized trials are preferred when randomization is possible and regression discontinuity designs are preferred otherwise if assignment based on a quantitative score is possible. Quasi-experimental designs may be used, especially when combined with recent developments in analytic methods to reduce bias in effect estimates. PMID:20386057
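To illustrate the component-screening idea in the abstract above, here is a minimal sketch of a 2^(4-1) fractional factorial: four two-level intervention components screened in eight runs by aliasing the fourth component with the three-way product (defining relation I = ABCD). The design and labels are generic, not taken from the article.

```python
# Generate a 2^(4-1) fractional factorial: the fourth factor is the product
# of the first three, halving the runs needed to screen main effects.
from itertools import product

runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
for r in runs:
    print(r)   # 8 runs instead of 16 for four two-level components
```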
Improving the quality of biomarker discovery research: the right samples and enough of them.
Pepe, Margaret S; Li, Christopher I; Feng, Ziding
2015-06-01
Biomarker discovery research has yielded few biomarkers that validate for clinical use. A contributing factor may be poor study designs. The goal in discovery research is to identify a subset of potentially useful markers from a large set of candidates assayed on case and control samples. We recommend the PRoBE design for selecting samples. We propose sample size calculations that require specifying: (i) a definition for biomarker performance; (ii) the proportion of useful markers the study should identify (Discovery Power); and (iii) the tolerable number of useless markers amongst those identified (False Leads Expected, FLE). We apply the methodology to a study of 9,000 candidate biomarkers for risk of colon cancer recurrence where a useful biomarker has positive predictive value ≥ 30%. We find that 40 patients with recurrence and 160 without recurrence suffice to filter out 98% of useless markers (2% FLE) while identifying 95% of useful biomarkers (95% Discovery Power). Alternative methods for sample size calculation required more assumptions. Biomarker discovery research should utilize quality biospecimen repositories and include sample sizes large enough to identify markers that meet prespecified performance characteristics for well-defined clinical applications. The scientific rigor of discovery research should be improved. ©2015 American Association for Cancer Research.
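The logic of the FLE and Discovery Power calculation can be mimicked with a simplified simulation. The sketch below uses a generic screening rule based on observed positivity rates rather than the authors' PPV-based criterion; all operating characteristics are invented for illustration.

```python
# Simplified simulation (not Pepe et al.'s exact method): with 40 cases and
# 160 controls, estimate how often a useless marker (no case-control signal)
# versus a useful marker passes a screening rule.
import random

random.seed(3)
n_cases, n_controls, trials = 40, 160, 5000

def passes(p_case, p_control, cutoff=0.25):
    # Marker passes if the observed case rate exceeds the control rate enough.
    tpr = sum(random.random() < p_case for _ in range(n_cases)) / n_cases
    fpr = sum(random.random() < p_control for _ in range(n_controls)) / n_controls
    return (tpr - fpr) > cutoff

useless_pass = sum(passes(0.10, 0.10) for _ in range(trials)) / trials
useful_pass = sum(passes(0.60, 0.10) for _ in range(trials)) / trials
print(f"false-lead rate {useless_pass:.3f}, discovery power {useful_pass:.3f}")
```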
Reduction of Martian Sample Return Mission Launch Mass with Solar Sail Propulsion
NASA Technical Reports Server (NTRS)
Russell, Tiffany E.; Heaton, Andrew; Thomas, Scott; Thomas, Dan; Young, Roy; Baysinger, Mike; Capizzo, Pete; Fabisinski, Leo; Hornsby, Linda; Maples, Dauphne;
2013-01-01
Solar sails have the potential to provide mass and cost savings for spacecraft traveling within the inner solar system. Companies like L'Garde have demonstrated sail manufacturability and various in-space deployment methods. The purpose of this study was to evaluate a current Mars sample return architecture and to determine how cost and mass would be reduced by incorporating a solar sail propulsion system. The team validated the design proposed by L'Garde, and scaled the design based on a trajectory analysis. Using the solar sail design reduced the required mass, eliminating one of the three launches required in the original architecture.
Steinberg, David M.; Fine, Jason; Chappell, Rick
2009-01-01
Important properties of diagnostic methods are their sensitivity, specificity, and positive and negative predictive values (PPV and NPV). These methods are typically assessed via case–control samples, which include one cohort of cases known to have the disease and a second control cohort of disease-free subjects. Such studies give direct estimates of sensitivity and specificity but only indirect estimates of PPV and NPV, which also depend on the disease prevalence in the tested population. The motivating example arises in assay testing, where usage is contemplated in populations with known prevalences. Further instances include biomarker development, where subjects are selected from a population with known prevalence and assessment of PPV and NPV is crucial, and the assessment of diagnostic imaging procedures for rare diseases, where case–control studies may be the only feasible designs. We develop formulas for optimal allocation of the sample between the case and control cohorts and for computing sample size when the goal of the study is to prove that the test procedure exceeds pre-stated bounds for PPV and/or NPV. Surprisingly, the optimal sampling schemes for many purposes are highly unbalanced, even when information is desired on both PPV and NPV. PMID:18556677
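The dependence on prevalence noted in the abstract is just Bayes' rule; a small helper makes it explicit. The sensitivity, specificity, and prevalence values below are illustrative.

```python
# PPV and NPV follow from sensitivity, specificity, and the prevalence of the
# tested population (Bayes' rule), which is why case-control studies give
# only indirect estimates of them.
def ppv_npv(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

for prev in (0.01, 0.10, 0.30):
    ppv, npv = ppv_npv(sens=0.90, spec=0.95, prev=prev)
    print(f"prevalence {prev:.2f}: PPV {ppv:.2f}, NPV {npv:.3f}")
```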
Armstrong, Jenna L; Fitzpatrick, Cole F; Loftus, Christine T; Yost, Michael G; Tchong-French, Maria; Karr, Catherine J
2013-09-01
This research describes the design, deployment, performance, and acceptability of a novel outdoor active air sampler to provide simultaneous measurements of multiple contaminants at timed intervals for the Aggravating Factors of Asthma in Rural Environment (AFARE) study, a longitudinal cohort of 50 children in Yakima Valley, Washington. The sampler was constructed of multiple sampling media connected to individual critical orifices and a rotary vane vacuum pump. It was connected to a timed control valve system to collect 24-hour samples every six days over 18 months. We describe a spatially representative approach with both quantitative and qualitative location criteria to deploy a network of 14 devices at participant residences in a rural region (20 × 60 km). Overall, the sampler performed well, as the concurrent mean sample flow rates were within or above the ranges of recommended sampling rates for each exposure metric of interest. Acceptability was high among the study population of Hispanic farmworker participant households. The sampler design may prove useful for future urban and rural community-based studies that aim to collect multi-contaminant data during specific time periods.
ERIC Educational Resources Information Center
Al Rabadi, Wail Minwer; Salem, Rifqa Khleif
2018-01-01
The study was designed to identify the effect of high-order thinking on the quality of life among Ajloun University students. The study used the associative method. The randomly selected sample consisted of 147 students from Ajloun University College. The study used two tools; both measures were applied to the sample of the current study after…
ERIC Educational Resources Information Center
Erford, Bradley T.; Giguere, Monica; Glenn, Kacie; Ciarlone, Hallie
2015-01-01
Patterns of articles published in "Professional School Counseling" (PSC) from the first 15 volumes were reviewed in this meta-study. Author characteristics (e.g., sex, employment setting, nation of domicile) and article characteristics (e.g., topic, type, design, sample, sample size, participant type, statistical procedures and…
Three-Step Validation of Exercise Behavior Processes of Change in an Adolescent Sample
ERIC Educational Resources Information Center
Rhodes, Ryan E.; Berry, Tanya; Naylor, Patti-Jean; Higgins, S. Joan Wharf
2004-01-01
Though the processes of change are conceived as the core constructs of the transtheoretical model (TTM), few researchers have examined their construct validity in the physical activity domain. Further, only 1 study was designed to investigate the processes of change in an adolescent sample. The purpose of this study was to examine the exercise…
ERIC Educational Resources Information Center
Yaki, Akawo Angwal; Babagana, Mohammed
2016-01-01
The paper examined the effects of a Technological Instructional Package (TIP) on secondary school students' performance in biology. The study adopted a pre-test, post-test experimental control group design. The sample size of the study was 80 students from Minna metropolis, Niger state, Nigeria; the samples were randomly assigned into treatment…
Teachers' Adoptation Level of Student Centered Education Approach
ERIC Educational Resources Information Center
Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman
2016-01-01
The aim of this study is to identify the extent to which the student-centered education approach is applied in primary, middle, and high schools in Düzce. An explanatory design, one type of mixed-methods research, and "sequential mixed methods sampling" were used in the study. A total of 685 teachers constituted the research sample of the quantitative…
MSR ESA Earth Return Orbiter Mission Design Trades
NASA Astrophysics Data System (ADS)
Sanchez Perez, J. M.; Varga, G. I.; Huesing, J.; Beyer, F.
2018-04-01
The paper describes the work performed at ESOC in support of the Mars Sample Return ESA Earth Return Orbiter definition studies by exploring the trajectory optimization and mission design trade spaces of Mars return missions using electric and chemical propulsion.
NASA Technical Reports Server (NTRS)
Nguyen, T. M.; Yeh, H.-G.
1993-01-01
The baseline design and implementation of the digital baseband architecture for advanced deep space transponders are investigated and identified. Trade studies on the selection of the number of bits for the analog-to-digital converter (ADC) and optimum sampling schemes are presented. In addition, the proposed optimum sampling scheme is analyzed in detail. Possible implementations of the digital baseband (or digital front end) and the digital phase-locked loop (DPLL) for carrier tracking are also described.
Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won
2012-01-01
Mammographic breast density is a known risk factor for breast cancer. To plan a survey estimating the distribution of mammographic breast density in Korean women, we evaluated sampling strategies for a representative and efficient design through simulation. Using the target population of 1,340,362 women from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating a stratified random sampling simulation 1,000 times. According to the simulation results, a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, estimated the distribution of breast density in Korean women within a tolerance of 0.01%. Based on these results, a nationwide survey estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
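A stripped-down version of that simulation strategy might look like the sketch below: draw a proportionally allocated stratified sample of 4,000, estimate the density distribution, and repeat 1,000 times to gauge the error. The stratum sizes and density probabilities are invented; only the overall sample size and replication count follow the abstract.

```python
# Repeated stratified random sampling (metropolitan/urban/rural) to check how
# tightly a stratified estimate tracks the population proportion. Stratum
# composition and P(dense) values are illustrative assumptions.
import random

random.seed(4)
strata = {"metropolitan": 700_000, "urban": 450_000, "rural": 190_362}
density_probs = {"metropolitan": 0.55, "urban": 0.50, "rural": 0.45}  # P(dense)
n_total = 4000
N = sum(strata.values())

def one_replicate():
    est = 0.0
    for name, size in strata.items():
        n_h = round(n_total * size / N)               # proportional allocation
        hits = sum(random.random() < density_probs[name] for _ in range(n_h))
        est += (size / N) * hits / n_h                # stratum-weighted estimate
    return est

estimates = [one_replicate() for _ in range(1000)]
true_p = sum(strata[s] * density_probs[s] for s in strata) / N
max_err = max(abs(e - true_p) for e in estimates)
print(f"true {true_p:.4f}, max abs error over 1,000 replicates {max_err:.4f}")
```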
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not obvious how to compare an OBIA map to reference information in a way that accounts for the map consisting of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most frequently used probability sampling strategy, slightly ahead of stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase in OBIA articles using per-polygon approaches rather than per-pixel approaches for accuracy assessment. We clarify the respective impacts of the per-pixel and per-polygon approaches on sampling design, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
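The per-polygon versus per-pixel distinction drawn in the review can be made concrete with a toy example: the same mapped objects produce different accuracy figures depending on whether each object counts once or is weighted by its area. The objects below are invented.

```python
# Same map, two accuracy-analysis units: per polygon (each object counts once)
# versus per pixel (each object weighted by its area in pixels).
objects = [
    # (area_in_pixels, mapped_class, reference_class)
    (500, "forest", "forest"),
    (20,  "water",  "water"),
    (400, "crop",   "forest"),   # one large misclassified object
    (30,  "urban",  "urban"),
]

per_polygon = sum(m == r for _, m, r in objects) / len(objects)
per_pixel = (sum(a for a, m, r in objects if m == r)
             / sum(a for a, _, _ in objects))
print(f"per-polygon accuracy {per_polygon:.2f}, per-pixel accuracy {per_pixel:.2f}")
```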