Noordzij, Marlies; Dekker, Friedo W; Zoccali, Carmine; Jager, Kitty J
2011-01-01
The sample size is the number of patients or other experimental units that need to be included in a study to answer the research question. Pre-study calculation of the sample size is important; if a sample size is too small, one will not be able to detect an effect, while a sample that is too large may be a waste of time and money. Methods to calculate the sample size are explained in statistical textbooks, but because there are many different formulas available, it can be difficult for investigators to decide which method to use. Moreover, these calculations are prone to errors, because small changes in the selected parameters can lead to large differences in the sample size. This paper explains the basic principles of sample size calculations and demonstrates how to perform such a calculation for a simple study design. PMID:21293154
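The calculation this abstract describes can be made concrete with the standard normal-approximation formula for comparing two group means, n per group = 2((z_{1-α/2} + z_{1-β})σ/Δ)². The sketch below is illustrative only: it is not the paper's own worked example, and the blood-pressure numbers are invented. It also shows the abstract's point that small changes in the chosen parameters produce large changes in n.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of
    means, via the standard normal-approximation formula
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)**2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Hypothetical example: detect a 10 mmHg difference, SD 20 mmHg
print(n_per_group(delta=10, sigma=20))  # 63 per group
# Shrinking the detectable difference to 8 mmHg inflates n sharply
print(n_per_group(delta=8, sigma=20))   # 99 per group
```

The jump from 63 to 99 subjects per group for a 2 mmHg change in the assumed difference illustrates how error-prone these calculations can be when the input parameters are uncertain.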
Taniguchi, Ryosuke; Miura, Yutaka; Koyama, Hiroyuki; Chida, Tsukasa; Anraku, Yasutaka; Kishimura, Akihiro; Shigematsu, Kunihiro; Kataoka, Kazunori; Watanabe, Toshiaki
2016-06-01
In atherosclerotic lesions, the endothelial barrier against the bloodstream can become compromised, resulting in the exposure of the extracellular matrix (ECM) and intimal cells beneath. In theory, this allows adequately sized nanocarriers in circulation to infiltrate into the intimal lesion intravascularly. We sought to evaluate this possibility using rat carotid arteries with induced neointima. Cy5-labeled polyethylene glycol-conjugated polyion complex (PIC) micelles and vesicles, with diameters of 40, 100, or 200 nm (PICs-40, PICs-100, and PICs-200, respectively) were intravenously administered to rats after injury to the carotid artery using a balloon catheter. High accumulation and long retention of PICs-40 in the induced neointima was confirmed by in vivo imaging, while the accumulation of PICs-100 and PICs-200 was limited, indicating that the size of nanocarriers is a crucial factor for efficient delivery. Furthermore, epirubicin-incorporated polymeric micelles with a diameter similar to that of PICs-40 showed significant curative effects in rats with induced neointima, in terms of lesion size and cell number. Specific and effective drug delivery to pre-existing neointimal lesions was demonstrated with adequate size control of the nanocarriers. We consider that this nanocarrier-based drug delivery system could be utilized for the treatment of atherosclerosis. PMID:27183493
Sample Size and Correlational Inference
ERIC Educational Resources Information Center
Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C.
2008-01-01
In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…
Sample size requirements for training high-dimensional risk predictors
Dobbin, Kevin K.; Song, Xiao
2013-01-01
A common objective of biomarker studies is to develop a predictor of patient survival outcome. Determining the number of samples required to train a predictor from survival data is important for designing such studies. Existing sample size methods for training studies use parametric models for the high-dimensional data and cannot handle a right-censored dependent variable. We present a new training sample size method that is non-parametric with respect to the high-dimensional vectors, and is developed for a right-censored response. The method can be applied to any prediction algorithm that satisfies a set of conditions. The sample size is chosen so that the expected performance of the predictor is within a user-defined tolerance of optimal. The central method is based on a pilot dataset. To quantify uncertainty, a method to construct a confidence interval for the tolerance is developed. Adequacy of the size of the pilot dataset is discussed. An alternative model-based version of our method for estimating the tolerance when no adequate pilot dataset is available is presented. The model-based method requires a covariance matrix be specified, but we show that the identity covariance matrix provides adequate sample size when the user specifies three key quantities. Application of the sample size method to two microarray datasets is discussed. PMID:23873895
How to Show that Sample Size Matters
ERIC Educational Resources Information Center
Kozak, Marcin
2009-01-01
This article suggests how to explain a problem of small sample size when considering correlation between two Normal variables. Two techniques are shown: one based on graphs and the other on simulation. (Contains 3 figures and 1 table.)
Sample sizes for confidence limits for reliability.
Darby, John L.
2010-02-01
We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) under-estimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
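The without-replacement setting described here (hypergeometric sampling, zero defects observed) can be sketched with the textbook zero-defect acceptance calculation. This is a generic sketch, not the SNL techniques or the Mathematica program from the abstract.

```python
from math import comb

def required_sample_size(N, D, confidence=0.95):
    """Smallest sample size n, drawn without replacement from a
    population of N units containing D defects, such that the sample
    contains at least one defect with the given probability.
    Equivalently: observing zero defects in n units supports, at this
    confidence, the claim that fewer than D defects are present."""
    for n in range(1, N + 1):
        # hypergeometric P(zero defects in the sample)
        p_zero = comb(N - D, n) / comb(N, n) if n <= N - D else 0.0
        if 1.0 - p_zero >= confidence:
            return n
    return N

# Small population: N = 50 units, guard against D = 2 defects at 95%
print(required_sample_size(50, 2))  # 39 of the 50 units
```

For comparison, a binomial (with-replacement) approximation with p = 2/50 would call for roughly 74 units, more than the population itself, echoing the abstract's point that the distinction matters for small populations.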
Experimental determination of size distributions: analyzing proper sample sizes
NASA Astrophysics Data System (ADS)
Buffo, A.; Alopaeus, V.
2016-04-01
The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of a size distribution, according to specific requirements defined a priori. Such a methodology can be adopted regardless of the measurement technique used.
Finite sample size effects in transformation kinetics
NASA Technical Reports Server (NTRS)
Weinberg, M. C.
1985-01-01
The effect of finite sample size on the kinetic law of phase transformations is considered. The case where the second phase develops by a nucleation and growth mechanism is treated under the assumption of isothermal conditions and constant and uniform nucleation rate. It is demonstrated that for spherical particle growth, a thin sample transformation formula given previously is an approximate version of a more general transformation law. The thin sample approximation is shown to be reliable when a certain dimensionless thickness is small. The latter quantity, rather than the actual sample thickness, determines when the usual law of transformation kinetics valid for bulk (large dimension) samples must be modified.
Sample size calculation in metabolic phenotyping studies.
Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J
2015-09-01
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. PMID:25600654
Improved sample size determination for attributes and variables sampling
Stirpe, D.; Picard, R.R.
1985-01-01
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for the measurement uncertainty parameters, the simulation results support two conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, is highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs.
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
A New Sample Size Formula for Regression.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…
Predicting sample size required for classification performance
2012-01-01
Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p < 0.05). Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning. PMID:22336388
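The inverse power law at the heart of this method (performance ≈ a − b·n^(−c)) can be sketched without the authors' pipeline. The version below is a deliberate simplification: it profiles over the exponent c and solves a weighted linear least-squares problem for a and b rather than using the paper's nonlinear optimizer, and the learning-curve data are synthetic.

```python
import numpy as np

def fit_inverse_power_law(n, acc, weights=None):
    """Fit acc ~ a - b * n**(-c).  For each candidate exponent c the
    model is linear in (a, b), so we solve a (weighted) linear
    least-squares problem and keep the c with the smallest error."""
    n = np.asarray(n, float)
    acc = np.asarray(acc, float)
    w = np.ones_like(acc) if weights is None else np.asarray(weights, float)
    best = None
    for c in np.linspace(0.05, 2.0, 400):
        X = np.column_stack([np.ones_like(n), -n ** (-c)])
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * acc, rcond=None)
        resid = acc - X @ coef
        sse = float(np.sum(w * resid ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], c)
    _, a, b, c = best
    return a, b, c

# Synthetic learning curve: accuracy saturating toward 0.90
rng = np.random.default_rng(0)
sizes = np.array([50, 100, 200, 400, 800])
acc = 0.90 - 0.8 * sizes ** -0.5 + rng.normal(0, 0.002, sizes.size)
a, b, c = fit_inverse_power_law(sizes, acc)
print(round(a, 2))  # estimated performance plateau, close to 0.90
```

Extrapolating the fitted curve to larger n then gives the predicted classifier performance, which is how such fits are used to plan annotation budgets.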
Statistical Analysis Techniques for Small Sample Sizes
NASA Technical Reports Server (NTRS)
Navard, S. E.
1984-01-01
The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because so little data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.
Planning sample sizes when effect sizes are uncertain: The power-calibrated effect size approach.
McShane, Blakeley B; Böckenholt, Ulf
2016-03-01
Statistical power and thus the sample size required to achieve some desired level of power depend on the size of the effect of interest. However, effect sizes are seldom known exactly in psychological research. Instead, researchers often possess an estimate of an effect size as well as a measure of its uncertainty (e.g., a standard error or confidence interval). Previous proposals for planning sample sizes either ignore this uncertainty thereby resulting in sample sizes that are too small and thus power that is lower than the desired level or overstate the impact of this uncertainty thereby resulting in sample sizes that are too large and thus power that is higher than the desired level. We propose a power-calibrated effect size (PCES) approach to sample size planning that accounts for the uncertainty associated with an effect size estimate in a properly calibrated manner: sample sizes determined on the basis of the PCES are neither too small nor too large and thus provide the desired level of power. We derive the PCES for comparisons of independent and dependent means, comparisons of independent and dependent proportions, and tests of correlation coefficients. We also provide a tutorial on setting sample sizes for a replication study using data from prior studies and discuss an easy-to-use website and code that implement our PCES approach to sample size planning. PMID:26651984
Sample-size requirements for evaluating population size structure
Vokoun, J.C.; Rabeni, C.F.; Stanovick, J.S.
2001-01-01
A method with an accompanying computer program is described to estimate the number of individuals needed to construct a sample length-frequency with a given accuracy and precision. First, a reference length-frequency assumed to be accurate for a particular sampling gear and collection strategy was constructed. Bootstrap procedures created length-frequencies with increasing sample size that were randomly chosen from the reference data and then were compared with the reference length-frequency by calculating the mean squared difference. Outputs from two species collected with different gears and an artificial even length-frequency are used to describe the characteristics of the method. The relations between the number of individuals used to construct a length-frequency and the similarity to the reference length-frequency followed a negative exponential distribution and showed the importance of using 300-400 individuals whenever possible.
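The bootstrap procedure this abstract describes can be sketched directly; the species and gear data are not available, so the lengths below are synthetic stand-ins, and the binning choices are assumptions rather than the authors' settings.

```python
import random

def msd_vs_sample_size(lengths, bins, sample_sizes, reps=500, seed=42):
    """For each candidate sample size, bootstrap samples from the full
    data set and return the mean squared difference (MSD) between the
    bootstrap length-frequency (bin proportions) and the reference
    length-frequency built from all the data."""
    rng = random.Random(seed)

    def proportions(data):
        counts = [0] * (len(bins) - 1)
        for x in data:
            for i in range(len(bins) - 1):
                if bins[i] <= x < bins[i + 1]:
                    counts[i] += 1
                    break
        return [c / len(data) for c in counts]

    ref = proportions(lengths)
    out = {}
    for n in sample_sizes:
        total = 0.0
        for _ in range(reps):
            boot = [rng.choice(lengths) for _ in range(n)]
            p = proportions(boot)
            total += sum((a - b) ** 2 for a, b in zip(p, ref)) / len(ref)
        out[n] = total / reps
    return out

# Synthetic "fish lengths" (mm) standing in for field data
pop_rng = random.Random(0)
lengths = [pop_rng.gauss(250, 40) for _ in range(2000)]
bins = list(range(100, 420, 20))  # 20 mm length classes
msd = msd_vs_sample_size(lengths, bins, [50, 100, 300])
print(msd[50] > msd[300])  # smaller samples deviate more from the reference
```

Plotting MSD against n traces the negative exponential relation the abstract reports, and the point where the curve flattens suggests a practical minimum sample size.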
Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2010-01-01
This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…
Effects of sample size on KERNEL home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
40 CFR 80.127 - Sample size guidelines.
Code of Federal Regulations, 2011 CFR
2011-07-01
... attest engagement, the auditor shall sample relevant populations to which agreed-upon procedures will be... population; and (b) Sample size shall be determined using one of the following options: (1) Option 1. Determine the sample size using the following table: Sample Size, Based Upon Population Size No....
(Sample) Size Matters! An Examination of Sample Size from the SPRINT Trial
Bhandari, Mohit; Tornetta, Paul; Rampersad, Shelly-Ann; Sprague, Sheila; Heels-Ansdell, Diane; Sanders, David W.; Schemitsch, Emil H.; Swiontkowski, Marc; Walter, Stephen
2012-01-01
Introduction Inadequate sample size and power in randomized trials can result in misleading findings. This study demonstrates the effect of sample size in a large, clinical trial by evaluating the results of the SPRINT (Study to Prospectively evaluate Reamed Intramedullary Nails in Patients with Tibial fractures) trial as it progressed. Methods The SPRINT trial evaluated reamed versus unreamed nailing of the tibia in 1226 patients, as well as in open and closed fracture subgroups (N=400 and N=826, respectively). We analyzed the re-operation rates and relative risk comparing treatment groups at 50, 100 and then increments of 100 patients up to the final sample size. Results at various enrollments were compared to the final SPRINT findings. Results In the final analysis, there was a statistically significant decreased risk of re-operation with reamed nails for closed fractures (relative risk reduction 35%). Results for the first 35 patients enrolled suggested reamed nails increased the risk of reoperation in closed fractures by 165%. Only after 543 patients with closed fractures were enrolled did the results reflect the final advantage for reamed nails in this subgroup. Similarly, the trend towards an increased risk of re-operation for open fractures (23%) was not seen until 62 patients with open fractures were enrolled. Conclusions Our findings highlight the risk of conducting a trial with insufficient sample size and power. Such studies are not only at risk of missing true effects, but also of giving misleading results. Level of Evidence N/A PMID:23525086
NASA Astrophysics Data System (ADS)
Cienciala, Piotr; Hassan, Marwan A.
2016-03-01
Adequate description of hydraulic variables based on a sample of field measurements is challenging in coarse-bed streams, a consequence of high spatial heterogeneity in flow properties that arises due to the complexity of channel boundary. By applying a resampling procedure based on bootstrapping to an extensive field data set, we have estimated sampling variability and its relationship with sample size in relation to two common methods of representing flow characteristics, spatially averaged velocity profiles and fitted probability distributions. The coefficient of variation in bed shear stress and roughness length estimated from spatially averaged velocity profiles and in shape and scale parameters of gamma distribution fitted to local values of bed shear stress, velocity, and depth was high, reaching 15-20% of the parameter value even at the sample size of 100 (sampling density 1 m⁻²). We illustrated implications of these findings with two examples. First, sensitivity analysis of a 2-D hydrodynamic model to changes in roughness length parameter showed that the sampling variability range observed in our resampling procedure resulted in substantially different frequency distributions and spatial patterns of modeled hydraulic variables. Second, using a bedload formula, we showed that propagation of uncertainty in the parameters of a gamma distribution used to model bed shear stress led to the coefficient of variation in predicted transport rates exceeding 50%. Overall, our findings underscore the importance of reporting the precision of estimated hydraulic parameters. When such estimates serve as input into models, uncertainty propagation should be explicitly accounted for by running ensemble simulations.
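The bootstrap estimate of parameter sampling variability described here can be sketched generically. Everything below is an assumption for illustration: the "bed shear stress" values are synthetic gamma draws, and a method-of-moments fit stands in for whatever estimator the study used.

```python
import numpy as np

def bootstrap_cv(values, n_boot=1000, seed=0):
    """Bootstrap coefficient of variation (in %) of the shape and scale
    parameters of a gamma distribution fitted to the data, using a
    simple method-of-moments fit."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)

    def mom_fit(v):
        m, var = v.mean(), v.var(ddof=1)
        return m * m / var, var / m  # (shape, scale)

    shapes, scales = [], []
    for _ in range(n_boot):
        boot = rng.choice(values, size=values.size, replace=True)
        k, theta = mom_fit(boot)
        shapes.append(k)
        scales.append(theta)
    cv = lambda a: 100.0 * np.std(a) / np.mean(a)
    return cv(shapes), cv(scales)

# Synthetic "bed shear stress" field sample: n = 100 gamma draws
field_rng = np.random.default_rng(1)
tau = field_rng.gamma(shape=2.0, scale=5.0, size=100)
cv_shape, cv_scale = bootstrap_cv(tau)
print(cv_shape, cv_scale)  # parameter uncertainty is sizable even at n = 100
```

Feeding the resulting parameter spread into a downstream model (here, a transport formula or hydrodynamic model) is what the abstract means by propagating uncertainty through ensemble runs.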
Public Opinion Polls, Chicken Soup and Sample Size
ERIC Educational Resources Information Center
Nguyen, Phung
2005-01-01
Cooking and tasting chicken soup in three pots of very different sizes serves to demonstrate that it is the absolute sample size that matters most in determining the accuracy of a poll's findings, not the relative sample size, i.e., the size of the sample in relation to its population.
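The "chicken soup" point, that accuracy depends on the absolute rather than the relative sample size, follows directly from the margin-of-error formula with the finite population correction. A small numeric sketch (the population figures are invented):

```python
from math import sqrt

def margin_of_error(n, N, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion from a simple
    random sample of n out of N, with the finite population correction
    sqrt((N - n) / (N - 1))."""
    fpc = sqrt((N - n) / (N - 1))
    return z * sqrt(p * (1 - p) / n) * fpc

# n = 1000 respondents: a city of 100,000 vs. a country of 100 million
small_pot = margin_of_error(1000, 10**5)
big_pot = margin_of_error(1000, 10**8)
print(small_pot, big_pot)  # both about 0.031 (3.1 percentage points)
```

The same 1000 tastes give essentially the same accuracy whether the pot holds a hundred thousand or a hundred million "spoonfuls": the correction factor is close to 1 whenever n is a small fraction of N.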
7 CFR 52.803 - Sample unit size.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample unit size. 52.803 Section 52.803 Agriculture... United States Standards for Grades of Frozen Red Tart Pitted Cherries Sample Unit Size § 52.803 Sample unit size. Compliance with requirements for size and the various quality factors is based on...
7 CFR 52.3757 - Standard sample unit size.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Ripe Olives 1 Product Description, Types, Styles, and Grades § 52.3757 Standard sample unit size... following standard sample unit size for the applicable style: (a) Whole and pitted—50 olives. (b)...
7 CFR 52.3757 - Standard sample unit size.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Ripe Olives 1 Product Description, Types, Styles, and Grades § 52.3757 Standard sample unit size... following standard sample unit size for the applicable style: (a) Whole and pitted—50 olives. (b)...
Considerations when calculating the sample size for an inequality test
2016-01-01
Calculating the sample size is a vital step during the planning of a study in order to ensure the desired power for detecting clinically meaningful differences. However, estimating the sample size is not always straightforward. A number of key components should be considered to calculate a suitable sample size. In this paper, general considerations for conducting sample size calculations for inequality tests are summarized. PMID:27482308
Kandemir, Nurgün; Özön, Zeynep Alev; Gönç, Nazlı; Alikaşifoğlu, Ayfer
2011-01-01
Objective: Gonadotropin stimulation test is the gold standard to document precocious puberty. However, the test is costly, time-consuming and uncomfortable. The aim of this study was to simplify the intravenous gonadotropin-releasing hormone (GnRH) stimulation test in the diagnosis of precocious puberty and in the assessment of pubertal suppression. Methods: Data pertaining to 584 GnRH stimulation tests (314 tests for diagnosis and 270 for assessment of pubertal suppression) were analyzed. Results: Forty-minute post-injection samples had the greatest frequency of “peaking luteinizing hormone (LH)” (p<0.001) in the diagnostic tests. When the cut-off value was taken as 5 IU/L for LH, the 40th minute sample was found to have 98% sensitivity and 100% specificity in the diagnosis of precocious puberty, while the sensitivity and specificity of the 20th minute sample was 100% in the assessment of pubertal suppression. Conclusion: LH level at the 40th minute post-injection in the diagnosis of central precocious puberty and at the 20th minute post-injection in the assessment of pubertal suppression is highly sensitive and specific. A single sample at these time points can be used in the diagnosis of early puberty and in the assessment of pubertal suppression. Conflict of interest: None declared. PMID:21448328
The Relationship between Sample Sizes and Effect Sizes in Systematic Reviews in Education
ERIC Educational Resources Information Center
Slavin, Robert; Smith, Dewi
2009-01-01
Research in fields other than education has found that studies with small sample sizes tend to have larger effect sizes than those with large samples. This article examines the relationship between sample size and effect size in education. It analyzes data from 185 studies of elementary and secondary mathematics programs that met the standards of…
Strategies for Field Sampling When Large Sample Sizes are Required
Technology Transfer Automated Retrieval System (TEKTRAN)
Estimates of prevalence or incidence of infection with a pathogen endemic in a fish population can be valuable information for development and evaluation of aquatic animal health management strategies. However, hundreds of unbiased samples may be required in order to accurately estimate these parame...
7 CFR 52.775 - Sample unit size.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Cherries 1 Sample Unit Size § 52.775 Sample unit size. Compliance with requirements for the size and the..., color, pits, and character—20 ounces of drained cherries. (b) Defects (other than harmless extraneous material)—100 cherries. (c) Harmless extraneous material—The total contents of each container in the...
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
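The second rule in this abstract (choose the sample size minimizing total cost divided by the square root of n) is easy to operationalize. Below is a hedged sketch under an assumed linear cost model, not the authors' own software; for linear costs, a fixed cost F plus c per subject, the rule has the closed form n = F/c.

```python
def best_n(total_cost, n_max=10_000):
    """Sample size minimizing total_cost(n) / sqrt(n), found by direct
    search over n = 1..n_max.  total_cost may be any cost function."""
    return min(range(1, n_max + 1), key=lambda n: total_cost(n) / n ** 0.5)

# Assumed linear cost model: $50,000 fixed + $500 per subject
cost = lambda n: 50_000 + 500 * n
print(best_n(cost))  # 100, matching the closed form n = F/c
```

Because `best_n` only needs a cost function, the same search applies when costs are nonlinear in n, which is where the rule departs from the closed form.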
7 CFR 51.2341 - Sample size for grade determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample size for grade determination. 51.2341 Section..., AND STANDARDS) United States Standards for Grades of Kiwifruit § 51.2341 Sample size for grade determination. For fruit place-packed in tray pack containers, the sample shall consist of the contents of...
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
Estimating optimal sampling unit sizes for satellite surveys
NASA Technical Reports Server (NTRS)
Hallum, C. R.; Perry, C. R., Jr.
1984-01-01
This paper reports on an approach for minimizing data loads associated with satellite-acquired data, while improving the efficiency of global crop area estimates using remotely sensed, satellite-based data. Results of a sampling unit size investigation are given that include closed-form models for both nonsampling and sampling error variances. These models provide estimates of the sampling unit sizes that effect minimal costs. Earlier findings from foundational sampling unit size studies conducted by Mahalanobis, Jessen, Cochran, and others are utilized in modeling the sampling error variance as a function of sampling unit size. A conservative nonsampling error variance model is proposed that is realistic in the remote sensing environment where one is faced with numerous unknown nonsampling errors. This approach permits the sampling unit size selection in the global crop inventorying environment to be put on a more quantitative basis while conservatively guarding against expected component error variances.
A review of software for sample size determination.
Dattalo, Patrick
2009-09-01
The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities. PMID:19696082
7 CFR 52.775 - Sample unit size.
Code of Federal Regulations, 2011 CFR
2011-01-01
... United States Standards for Grades of Canned Red Tart Pitted Cherries 1 Sample Unit Size § 52.775 Sample... drained cherries. (b) Defects (other than harmless extraneous material)—100 cherries. (c)...
40 CFR 80.127 - Sample size guidelines.
Code of Federal Regulations, 2013 CFR
2013-07-01
...) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.127 Sample size guidelines. In performing the attest engagement, the auditor shall sample relevant populations to which agreed-upon procedures will...
Sample Sizes when Using Multiple Linear Regression for Prediction
ERIC Educational Resources Information Center
Knofczynski, Gregory T.; Mundfrom, Daniel
2008-01-01
When using multiple regression for prediction purposes, the issue of minimum required sample size often needs to be addressed. Using a Monte Carlo simulation, models with varying numbers of independent variables were examined and minimum sample sizes were determined for multiple scenarios at each number of independent variables. The scenarios…
7 CFR 52.775 - Sample unit size.
Code of Federal Regulations, 2010 CFR
2010-01-01
Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... United States Standards for Grades of Canned Red Tart Pitted Cherries, Sample Unit Size § 52.775...
Minimum Sample Size Recommendations for Conducting Factor Analyses
ERIC Educational Resources Information Center
Mundfrom, Daniel J.; Shaw, Dale G.; Ke, Tian Lu
2005-01-01
There is no shortage of recommendations regarding the appropriate sample size to use when conducting a factor analysis. Suggested minimums for sample size include from 3 to 20 times the number of variables and absolute ranges from 100 to over 1,000. For the most part, there is little empirical evidence to support these recommendations. This…
Power Analysis and Sample Size Determination in Metabolic Phenotyping.
Blaise, Benjamin J; Correia, Gonçalo; Tin, Adrienne; Young, J Hunter; Vergnaud, Anne-Claire; Lewis, Matthew; Pearce, Jake T M; Elliott, Paul; Nicholson, Jeremy K; Holmes, Elaine; Ebbels, Timothy M D
2016-05-17
Estimation of statistical power and sample size is a key aspect of experimental design. However, in metabolic phenotyping, there is currently no accepted approach for these tasks, in large part due to the unknown nature of the expected effect. In such hypothesis-free science, neither the number nor the class of important analytes, nor the effect size, is known a priori. We introduce a new approach, based on multivariate simulation, which deals effectively with the highly correlated structure and high dimensionality of metabolic phenotyping data. First, a large data set is simulated based on the characteristics of a pilot study investigating a given biomedical issue. An effect of a given size, corresponding either to a discrete (classification) or continuous (regression) outcome, is then added. Different sample sizes are modeled by randomly selecting data sets of various sizes from the simulated data. We investigate different methods for effect detection, including univariate and multivariate techniques. Our framework allows us to investigate the complex relationship between sample size, power, and effect size for real multivariate data sets. For instance, we demonstrate for an example pilot data set that certain features achieve a power of 0.8 for a sample size of 20 samples, or that a cross-validated predictivity Q²Y of 0.8 is reached with an effect size of 0.2 and 200 samples. We exemplify the approach for both nuclear magnetic resonance and liquid chromatography-mass spectrometry data from humans and the model organism C. elegans. PMID:27116637
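The simulation-based logic described above can be sketched in miniature: generate groups with a specified standardized effect, test, repeat, and report the rejection rate as the power estimate. This toy version uses a single variable and a two-sample z-test with known unit variance, not the authors' multivariate framework:

```python
import random
from statistics import NormalDist

def simulated_power(effect_size, n_per_group, reps=500, alpha=0.05, seed=1):
    """Estimate power by Monte Carlo: the fraction of simulated experiments
    in which a two-sample z-test rejects at level alpha."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        diff = sum(b) / n_per_group - sum(a) / n_per_group
        se = (2.0 / n_per_group) ** 0.5  # known unit variance in both groups
        if abs(diff / se) > z_crit:
            rejections += 1
    return rejections / reps
```

Sweeping `n_per_group` over a grid and plotting the resulting power estimates against n is the univariate analogue of the power curves the paper builds for metabolic phenotyping data.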
Sample size re-estimation in a breast cancer trial
Hade, Erinn; Jarjoura, David; Wei, Lai
2016-01-01
Background During the recruitment phase of a randomized breast cancer trial investigating time to recurrence, we found evidence that the failure probabilities used at the design stage were too high. Since most of the methodological research involving sample size re-estimation has focused on normal or binary outcomes, we developed a method that preserves blinding to re-estimate sample size in our time-to-event trial. Purpose A mistakenly high estimate of the failure rate at the design stage may reduce power unacceptably for a clinically important hazard ratio. We describe an ongoing trial and an application of a sample size re-estimation method that combines current trial data with prior trial data, or assumes a parametric model, to re-estimate failure probabilities in a blinded fashion. Methods Using our current blinded trial data and additional information from prior studies, we re-estimate the failure probabilities to be used in sample size re-calculation. We employ bootstrap resampling to quantify uncertainty in the re-estimated sample sizes. Results At the time of re-estimation, data from 278 patients were available, averaging 1.2 years of follow-up. Using either method, we estimated a required sample size increase of zero for the hazard ratio proposed at the design stage. We show that our method of blinded sample size re-estimation preserves the Type I error rate. We show that when the initial guess of the failure probabilities is correct, the median increase in sample size is zero. Limitations Either some prior knowledge of an appropriate survival distribution shape or prior data is needed for re-estimation. Conclusions In trials with a lengthy accrual period, blinded sample size re-estimation near the end of the planned accrual period should be considered. In our examples, when assumptions about failure probabilities and HRs are correct, the methods usually do not increase the sample size, or increase it only slightly. PMID:20392786
Methods for sample size determination in cluster randomized trials
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-01-01
Background: The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. Methods: We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. Results: We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. Conclusions: There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. PMID:26174515
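The simplest approach mentioned above, inflating an individually randomized sample size by a design effect, can be sketched directly. With average cluster size m and intracluster correlation coefficient ρ, the standard design effect is 1 + (m − 1)ρ; the individually randomized n below uses a normal approximation for a two-sample comparison of means, a common but simplified baseline:

```python
import math
from statistics import NormalDist

def n_individual(effect_size, alpha=0.05, power=0.80):
    """Per-arm n for a two-sample comparison of means (normal approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return math.ceil(2 * z**2 / effect_size**2)

def n_cluster_randomized(effect_size, cluster_size, icc, alpha=0.05, power=0.80):
    """Inflate the individually randomized per-arm n by the design effect
    1 + (m - 1) * icc to account for randomization by cluster."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual(effect_size, alpha, power) * deff)
```

For example, with a standardized effect of 0.5, clusters of 20, and an ICC of 0.05, the design effect is 1.95, nearly doubling the required per-arm sample size. As the abstract notes, this simple inflation assumes equal cluster sizes and no attrition; the review covers methods for when those assumptions fail.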
Anand, Suraj P; Murray, Sharon C; Koch, Gary G
2010-05-01
The cost for conducting a "thorough QT/QTc study" is substantial and an unsuccessful outcome of the study can be detrimental to the safety profile of the drug, so sample size calculations play a very important role in ensuring adequate power for a thorough QT study. Current literature offers some help in designing such studies, but these methods have limitations and mostly apply only in the context of linear mixed models with compound symmetry covariance structure. It is not evident that such models can satisfactorily be employed to represent all kinds of QTc data, and the existing literature inadequately addresses whether there is a change in sample size and power for more general covariance structures for the linear mixed models. We assess the use of some of the existing methods to design a thorough QT study through data arising from a GlaxoSmithKline (GSK)-conducted thorough QT study, and explore newer models for sample size calculation. We also provide a new method to calculate the sample size required to detect assay sensitivity with adequate power. PMID:20358438
Sample Size Requirements for Comparing Two Alpha Coefficients.
ERIC Educational Resources Information Center
Bonnett, Douglas G.
2003-01-01
Derived general formulas to determine the sample size requirements for hypothesis testing with desired power and interval estimation with desired precision. Illustrated the approach with the example of a screening test for adolescent attention deficit disorder. (SLD)
7 CFR 52.803 - Sample unit size.
Code of Federal Regulations, 2011 CFR
2011-01-01
... PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1 United States Standards for Grades of Frozen Red Tart Pitted Cherries Sample Unit Size § 52.803...
7 CFR 52.803 - Sample unit size.
Code of Federal Regulations, 2012 CFR
2012-01-01
... PROCESSED FRUITS AND VEGETABLES, PROCESSED PRODUCTS THEREOF, AND CERTAIN OTHER PROCESSED FOOD PRODUCTS 1 United States Standards for Grades of Frozen Red Tart Pitted Cherries Sample Unit Size § 52.803...
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
The Sample Size Needed for the Trimmed "t" Test when One Group Size Is Fixed
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2009-01-01
The sample size determination is an important issue for planning research. However, limitations in size have seldom been discussed in the literature. Thus, how to allocate participants into different treatment groups to achieve the desired power is a practical issue that still needs to be addressed when one group size is fixed. The authors focused…
Two-stage chain sampling inspection plans with different sample sizes in the two stages
NASA Technical Reports Server (NTRS)
Stephens, K. S.; Dodge, H. F.
1976-01-01
A further generalization of the family of 'two-stage' chain sampling inspection plans is developed, namely the use of different sample sizes in the two stages. Evaluation of the operating characteristics is accomplished by the Markov chain approach of the earlier work, modified to account for the different sample sizes. Markov chains for a number of plans are illustrated and several algebraic solutions are developed. Since these plans involve a variable amount of sampling, an evaluation of the average sampling number (ASN) is developed. A number of OC curves and ASN curves are presented. Some comparisons with plans having only one sample size are presented and indicate that improved discrimination is achieved by the two-sample-size plans.
Sample size calculation for the proportional hazards cure model.
Wang, Songfeng; Zhang, Jiajia; Lu, Wenbin
2012-12-20
In clinical trials with time-to-event endpoints, it is not uncommon to see a significant proportion of patients being cured (or becoming long-term survivors), as in trials for non-Hodgkin lymphoma. The popular sample size formula derived under the proportional hazards (PH) model may not be appropriate for designing a survival trial with a cure fraction, because the PH model assumption may be violated. To account for a cure fraction, the PH cure model is widely used in practice, where a PH model is used for the survival times of uncured patients and a logistic distribution is used for the probability of patients being cured. In this paper, we develop a sample size formula on the basis of the PH cure model by investigating the asymptotic distributions of the standard weighted log-rank statistics under the null and local alternative hypotheses. The derived sample size formula under the PH cure model is more flexible because it can be used to test differences in short-term survival and/or the cure fraction. Furthermore, we also investigate, as numerical examples, the impacts of accrual methods and the durations of the accrual and follow-up periods on sample size calculation. The results show that ignoring the cure rate in the sample size calculation can lead to either underpowered or overpowered studies. We evaluate the performance of the proposed formula through simulation studies and provide an example to illustrate its application using data from a melanoma trial. PMID:22786805
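For reference, the standard PH baseline that the paper generalizes is Schoenfeld's formula for the required number of events under a log-rank test. The sketch below implements that well-known baseline, not the cure-model formula derived in the paper:

```python
import math
from statistics import NormalDist

def required_events_logrank(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld's formula: events needed for a two-sided log-rank test
    with 1:1 allocation under a proportional hazards alternative."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return math.ceil(4 * z**2 / math.log(hazard_ratio) ** 2)
```

The total sample size is then the required events divided by the overall failure probability, which is precisely where an unrecognized cure fraction biases the design: a cured subpopulation lowers the failure probability, so a design that ignores it understates the number of patients needed.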
Sample sizes in dosage investigational clinical trials: a systematic evaluation.
Huang, Ji-Han; Su, Qian-Min; Yang, Juan; Lv, Ying-Hua; He, Ying-Chun; Chen, Jun-Chao; Xu, Ling; Wang, Kun; Zheng, Qing-Shan
2015-01-01
The main purpose of investigational phase II clinical trials is to explore indications and effective doses. However, as yet, there is no clear rule and no related published literature about the precise suitable sample sizes to be used in phase II clinical trials. To explore this, we searched for clinical trials in the ClinicalTrials.gov registry using the keywords "dose-finding" or "dose-response" and "Phase II". The time span of the search was September 20, 1999, to December 31, 2013. A total of 2103 clinical trials were finally included in our review. Regarding sample sizes, 1,156 clinical trials had <40 participants in each group, accounting for 55.0% of the studies reviewed, and only 17.2% of the studies reviewed had >100 patient cases in a single group. Sample sizes used in parallel study designs tended to be larger than those of crossover designs (median sample size 151 and 37, respectively). In conclusion, in the earlier phases of drug research and development, there are a variety of designs for dosage investigational studies. The sample size of each trial should be comprehensively considered and selected according to the study design and purpose. PMID:25609916
Aircraft studies of size-dependent aerosol sampling through inlets
NASA Technical Reports Server (NTRS)
Porter, J. N.; Clarke, A. D.; Ferry, G.; Pueschel, R. F.
1992-01-01
Representative measurement of aerosol from aircraft-aspirated systems requires special efforts in order to maintain near isokinetic sampling conditions, estimate aerosol losses in the sample system, and obtain a measurement of sufficient duration to be statistically significant for all sizes of interest. This last point is especially critical for aircraft measurements which typically require fast response times while sampling in clean remote regions. This paper presents size-resolved tests, intercomparisons, and analysis of aerosol inlet performance as determined by a custom laser optical particle counter. Measurements discussed here took place during the Global Backscatter Experiment (1988-1989) and the Central Pacific Atmospheric Chemistry Experiment (1988). System configurations are discussed including (1) nozzle design and performance, (2) system transmission efficiency, (3) nonadiabatic effects in the sample line and its effect on the sample-line relative humidity, and (4) the use and calibration of a virtual impactor.
Sample Size Determination for One- and Two-Sample Trimmed Mean Tests
ERIC Educational Resources Information Center
Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng
2008-01-01
Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…
Sample size considerations for livestock movement network data.
Pfeiffer, Caitlin N; Firestone, Simon M; Campbell, Angus J D; Larsen, John W A; Stevenson, Mark A
2015-12-01
The movement of animals between farms contributes to infectious disease spread in production animal populations, and is increasingly investigated with social network analysis methods. Tangible outcomes of this work include the identification of high-risk premises for targeting surveillance or control programs. However, knowledge of the effect of sampling or incomplete network enumeration on these studies is limited. In this study, a simulation algorithm is presented that provides an estimate of required sampling proportions based on predicted network size, density and degree value distribution. The algorithm may be applied a priori to ensure network analyses based on sampled or incomplete data provide population estimates of known precision. Results demonstrate that, for network degree measures, sample size requirements vary with sampling method. The repeatability of the algorithm output under constant network and sampling criteria was found to be consistent for networks with at least 1000 nodes (in this case, farms). Where simulated networks can be constructed to closely mimic the true network in a target population, this algorithm provides a straightforward approach to determining sample size under a given sampling procedure for a network measure of interest. It can be used to tailor study designs of known precision, for investigating specific livestock movement networks and their impact on disease dissemination within populations. PMID:26276397
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power. PMID:27073717
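The five components listed above are exactly the inputs to a standard a priori calculation, and fixing any four determines the fifth. As a minimal sketch of that interdependence (using a normal approximation for a two-sample comparison of means; exact t-based software gives slightly different values), here is power solved as a function of effect size and sample size:

```python
from statistics import NormalDist

def achieved_power(effect_size, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample z-test of means for a given
    standardized effect size and per-group sample size."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return nd.cdf(noncentrality - z_crit)
```

Holding the effect size at 0.5 and raising n per group from 63 to 100 lifts power from roughly 0.80 to above 0.94, illustrating the trade-off the framework asks researchers to reason about.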
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.
1998-01-01
Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or for Chironomidae and Musculium in both strata, given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites
Approximate sample sizes required to estimate length distributions
Miranda, L.E.
2007-01-01
The sample sizes required to estimate fish length were determined by bootstrapping from reference length distributions. Depending on population characteristics and species-specific maximum lengths, 1-cm length-frequency histograms required 375-1,200 fish to estimate within 10% with 80% confidence, 2.5-cm histograms required 150-425 fish, proportional stock density required 75-140 fish, and mean length required 75-160 fish. In general, smaller species, smaller populations, populations with higher mortality, and simpler length statistics required fewer samples. Indices that require low sample sizes may be suitable for monitoring population status, and when large changes in length are evident, additional sampling effort may be allocated to more precisely define length status with more informative estimators. © Copyright 2007 by the American Fisheries Society.
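The bootstrapping logic is straightforward to sketch: resample n fish from a reference sample many times, record how often the resampled mean length falls within 10% of the reference mean, and take the smallest n whose coverage reaches 80%. This is a toy reconstruction of the design, not Miranda's implementation:

```python
import random

def coverage(lengths, n, reps=1000, tol=0.10, seed=7):
    """Proportion of bootstrap resamples of size n whose mean length lies
    within +/- tol of the reference sample's mean."""
    rng = random.Random(seed)
    target = sum(lengths) / len(lengths)
    hits = 0
    for _ in range(reps):
        resample = [rng.choice(lengths) for _ in range(n)]
        if abs(sum(resample) / n - target) <= tol * target:
            hits += 1
    return hits / reps

def n_for_mean_length(lengths, conf=0.80, n_max=1000):
    """Smallest resample size (searched in steps of 5) whose coverage
    reaches the desired confidence level."""
    for n in range(5, n_max + 1, 5):
        if coverage(lengths, n) >= conf:
            return n
    return n_max
```

Applying the same search to a 1-cm length-frequency histogram criterion instead of the mean reproduces the paper's finding that distribution-level targets demand far larger samples than single summary statistics.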
Rubow, K.L.; Marple, V.A.; Cantrell, B.K.
1995-12-31
Researchers are becoming increasingly concerned with airborne particulate matter, not only in the respirable size range, but also in larger size ranges. The International Standards Organization (ISO) and the American Conference of Governmental Industrial Hygienists (ACGIH) have developed standards for "inhalable" and "thoracic" particulate matter. These require sampling particles up to approximately 100 μm in diameter. The size distribution and mass concentration of airborne particulate matter have been measured in air quality studies of the working sections of more than 20 underground mines by University of Minnesota and U.S. Bureau of Mines personnel. Measurements have been made in more than 15 coal mines and five metal/nonmetal mines over the past eight years. Although mines using diesel-powered equipment were emphasized, mines using all-electric powered equipment were also included. Particle sampling was conducted at fixed locations, i.e., the mine portal, ventilation intake entry, haulageways, ventilation return entry, and near raincars, bolters, and load-haul-dump equipment. The primary sampling device used was the MSP Model 100 micro-orifice uniform deposit impactor (MOUDI). The MOUDI samples at a flow rate of 30 LPM and provides particle size distribution information for particles primarily in the 0.1 to 18 μm size range. Up to five MOUDI samplers were simultaneously deployed at the fixed locations. Sampling times were typically 4 to 6 hrs/shift. Results from these field studies have been summarized to determine the average size distributions and mass concentrations at various locations in the mine sections sampled. From these average size distributions, predictions are made regarding the expected levels of respirable and thoracic mass concentrations as defined by various health-based size-selective aerosol-sampling criteria.
A simulation study of sample size for DNA barcoding.
Luo, Arong; Lan, Haiqiang; Ling, Cheng; Zhang, Aibing; Shi, Lei; Ho, Simon Y W; Zhu, Chaodong
2015-12-01
For some groups of organisms, DNA barcoding can provide a useful tool in taxonomy, evolutionary biology, and biodiversity assessment. However, the efficacy of DNA barcoding depends on the degree of sampling per species, because a large enough sample size is needed to provide a reliable estimate of genetic polymorphism and for delimiting species. We used a simulation approach to examine the effects of sample size on four estimators of genetic polymorphism related to DNA barcoding: mismatch distribution, nucleotide diversity, the number of haplotypes, and maximum pairwise distance. Our results showed that mismatch distributions derived from subsamples of ≥20 individuals usually bore a close resemblance to that of the full dataset. Estimates of nucleotide diversity from subsamples of ≥20 individuals tended to be bell-shaped around that of the full dataset, whereas estimates from smaller subsamples were not. As expected, greater sampling generally led to an increase in the number of haplotypes. We also found that subsamples of ≥20 individuals allowed a good estimate of the maximum pairwise distance of the full dataset, while smaller ones were associated with a high probability of underestimation. Overall, our study confirms the expectation that larger samples are beneficial for the efficacy of DNA barcoding and suggests that a minimum sample size of 20 individuals is needed in practice for each population. PMID:26811761
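The core subsampling experiment (haplotypes recovered as a function of sample size) can be sketched as a simple rarefaction by resampling without replacement. Haplotypes are represented here as plain labels; this illustrates the study design, not the authors' pipeline:

```python
import random

def mean_haplotypes(haplotypes, n, reps=500, seed=3):
    """Average number of distinct haplotypes found in random subsamples of
    size n, drawn without replacement from the full sample."""
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        total += len(set(rng.sample(haplotypes, n)))
    return total / reps
```

Plotting this curve for increasing n shows the diminishing returns the paper describes: most of the haplotype diversity of a typical population sample is recovered well before the full sample size, which is the behavior underlying the suggested minimum of about 20 individuals per population.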
Sample Size Bias in Judgments of Perceptual Averages
ERIC Educational Resources Information Center
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.
2014-01-01
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…
Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests.
Duncanson, L; Rourke, O; Dubayah, R
2015-01-01
Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 to more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from -4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
Sample Size Tables, "t" Test, and a Prevalent Psychometric Distribution.
ERIC Educational Resources Information Center
Sawilowsky, Shlomo S.; Hillman, Stephen B.
Psychology studies often have low statistical power. Sample size tables, as given by J. Cohen (1988), may be used to increase power, but they are based on Monte Carlo studies of relatively "tame" mathematical distributions, as compared to psychology data sets. In this study, Monte Carlo methods were used to investigate Type I and Type II error…
An Investigation of Sample Size Splitting on ATFIND and DIMTEST
ERIC Educational Resources Information Center
Socha, Alan; DeMars, Christine E.
2013-01-01
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
The Fisher-Yates Exact Test and Unequal Sample Sizes
ERIC Educational Resources Information Center
Johnson, Edgar M.
1972-01-01
A computational short cut suggested by Feldman and Klinger for the one-sided Fisher-Yates exact test is clarified and is extended to the calculation of probability values for certain two-sided tests when sample sizes are unequal. (Author)
Sampling and surface reconstruction with adaptive-size meshes
NASA Astrophysics Data System (ADS)
Huang, Wen-Chen; Goldgof, Dmitry B.
1992-03-01
This paper presents a new approach to sampling and surface reconstruction which uses physically based models. We introduce adaptive-size meshes which automatically update the size of the meshes as the distance between the nodes changes. We have applied the adaptive-size algorithm to the following three applications: (1) sampling of intensity data; (2) surface reconstruction of range data; (3) surface reconstruction of 3-D computed tomography left ventricle data. The LV data was acquired by a 3-D computed tomography (CT) scanner. It was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 × 128 × 118) images taken through the heart cycle.
ERIC Educational Resources Information Center
Smith, Margaret H.
2004-01-01
Unless the sample encompasses a substantial portion of the population, the standard error of an estimator depends on the size of the sample, but not the size of the population. This is a crucial statistical insight that students find very counterintuitive. After trying several ways of convincing students of the validity of this principle, I have…
Effective Sample Size in Diffuse Reflectance Near-IR Spectrometry.
Berntsson, O; Burger, T; Folestad, S; Danielsson, L G; Kuhn, J; Fricke, J
1999-02-01
Two independent methods for determination of the effectively sampled mass per unit area are presented and compared. The first method combines directional-hemispherical transmittance and reflectance measurements. A three-flux approximation of the equation of radiative transfer is used to separately determine the specific absorption and scattering coefficients of the powder material, which subsequently are used to determine the effective sample size. The second method uses a number of diffuse reflectance measurements on layers of controlled powder thickness in an empirical approach. The two methods are shown to agree well and thus confirm each other. From the determination of the effective sample size at each measured wavelength in the visible-NIR region for two different model powder materials, large differences were found, both between the two analyzed powders and between different wavelengths. As an example, the effective sample size ranges between 15 and 70 mg/cm² for microcrystalline cellulose and between 70 and 300 mg/cm² for film-coated pellets. However, the contribution to the spectral information obtained from a certain layer decreases rapidly with increasing distance from the powder surface. With both methods, the extent of contribution from various depths of a powder sample to the visible-NIR diffuse reflection signal is characterized. This information is valuable for validation of analytical applications of diffuse reflectance visible-NIR spectrometry. PMID:21662719
Detecting Neuroimaging Biomarkers for Psychiatric Disorders: Sample Size Matters.
Schnack, Hugo G; Kahn, René S
2016-01-01
In a recent review, it was suggested that much larger cohorts are needed to prove the diagnostic value of neuroimaging biomarkers in psychiatry. While within a sample, an increase of diagnostic accuracy of schizophrenia (SZ) with number of subjects (N) has been shown, the relationship between N and accuracy is completely different between studies. Using data from a recent meta-analysis of machine learning (ML) in imaging SZ, we found that while low-N studies can reach 90% and higher accuracy, above N/2 = 50 the maximum accuracy achieved steadily drops to below 70% for N/2 > 150. We investigate the role N plays in the wide variability in accuracy results in SZ studies (63-97%). We hypothesize that the underlying cause of the decrease in accuracy with increasing N is sample heterogeneity. While smaller studies more easily include a homogeneous group of subjects (strict inclusion criteria are easily met; subjects live close to study site), larger studies inevitably need to relax the criteria/recruit from large geographic areas. A SZ prediction model based on a heterogeneous group of patients with presumably a heterogeneous pattern of structural or functional brain changes will not be able to capture the whole variety of changes, thus being limited to patterns shared by most patients. In addition to heterogeneity (sample size), we investigate other factors influencing accuracy and introduce a ML effect size. We derive a simple model of how the different factors, such as sample heterogeneity and study setup determine this ML effect size, and explain the variation in prediction accuracies found from the literature, both in cross-validation and independent sample testing. From this, we argue that smaller-N studies may reach high prediction accuracy at the cost of lower generalizability to other samples. Higher-N studies, on the other hand, will have more generalization power, but at the cost of lower accuracy. In conclusion, when comparing results from different
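The link between an "ML effect size" and attainable accuracy can be sketched with a textbook result rather than the authors' model: for two equal-prior Gaussian classes with common variance whose means differ by d standard deviations, the optimal classifier achieves accuracy Φ(d/2). The illustrative values of d below are assumptions, not figures from the paper; shrinking d (as sample heterogeneity dilutes the shared pattern) caps the achievable accuracy.

```python
from statistics import NormalDist

def best_accuracy(d):
    """Optimal accuracy for two equal-prior, equal-variance Gaussian
    classes whose means differ by d standard deviations: Phi(d / 2)."""
    return NormalDist().cdf(d / 2)

# A heterogeneous sample that reduces the shared effect from d=2.0
# toward d=0.5 pushes the accuracy ceiling toward chance level.
for d in (2.0, 1.5, 1.0, 0.5):
    print(f"d={d:.1f}  max accuracy={best_accuracy(d):.3f}")
```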
(Sample) Size Matters: Defining Error in Planktic Foraminiferal Isotope Measurement
NASA Astrophysics Data System (ADS)
Lowery, C.; Fraass, A. J.
2015-12-01
Planktic foraminifera have been used as carriers of stable isotopic signals since the pioneering work of Urey and Emiliani. In those heady days, instrumental limitations required hundreds of individual foraminiferal tests to return a usable value. This had the fortunate side-effect of smoothing any seasonal to decadal changes within the planktic foram population, which generally turns over monthly, removing that potential noise from each sample. With the advent of more sensitive mass spectrometers, smaller sample sizes have now become standard. This has been a tremendous advantage, allowing longer time series with the same investment of time and energy. Unfortunately, the use of smaller numbers of individuals to generate a data point has lessened the amount of time averaging in the isotopic analysis and decreased precision in paleoceanographic datasets. With fewer individuals per sample, the differences between individual specimens will result in larger variation, and therefore error, and less precise values for each sample. Unfortunately, most workers (the authors included) do not make a habit of reporting the error associated with their sample size. We have created an open-source model in R to quantify the effect of sample sizes under various realistic and highly modifiable parameters (calcification depth, diagenesis in a subset of the population, improper identification, vital effects, mass, etc.). For example, a sample in which only 1 in 10 specimens is diagenetically altered can be off by >0.3‰ δ18O VPDB or ~1°C. Additionally, and perhaps more importantly, we show that under unrealistically ideal conditions (perfect preservation, etc.) it takes ~5 individuals from the mixed-layer to achieve an error of less than 0.1‰. Including just the unavoidable vital effects inflates that number to ~10 individuals to achieve ~0.1‰. Combining these errors with the typical machine error inherent in mass spectrometers makes this a vital consideration moving forward.
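The core of such a model is simple Monte Carlo: pool n individuals, each offset by a random "vital effect," and ask how the error of the pooled mean shrinks with n. The authors' model is in R and far richer; the Python sketch below uses a single assumed Gaussian vital-effect spread (0.3‰, an illustrative value) and reproduces the familiar 1/√n behavior, with ~10 individuals giving roughly 0.1‰.

```python
import math
import random

def sample_error(n_individuals, vital_sd=0.3, n_trials=4000, seed=7):
    """Monte Carlo 1-sigma error (permil) of the mean d18O of
    n pooled individuals, each carrying a Gaussian 'vital effect'
    with sd = vital_sd (illustrative value, not from the paper)."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_trials):
        vals = [rng.gauss(0.0, vital_sd) for _ in range(n_individuals)]
        means.append(sum(vals) / n_individuals)
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / (n_trials - 1))

for n in (1, 5, 10, 20):
    print(f"n={n:2d}  1-sigma error ~ {sample_error(n):.3f} permil")
```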
Rock sampling. [method for controlling particle size distribution
NASA Technical Reports Server (NTRS)
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Air sampling filtration media: Collection efficiency for respirable size-selective sampling
Soo, Jhy-Charm; Monaghan, Keenan; Lee, Taekhee; Kashon, Mike; Harper, Martin
2016-01-01
The collection efficiencies of commonly used membrane air sampling filters in the ultrafine particle size range were investigated. Mixed cellulose ester (MCE; 0.45, 0.8, 1.2, and 5 μm pore sizes), polycarbonate (0.4, 0.8, 2, and 5 μm pore sizes), polytetrafluoroethylene (PTFE; 0.45, 1, 2, and 5 μm pore sizes), polyvinyl chloride (PVC; 0.8 and 5 μm pore sizes), and silver membrane (0.45, 0.8, 1.2, and 5 μm pore sizes) filters were exposed to polydisperse sodium chloride (NaCl) particles in the size range of 10–400 nm. Test aerosols were nebulized and introduced into a calm air chamber through a diffusion dryer and aerosol neutralizer. The testing filters (37 mm diameter) were mounted in a conductive polypropylene filter-holder (cassette) within a metal testing tube. The experiments were conducted at flow rates between 1.7 and 11.2 l min−1. The particle size distributions of NaCl challenge aerosol were measured upstream and downstream of the test filters by a scanning mobility particle sizer (SMPS). Three different filters of each type with at least three repetitions for each pore size were tested. In general, the collection efficiency varied with airflow, pore size, and sampling duration. In addition, both collection efficiency and pressure drop increased with decreased pore size and increased sampling flow rate, but they differed among filter types and manufacturer. The present study confirmed that the MCE, PTFE, and PVC filters have a relatively high collection efficiency for challenge particles much smaller than their nominal pore size and are considerably more efficient than polycarbonate and silver membrane filters, especially at larger nominal pore sizes. PMID:26834310
Sample size determination for longitudinal designs with binary response.
Kapur, Kush; Bhaumik, Runa; Tang, X Charlene; Hur, Kwan; Reda, Domenic J; Bhaumik, Dulal K
2014-09-28
In this article, we develop appropriate statistical methods for determining the required sample size while comparing the efficacy of an intervention to a control with repeated binary response outcomes. Our proposed methodology incorporates the complexity of the hierarchical nature of underlying designs and provides solutions when varying attrition rates are present over time. We explore how the between-subject variability and attrition rates jointly influence the computation of sample size formula. Our procedure also shows how efficient estimation methods play a crucial role in power analysis. A practical guideline is provided when information regarding individual variance component is unavailable. The validity of our methods is established by extensive simulation studies. Results are illustrated with the help of two randomized clinical trials in the areas of contraception and insomnia. PMID:24820424
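For the simplest special case of a binary endpoint, the per-group sample size for comparing two independent proportions can be computed from the normal approximation. This sketch deliberately ignores the repeated measurements, hierarchical structure, and attrition that the article's methodology handles; the proportions and power below are assumed example values.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test comparing two
    independent proportions (simple normal approximation; no
    adjustment for clustering or attrition over time)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = z.inv_cdf(power)           # quantile for the target power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Example: detect 50% vs 40% response with 80% power at alpha = 0.05.
print(n_per_group(0.50, 0.40))
```

Adjustments for repeated binary outcomes and dropout, as developed in the article, inflate this baseline figure.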
Effect of sample size on deformation in amorphous metals
NASA Astrophysics Data System (ADS)
Volkert, C. A.; Donohue, A.; Spaepen, F.
2008-04-01
Uniaxial compression tests were performed on micron-sized columns of amorphous PdSi to investigate the effect of sample size on deformation behavior. Cylindrical columns with diameters between 8 μm and 140 nm were fabricated from sputtered amorphous Pd77Si23 films on Si substrates by focused ion beam machining, and compression tests were performed with a nanoindenter outfitted with a flat diamond punch. The columns exhibited elastic behavior until they yielded, either by shear band formation on a plane at 50° to the loading axis or by homogeneous deformation. Shear band formation occurred only in columns with diameters larger than 400 nm. The change in deformation mechanism from shear band formation to homogeneous deformation with decreasing column size is attributed to a required critical strained volume for shear band formation.
GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices
Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.
2014-01-01
Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
Tooth Wear Prevalence and Sample Size Determination : A Pilot Study
Abd. Karim, Nama Bibi Saerah; Ismail, Noorliza Mastura; Naing, Lin; Ismail, Abdul Rashid
2008-01-01
Tooth wear is the non-carious loss of tooth tissue, which results from three processes namely attrition, erosion and abrasion. These can occur in isolation or simultaneously. Very mild tooth wear is a physiological effect of aging. This study aims to estimate the prevalence of tooth wear among 16-year old Malay school children and determine a feasible sample size for further study. Fifty-five subjects were examined clinically, followed by the completion of self-administered questionnaires. Questionnaires consisted of socio-demographic and associated variables for tooth wear obtained from the literature. The Smith and Knight tooth wear index was used to chart tooth wear. Other oral findings were recorded using the WHO criteria. A software programme was used to determine pathological tooth wear. An approximately equal ratio of males to females was involved. It was found that 18.2% of subjects had no tooth wear, 63.6% had very mild tooth wear, 10.9% mild tooth wear, 5.5% moderate tooth wear and 1.8% severe tooth wear. In conclusion, 18.2% of subjects were deemed to have pathological tooth wear (mild, moderate & severe). Exploration with all associated variables gave a sample size ranging from 560–1715. The final sample size for further study greatly depends on available time and resources. PMID:22589636
Improving Microarray Sample Size Using Bootstrap Data Combination
Phan, John H.; Moffitt, Richard A.; Barrett, Andrea B.; Wang, May D.
2016-01-01
Microarray technology has enabled us to simultaneously measure the expression of thousands of genes. Using this high-throughput technology, we can examine subtle genetic changes between biological samples and build predictive models for clinical applications. Although microarrays have dramatically increased the rate of data collection, sample size is still a major issue when selecting features. Previous methods show that combining multiple microarray datasets improves feature selection using simple methods such as fold change. We propose a wrapper-based gene selection technique that combines bootstrap estimated classification errors for individual genes across multiple datasets and reduces the contribution of datasets with high variance. We use the bootstrap because it is an unbiased estimator of classification error that is also effective for small sample data. Coupled with data combination across multiple datasets, we show that our meta-analytic approach improves the biological relevance of gene selection using prostate and renal cancer microarray data. PMID:19164001
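The bootstrap error estimate at the heart of such wrapper-based selection can be sketched on a toy scale. The code below is not the authors' pipeline: it estimates the out-of-bag bootstrap error of a single-feature midpoint-threshold classifier (a stand-in for a per-gene classifier) on synthetic data, with all data-generating parameters assumed for illustration.

```python
import random

def oob_bootstrap_error(values, labels, n_boot=200, seed=3):
    """Out-of-bag bootstrap error of a midpoint-threshold classifier
    on one feature (a toy stand-in for a per-gene classifier)."""
    rng = random.Random(seed)
    n = len(values)
    errs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]     # bootstrap sample
        oob = set(range(n)) - set(idx)                 # held-out cases
        if not oob:
            continue
        # "Train": class means from the bootstrap sample, threshold midway.
        m0 = [values[i] for i in idx if labels[i] == 0]
        m1 = [values[i] for i in idx if labels[i] == 1]
        if not m0 or not m1:
            continue
        thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
        wrong = sum(1 for i in oob if (values[i] > thr) != (labels[i] == 1))
        errs.append(wrong / len(oob))
    return sum(errs) / len(errs)

# Synthetic two-class data: class 1 shifted upward by 2 sd (illustrative).
rng = random.Random(0)
vals = [rng.gauss(0, 1) for _ in range(40)] + [rng.gauss(2, 1) for _ in range(40)]
labs = [0] * 40 + [1] * 40
print(f"bootstrap error ~ {oob_bootstrap_error(vals, labs):.3f}")
```

In the meta-analytic setting described above, such per-gene error estimates would be computed on each dataset and combined, down-weighting high-variance datasets.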
ERIC Educational Resources Information Center
Lawson, Chris A.; Fisher, Anna V.
2011-01-01
Developmental studies have provided mixed evidence with regard to the question of whether children consider sample size and sample diversity in their inductive generalizations. Results from four experiments with 105 undergraduates, 105 school-age children (M = 7.2 years), and 105 preschoolers (M = 4.9 years) showed that preschoolers made a higher…
Decadal predictive skill assessment - ensemble and hindcast sample size impact
NASA Astrophysics Data System (ADS)
Sienz, Frank; Müller, Wolfgang; Pohlmann, Holger
2015-04-01
Hindcast (retrospective prediction) experiments have to be performed to validate decadal prediction systems. These are necessarily restricted in number due to computational constraints. From weather and seasonal prediction it is known that the ensemble size is crucial. A similar dependency is likely for decadal predictions, but differences are expected due to the differing time-scales of the involved processes and the longer prediction horizon. It is shown here that the ensemble and hindcast sample size have a large impact on the uncertainty assessment of the ensemble mean, as well as on the detection of prediction skill. For that purpose a conceptual model is developed, which enables the systematic analysis of statistical properties and their dependencies in a framework close to that of real decadal predictions. In addition, a set of extended-range hindcast experiments has been undertaken, covering the entire 20th century.
Efficient Coalescent Simulation and Genealogical Analysis for Large Sample Sizes
Kelleher, Jerome; Etheridge, Alison M; McVean, Gilean
2016-01-01
A central challenge in the analysis of genetic variation is to provide realistic genome simulation across millions of samples. Present day coalescent simulations do not scale well, or use approximations that fail to capture important long-range linkage properties. Analysing the results of simulations also presents a substantial challenge, as current methods to store genealogies consume a great deal of space, are slow to parse and do not take advantage of shared structure in correlated trees. We solve these problems by introducing sparse trees and coalescence records as the key units of genealogical analysis. Using these tools, exact simulation of the coalescent with recombination for chromosome-sized regions over hundreds of thousands of samples is possible, and substantially faster than present-day approximate methods. We can also analyse the results orders of magnitude more quickly than with existing methods. PMID:27145223
Automated sampling assessment for molecular simulations using the effective sample size
Zhang, Xin; Bhatt, Divesh; Zuckerman, Daniel M.
2010-01-01
To quantify the progress in the development of algorithms and forcefields used in molecular simulations, a general method for the assessment of the sampling quality is needed. Statistical mechanics principles suggest the populations of physical states characterize equilibrium sampling in a fundamental way. We therefore develop an approach for analyzing the variances in state populations, which quantifies the degree of sampling in terms of the effective sample size (ESS). The ESS estimates the number of statistically independent configurations contained in a simulated ensemble. The method is applicable to both traditional dynamics simulations as well as more modern (e.g., multicanonical) approaches. Our procedure is tested in a variety of systems from toy models to atomistic protein simulations. We also introduce a simple automated procedure to obtain approximate physical states from dynamic trajectories: this allows sample-size estimation in systems for which physical states are not known in advance. PMID:21221418
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
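For a scalar parameter, the autocorrelation-based ESS that this work extends to tree topologies is straightforward to compute: ESS = N / (1 + 2·Σρ_k), summing positive-lag autocorrelations. The sketch below is a generic illustration on a synthetic AR(1) chain, not the tree-topology method of the paper; the truncation rule (stop at the first non-positive autocorrelation) is one common convention among several.

```python
import random

def autocorr(x, k):
    """Lag-k sample autocorrelation of the sequence x."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    cov = sum((x[i] - mu) * (x[i + k] - mu) for i in range(n - k)) / n
    return cov / var

def ess(x, max_lag=None):
    """ESS = N / (1 + 2 * sum of positive-lag autocorrelations),
    truncated at the first non-positive autocorrelation."""
    n = len(x)
    max_lag = max_lag or n // 2
    s = 0.0
    for k in range(1, max_lag):
        r = autocorr(x, k)
        if r <= 0:
            break
        s += r
    return n / (1 + 2 * s)

# AR(1) chain with phi = 0.9: theory gives ESS ~ N*(1-phi)/(1+phi) ~ N/19.
rng = random.Random(42)
phi, x = 0.9, [0.0]
for _ in range(20000):
    x.append(phi * x[-1] + rng.gauss(0, 1))
print(f"N={len(x)}, ESS ~ {ess(x):.0f}")
```

The difficulty the paper addresses is that tree topologies have no such scalar sequence, so the lag-k correlation itself must be redefined (e.g., via topological distances between sampled trees).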
2014-01-01
Background Despite the widespread use of patient-reported outcomes (PRO) in clinical studies, their design remains a challenge. Justification of study size is rarely provided, especially when a Rasch model is planned for analysing the data in a 2-group comparison study. The classical sample size formula (CLASSIC) for comparing normally distributed endpoints between two groups has been shown to be inadequate in this setting (it underestimates study sizes). A correction factor (RATIO) has been proposed to reach an adequate sample size from the CLASSIC when a Rasch model is intended to be used for analysis. The objective was to explore the impact of the parameters used for study design on the RATIO and to identify the most relevant, to provide a simple method for sample size determination for Rasch modelling. Methods A large combination of parameters used for study design was simulated using a Monte Carlo method: variance of the latent trait, group effect, sample size per group, number of items and item difficulty parameters. A linear regression model explaining the RATIO and including all the former parameters as covariates was fitted. Results The most relevant parameters explaining the RATIO's variation were the number of items and the variance of the latent trait (R2 = 99.4%). Conclusions Using the classical sample size formula adjusted with the proposed RATIO provides a straightforward and reliable method of sample size computation for 2-group comparison of PRO data using Rasch models. PMID:24996957
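The CLASSIC formula referenced above is the standard per-group sample size for comparing two normal means: n = 2σ²(z_{1-α/2} + z_{power})²/Δ². A minimal sketch follows; the effect size, power, and the RATIO value of 1.3 are placeholders for illustration, not values from the paper.

```python
import math
from statistics import NormalDist

def classic_n_per_group(delta, sigma, alpha=0.05, power=0.90):
    """CLASSIC per-group sample size for comparing two normal means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

n_classic = classic_n_per_group(delta=0.5, sigma=1.0)
# The article's correction multiplies this by a RATIO > 1 driven mainly
# by the number of items and latent-trait variance; 1.3 is a placeholder.
print(n_classic, math.ceil(n_classic * 1.3))
```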
Computing Power and Sample Size for Informational Odds Ratio †
Efird, Jimmy T.
2013-01-01
The informational odds ratio (IOR) measures the post-exposure odds divided by the pre-exposure odds (i.e., information gained after knowing exposure status). A desirable property of an adjusted ratio estimate is collapsibility, wherein the combined crude ratio will not change after adjusting for a variable that is not a confounder. Adjusted traditional odds ratios (TORs) are not collapsible. In contrast, Mantel-Haenszel adjusted IORs, analogous to relative risks (RRs) generally are collapsible. IORs are a useful measure of disease association in case-referent studies, especially when the disease is common in the exposed and/or unexposed groups. This paper outlines how to compute power and sample size in the simple case of unadjusted IORs. PMID:24157518
Quantum state discrimination bounds for finite sample size
Audenaert, Koenraad M. R.; Mosonyi, Milan; Verstraete, Frank
2012-12-15
In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
MEPAG Recommendations for a 2018 Mars Sample Return Caching Lander - Sample Types, Number, and Sizes
NASA Technical Reports Server (NTRS)
Allen, Carlton C.
2011-01-01
The return to Earth of geological and atmospheric samples from the surface of Mars is among the highest priority objectives of planetary science. The MEPAG Mars Sample Return (MSR) End-to-End International Science Analysis Group (MEPAG E2E-iSAG) was chartered to propose scientific objectives and priorities for returned sample science, and to map out the implications of these priorities, including for the proposed joint ESA-NASA 2018 mission that would be tasked with the crucial job of collecting and caching the samples. The E2E-iSAG identified four overarching scientific aims that relate to understanding: (A) the potential for life and its pre-biotic context, (B) the geologic processes that have affected the martian surface, (C) planetary evolution of Mars and its atmosphere, (D) potential for future human exploration. The types of samples deemed most likely to achieve the science objectives are, in priority order: (1A). Subaqueous or hydrothermal sediments (1B). Hydrothermally altered rocks or low temperature fluid-altered rocks (equal priority) (2). Unaltered igneous rocks (3). Regolith, including airfall dust (4). Present-day atmosphere and samples of sedimentary-igneous rocks containing ancient trapped atmosphere Collection of geologically well-characterized sample suites would add considerable value to interpretations of all collected rocks. To achieve this, the total number of rock samples should be about 30-40. In order to evaluate the size of individual samples required to meet the science objectives, the E2E-iSAG reviewed the analytical methods that would likely be applied to the returned samples by preliminary examination teams, for planetary protection (i.e., life detection, biohazard assessment) and, after distribution, by individual investigators. It was concluded that sample size should be sufficient to perform all high-priority analyses in triplicate. In keeping with long-established curatorial practice of extraterrestrial material, at least 40% by
7 CFR 51.2838 - Samples for grade and size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... or Jumbo size or larger the package shall be the sample. When individual packages contain less than... (Creole Types) Samples for Grade and Size Determination § 51.2838 Samples for grade and size...
Statistical identifiability and sample size calculations for serial seroepidemiology
Vinh, Dao Nguyen; Boni, Maciej F.
2015-01-01
Inference on disease dynamics is typically performed using case reporting time series of symptomatic disease. The inferred dynamics will vary depending on the reporting patterns and surveillance system for the disease in question, and the inference will miss mild or underreported epidemics. To eliminate the variation introduced by differing reporting patterns and to capture asymptomatic or subclinical infection, inferential methods can be applied to serological data sets instead of case reporting data. To reconstruct complete disease dynamics, one would need to collect a serological time series. In the statistical analysis presented here, we consider a particular kind of serological time series with repeated, periodic collections of population-representative serum. We refer to this study design as a serial seroepidemiology (SSE) design, and we base the analysis on our epidemiological knowledge of influenza. We consider a study duration of three to four years, during which a single antigenic type of influenza would be circulating, and we evaluate our ability to reconstruct disease dynamics based on serological data alone. We show that the processes of reinfection, antibody generation, and antibody waning confound each other and are not always statistically identifiable, especially when dynamics resemble a non-oscillating endemic equilibrium behavior. We introduce some constraints to partially resolve this confounding, and we show that transmission rates and basic reproduction numbers can be accurately estimated in SSE study designs. Seasonal forcing is more difficult to identify as serology-based studies only detect oscillations in antibody titers of recovered individuals, and these oscillations are typically weaker than those observed for infected individuals. To accurately estimate the magnitude and timing of seasonal forcing, serum samples should be collected every two months and 200 or more samples should be included in each collection; this sample size estimate
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans....
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans....
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., AND STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Sample for Grade Or Size Determination § 51.1406 Sample for grade or size determination. Each sample shall consist of 100 pecans....
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of the mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and the interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat nor the interaction between region and habitat was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas ±25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences for Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits
Axelrod, M
2005-08-18
Discovery sampling is a tool used in discovery auditing. The purpose of such an audit is to provide evidence that some (usually large) inventory of items complies with a defined set of criteria by inspecting (or measuring) a representative sample drawn from the inventory. If any of the items in the sample fail compliance (defective items), then the audit has discovered an impropriety, which often triggers some action. However, finding defective items in a sample is an unusual event: auditors expect the inventory to be in compliance because they come to the audit with an "innocent until proven guilty" attitude. As part of their work product, the auditors must provide a confidence statement about the compliance level of the inventory. Clearly, the more items they inspect, the greater their confidence, but more inspection means more cost. Audit costs can be purely economic, but in some cases the cost is political, because more inspection means more intrusion, which communicates an attitude of distrust. Thus, auditors have every incentive to minimize the number of items in the sample. Indeed, in some cases the sample size can be specifically limited by a prior agreement or an ongoing policy. Statements of confidence about the results of a discovery sample generally use the method of confidence intervals. After finding no defectives in the sample, the auditors provide a range of values that bracket the number of defective items that could credibly be in the inventory. They also state a level of confidence for the interval, usually 90% or 95%. For example, the auditors might say: "We believe that this inventory of 1,000 items contains no more than 10 defectives, with a confidence of 95%." Frequently, clients ask their auditors questions such as: How many items do you need to measure to be 95% confident that there are no more than 10 defectives in the entire inventory? Sometimes when the auditors answer with big numbers like "300", their clients balk. They balk because a
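The zero-defectives confidence statement described above follows from the hypergeometric distribution: if the inventory actually contained more than the tolerable number of defectives, how likely would a clean sample be? A minimal stdlib sketch; the function name and the convention of testing against `max_defectives + 1` are illustrative assumptions, not the auditors' actual procedure:

```python
from math import comb

def discovery_sample_size(N, max_defectives, confidence=0.95):
    """Smallest sample size n such that, if an inventory of N items
    contained MORE than `max_defectives` defectives, a sample of n
    with zero observed defectives would occur with probability at
    most 1 - confidence (hypergeometric, sampling without replacement)."""
    D = max_defectives + 1  # just beyond the tolerable number
    for n in range(1, N + 1):
        # P(0 defectives in sample of n | D defectives in inventory)
        p_zero = comb(N - D, n) / comb(N, n)
        if p_zero <= 1 - confidence:
            return n
    return N
```

For an inventory of 1,000 items and a tolerance of 10 defectives at 95% confidence, this yields a sample size in the 200s, consistent with the "big numbers like 300" that make clients balk.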
7 CFR 51.3200 - Samples for grade and size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.3200... Grade and Size Determination § 51.3200 Samples for grade and size determination. Individual samples.... When individual packages contain 20 pounds or more and the onions are packed for Large or Jumbo size...
Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin
2014-01-01
This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
7 CFR 51.1548 - Samples for grade and size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Samples for grade and size determination. 51.1548..., AND STANDARDS) United States Standards for Grades of Potatoes 1 Samples for Grade and Size Determination § 51.1548 Samples for grade and size determination. Individual samples shall consist of at...
7 CFR 51.629 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.629 Section..., California, and Arizona) Sample for Grade Or Size Determination § 51.629 Sample for grade or size determination. Each sample shall consist of 33 grapefruit. When individual packages contain at least...
7 CFR 51.690 - Sample for grade or size determination.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample for grade or size determination. 51.690 Section..., California, and Arizona) Sample for Grade Or Size Determination § 51.690 Sample for grade or size determination. Each sample shall consist of 50 oranges. When individual packages contain at least 50...
ERIC Educational Resources Information Center
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
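For the two-sample t test discussed above, the standard normal-approximation formula is n per group = 2((z_{1-α/2} + z_{1-β})/d)², where d is the standardized mean difference. A stdlib-only sketch (the approximation slightly underestimates the exact t-based answer):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of
    means, normal approximation: n = 2 * ((z_{1-a/2} + z_power) / d)^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)
```

For a medium standardized difference d = 0.5 at 80% power this gives 63 per group (the exact t-based calculation gives 64), illustrating how sensitive the result is to the specified inputs.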
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a
Alternative sample sizes for verification dose experiments and dose audits
NASA Astrophysics Data System (ADS)
Taylor, W. A.; Hansen, J. M.
1999-01-01
ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection. These sampling plans can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance for selecting between the original and alternative sampling plans not included in the technical report.
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
ERIC Educational Resources Information Center
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
A contemporary decennial global sample of changing agricultural field sizes
NASA Astrophysics Data System (ADS)
White, E.; Roy, D. P.
2011-12-01
In the last several hundred years agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC), with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries; the temporal changes in field size were quantified and their causes discussed.
Julious, Steven A; Cooper, Cindy L; Campbell, Michael J
2015-01-01
Sample size justification is an important consideration when planning a clinical trial, not only for the main trial but also for any preliminary pilot trial. When the outcome is a continuous variable, the sample size calculation requires an accurate estimate of the standard deviation of the outcome measure. A pilot trial can be used to get an estimate of the standard deviation, which could then be used to anticipate what may be observed in the main trial. However, an important consideration is that pilot trials often estimate the standard deviation parameter imprecisely. This paper looks at how we can choose an external pilot trial sample size in order to minimise the sample size of the overall clinical trial programme, that is, the pilot and the main trial together. We produce a method of calculating the optimal solution to the required pilot trial sample size when the standardised effect size for the main trial is known. However, as it may not be possible to know the standardised effect size to be used prior to the pilot trial, approximate rules are also presented. For a main trial designed with 90% power and two-sided 5% significance, we recommend pilot trial sample sizes per treatment arm of 75, 25, 15 and 10 for standardised effect sizes that are extra small (≤0.1), small (0.2), medium (0.5) or large (0.8), respectively. PMID:26092476
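The closing recommendation above can be encoded as a simple lookup. A sketch: the values per arm (75, 25, 15, 10) are taken directly from the abstract, but the exact cutoffs between its effect-size categories are an interpolating assumption on my part:

```python
def pilot_n_per_arm(standardized_effect):
    """Pilot-trial sample size per treatment arm, following the
    recommendation of Julious, Cooper and Campbell for a main trial
    with 90% power and two-sided 5% significance. Category boundaries
    between the published anchor values are assumed."""
    if standardized_effect <= 0.1:   # extra small
        return 75
    if standardized_effect <= 0.2:   # small
        return 25
    if standardized_effect <= 0.5:   # medium
        return 15
    return 10                        # large (~0.8)
```

The pattern is intuitive: the smaller the anticipated effect, the larger the main trial, and the more precisely the pilot must pin down the standard deviation.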
Grain size measurements using the point-sampled intercept technique
Srinivasan, S. ); Russ, J.C.; Scattergood, R.O. . Dept. of Materials Science and Engineering)
1991-01-01
Recent developments in the field of stereology and measurement of three-dimensional size scales from two-dimensional sections have emanated from the medical field, particularly in the area of pathology. Here, the measurement of biological cell sizes and their distribution are critical for diagnosis and treatment of such deadly diseases as cancer. The purpose of this paper is to introduce these new methods to the materials science community and to illustrate their application using a series of typical microstructures found in polycrystalline ceramics. As far as the current authors are aware, these methods have not been applied in materials-science related applications.
ERIC Educational Resources Information Center
Bill, Anthony; Henderson, Sally; Penman, John
2010-01-01
Two test items that examined high school students' beliefs of sample size for large populations using the context of opinion polls conducted prior to national and state elections were developed. A trial of the two items with 21 male and 33 female Year 9 students examined their naive understanding of sample size: over half of students chose a…
Tavernier, Elsa; Trinquart, Ludovic; Giraudeau, Bruno
2016-01-01
Sample sizes for randomized controlled trials are typically based on power calculations. They require us to specify values for parameters such as the treatment effect, which is often difficult because we lack sufficient prior information. The objective of this paper is to provide an alternative design which circumvents the need for sample size calculation. In a simulation study, we compared a meta-experiment approach to the classical approach to assess treatment efficacy. The meta-experiment approach involves use of meta-analyzed results from 3 randomized trials of fixed sample size (100 subjects each). The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori-formulated hypothesis. For the sample size calculation in the classical approach, we used published articles to characterize the errors made in the formulated hypotheses. A prospective meta-analysis of data from trials of fixed sample size provided the same precision, power and type I error rate, on average, as the classical approach. The meta-experiment approach may provide an alternative design which does not require a sample size calculation and addresses the essential need for study replication; results may have greater external validity. PMID:27362939
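A rough simulation of this comparison can be sketched as follows. This is not the authors' code: the effect size, the inverse-variance fixed-effect pooling, and the z test are illustrative assumptions, and the "classical" arm simply uses the same total sample size rather than one derived from an error-prone a priori hypothesis:

```python
import random
from statistics import NormalDist, fmean, variance

def one_trial(n_per_arm, effect, rng):
    """Mean difference and its estimated variance for one two-arm
    trial with unit-variance normal outcomes."""
    t = [rng.gauss(effect, 1.0) for _ in range(n_per_arm)]
    c = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
    d = fmean(t) - fmean(c)
    v = (variance(t) + variance(c)) / n_per_arm
    return d, v

def significant(d, v, alpha=0.05):
    return abs(d) / v ** 0.5 > NormalDist().inv_cdf(1 - alpha / 2)

def compare(effect=0.3, reps=2000, seed=1):
    """Empirical power: meta-experiment (3 trials of 50 per arm, pooled
    by inverse-variance fixed-effect meta-analysis) vs. one classical
    trial with the same total sample size (150 per arm)."""
    rng = random.Random(seed)
    meta = single = 0
    for _ in range(reps):
        trials = [one_trial(50, effect, rng) for _ in range(3)]
        w = [1.0 / v for _, v in trials]
        pooled = sum(wi * d for (d, _), wi in zip(trials, w)) / sum(w)
        meta += significant(pooled, 1.0 / sum(w))
        single += significant(*one_trial(150, effect, rng))
    return meta / reps, single / reps
```

With the same total of 300 subjects per design, the two approaches show similar empirical power, which is the paper's central claim about precision being driven by the pooled information rather than by a single large trial.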
Basic concepts for sample size calculation: Critical step for any clinical trials!
Gupta, KK; Attri, JP; Singh, A; Kaur, H; Kaur, G
2016-01-01
Quality of clinical trials has improved steadily over the last two decades, but certain areas of trial methodology still require special attention, such as sample size calculation. The sample size calculation is one of the basic steps in planning any clinical trial, and any negligence in it may lead to rejection of true findings while false results gain approval. Although statisticians play a major role in sample size estimation, basic knowledge of sample size calculation is very sparse among most anesthesiologists involved in research, including trainee doctors. In this review, we discuss how important sample size calculation is for research studies and the effects of underestimation or overestimation of sample size on a project's results. We highlight the basic concepts regarding the various parameters needed to calculate the sample size, along with examples. PMID:27375390
Sample size and scene identification (cloud) - Effect on albedo
NASA Technical Reports Server (NTRS)
Vemury, S. K.; Stowe, L.; Jacobowitz, H.
1984-01-01
Scan channels on the Nimbus 7 Earth Radiation Budget instrument sample radiances from underlying earth scenes at a number of incident and scattering angles. A sampling excess toward measurements at large satellite zenith angles is noted. Also, at large satellite zenith angles, the present scheme for scene selection causes many observations to be classified as cloud, resulting in higher flux averages. Thus the combined effect of sampling bias and scene identification errors is to overestimate the computed albedo. It is shown, using a process of successive thresholding, that observations with satellite zenith angles greater than 50-60 deg lead to incorrect cloud identification. Eliminating these observations reduced the albedo from 32.2 to 28.8 percent. This reduction closely matches, in both magnitude and direction, the discrepancy between the albedos derived from the scanner and the wide-field-of-view channels.
7 CFR 201.43 - Size of sample.
Code of Federal Regulations, 2012 CFR
2012-01-01
... units. Coated seed for germination test only shall consist of at least 1,000 seed units. ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test,...
7 CFR 201.43 - Size of sample.
Code of Federal Regulations, 2014 CFR
2014-01-01
... units. Coated seed for germination test only shall consist of at least 1,000 seed units. ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test,...
7 CFR 201.43 - Size of sample.
Code of Federal Regulations, 2013 CFR
2013-01-01
... units. Coated seed for germination test only shall consist of at least 1,000 seed units. ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test,...
7 CFR 201.43 - Size of sample.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test, or examination: (a) Two ounces (57 grams) of grass seed not otherwise mentioned, white or alsike clover, or...
7 CFR 201.43 - Size of sample.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) FEDERAL SEED ACT FEDERAL SEED ACT... of samples of agricultural seed, vegetable seed and screenings to be submitted for analysis, test, or examination: (a) Two ounces (57 grams) of grass seed not otherwise mentioned, white or alsike clover, or...
Utility of Inferential Norming with Smaller Sample Sizes
ERIC Educational Resources Information Center
Zhu, Jianjun; Chen, Hsin-Yi
2011-01-01
We examined the utility of inferential norming using small samples drawn from the larger "Wechsler Intelligence Scales for Children-Fourth Edition" (WISC-IV) standardization data set. The quality of the norms was estimated with multiple indexes such as polynomial curve fit, percentage of cases receiving the same score, average absolute score…
Geoscience Education Research Methods: Thinking About Sample Size
NASA Astrophysics Data System (ADS)
Slater, S. J.; Slater, T. F.; CenterAstronomy; Physics Education Research
2011-12-01
Geoscience education research is at a critical point in which conditions are sufficient to propel our field forward toward meaningful improvements in geoscience education practices. Our field has now reached a point where the outcomes of our research are deemed important to end users and funding agencies, and where we now have a large number of scientists who are either formally trained in geoscience education research or have dedicated themselves to excellence in this domain. At this point we must collectively work through our epistemology, our rules for what methodologies will be considered sufficiently rigorous, and what data and analysis techniques will be acceptable for constructing evidence. In particular, we have to work out our answer to that most difficult of research questions: "How big should my 'N' be?" This paper presents a very brief answer to that question, addressing both quantitative and qualitative methodologies. Research question/methodology alignment, effect size, and statistical power are discussed, in addition to a defense of the notion that bigger is not always better.
Sample Size in Differential Item Functioning: An Application of Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Acar, Tulin
2011-01-01
The purpose of this study is to examine the number of DIF items detected by HGLM at different sample sizes. Eight different sized data files have been composed. The population of the study is 798307 students who had taken the 2006 OKS Examination. 10727 students of 798307 are chosen by random sampling method as the sample of the study. Turkish,…
Tian, Guo-Liang; Tang, Man-Lai; Zhenqiu Liu; Ming Tan; Tang, Nian-Sheng
2011-06-01
Sample size determination is an essential component in public health survey designs on sensitive topics (e.g. drug abuse, homosexuality, induced abortions and pre- or extramarital sex). Recently, non-randomised models have been shown to be an efficient and cost-effective design compared with randomised response models. However, sample size formulae for such non-randomised designs are not yet available. In this article, we derive sample size formulae for the non-randomised triangular design based on the power analysis approach. We first consider the one-sample problem. Power functions and their corresponding sample size formulae for the one- and two-sided tests based on the large-sample normal approximation are derived. The performance of the sample size formulae is evaluated in terms of (i) the accuracy of the power values based on the estimated sample sizes and (ii) the sample size ratio of the non-randomised triangular design and the design of direct questioning (DDQ). We also numerically compare the sample sizes required for the randomised Warner design with those required for the DDQ and the non-randomised triangular design. Theoretical justification is provided. Furthermore, we extend the one-sample problem to the two-sample problem. An example based on an induced abortion study in Taiwan is presented to illustrate the proposed methods. PMID:19221169
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Structured estimation - Sample size reduction for adaptive pattern classification
NASA Technical Reports Server (NTRS)
Morgera, S.; Cooper, D. B.
1977-01-01
The Gaussian two-category classification problem with known category mean value vectors and identical but unknown category covariance matrices is considered. The weight vector depends on the unknown common covariance matrix, so the procedure is to estimate the covariance matrix in order to obtain an estimate of the optimum weight vector. The measure of performance for the adapted classifier is the output signal-to-interference noise ratio (SIR). A simple approximation for the expected SIR is gained by using the general sample covariance matrix estimator; this performance is both signal and true covariance matrix independent. An approximation is also found for the expected SIR obtained by using a Toeplitz form covariance matrix estimator; this performance is found to be dependent on both the signal and the true covariance matrix.
Sample size estimation for the van Elteren test--a stratified Wilcoxon-Mann-Whitney test.
Zhao, Yan D
2006-08-15
The van Elteren test is a type of stratified Wilcoxon-Mann-Whitney test for comparing two treatments accounting for strata. In this paper, we study sample size estimation methods for the asymptotic version of the van Elteren test, assuming that the stratum fractions (ratios of each stratum size to the total sample size) and the treatment fractions (ratios of each treatment size to the stratum size) are known in the study design. In particular, we develop three large-sample sample size estimation methods and present a real data example to illustrate the necessary information in the study design phase in order to apply the methods. Simulation studies are conducted to compare the performance of the methods and recommendations are made for method choice. Finally, sample size estimation for the van Elteren test when the stratum fractions are unknown is also discussed. PMID:16372389
Model Choice and Sample Size in Item Response Theory Analysis of Aphasia Tests
ERIC Educational Resources Information Center
Hula, William D.; Fergadiotis, Gerasimos; Martin, Nadine
2012-01-01
Purpose: The purpose of this study was to identify the most appropriate item response theory (IRT) measurement model for aphasia tests requiring 2-choice responses and to determine whether small samples are adequate for estimating such models. Method: Pyramids and Palm Trees (Howard & Patterson, 1992) test data that had been collected from…
Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide.
de Bekker-Grob, Esther W; Donkers, Bas; Jonker, Marcel F; Stolk, Elly A
2015-10-01
Discrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions. An important question when setting up a DCE is the size of the sample needed to answer the research question of interest. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of the statistical power of hypothesis tests on the estimated coefficients. The purpose of this paper is threefold: (1) to provide insight into whether and how researchers have dealt with sample size calculations for healthcare-related DCE studies; (2) to introduce and explain the required sample size for parameter estimates in DCEs; and (3) to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in health care. PMID:25726010
Evaluation of design flood estimates with respect to sample size
NASA Astrophysics Data System (ADS)
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If fewer than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
ERIC Educational Resources Information Center
Eisenberg, Sarita L.; Guo, Ling-Yu
2015-01-01
Purpose: The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method: Language samples were elicited by asking forty…
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26865383
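The one-sample variance estimator at the heart of this design can be sketched in a few lines: pool the internal pilot data without unblinding the treatment labels, estimate the variance from the pooled sample, and plug it into the standard sample size formula. The effect size, pilot size, and significance level below are assumptions for illustration, not the paper's settings.

```python
# Sketch of blinded sample size re-estimation with the one-sample (lumped)
# variance estimator; all parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
delta_plan = 0.5        # planned treatment effect
alpha, power = 0.05, 0.80
n_pilot = 40            # internal pilot size per arm

# Internal pilot data from both arms, analyzed blinded (labels unknown)
a = rng.normal(0.0, 1.2, n_pilot)
b = rng.normal(delta_plan, 1.2, n_pilot)
pooled = np.concatenate([a, b])

# Lumped variance ignores group membership, so on average it
# overestimates sigma^2 by delta^2 / 4
s2_blinded = np.var(pooled, ddof=1)

# Normal-approximation sample size per arm for a two-sample test
z = stats.norm.ppf
n_per_arm = int(np.ceil(2 * s2_blinded * (z(1 - alpha / 2) + z(power))**2
                        / delta_plan**2))
print(n_per_arm)
```

Because the lumped variance is inflated by the (unknown) treatment difference, this re-estimation is conservative, which is one reason the paper studies the exact distribution of the final t-statistic.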
Dziak, John J.; Lanza, Stephanie T.; Tan, Xianming
2014-01-01
Selecting the number of different classes which will be assumed to exist in the population is an important step in latent class analysis (LCA). The bootstrap likelihood ratio test (BLRT) provides a data-driven way to evaluate the relative adequacy of a (K −1)-class model compared to a K-class model. However, very little is known about how to predict the power or the required sample size for the BLRT in LCA. Based on extensive Monte Carlo simulations, we provide practical effect size measures and power curves which can be used to predict power for the BLRT in LCA given a proposed sample size and a set of hypothesized population parameters. Estimated power curves and tables provide guidance for researchers wishing to size a study to have sufficient power to detect hypothesized underlying latent classes. PMID:25328371
Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F
2014-07-10
In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention. PMID:25019136
Brändle, E; Melzer, H; Gomez-Anson, B; Flohr, P; Kleinschmidt, K; Sieberth, H G; Hautmann, R E
1996-03-01
The gold standard for metabolic evaluation of stone-forming patients is the 24-h urine specimen. Recently, some authors have suggested that for routine metabolic evaluation spot urine samples are as valuable as the 24-h urine specimen. The purpose of our study was to determine the value of the spot urine sample in comparison with the 24-h urine specimen. Eighty-eight healthy volunteers on different diets were investigated (32 vegetarians, 12 body-builders without protein concentrates, 28 body-builders on protein concentrates, and 16 subjects on a regular European diet). Using 24-h specimens, excretion rates of oxalate, calcium, sodium and potassium were determined. The concentration ratio of these electrolytes to creatinine was calculated for spot urine samples. A highly significant correlation between the excretion rates and the results of the spot urine samples was found for all parameters. However, the correlations showed considerable variations. On the other hand, we were able to show that creatinine excretion is highly dependent on daily protein intake, body weight and glomerular filtration rate. This leads to a considerable inter- and intraindividual variation in creatinine excretion. This variation of the creatinine excretion is the major cause for the variation in the results of spot urine samples. It is concluded that spot urine samples are an inadequate substitute for the 24-h urine specimen and that the 24-h urine specimen is still the basis for metabolic evaluation in stone patients. PMID:8650847
ERIC Educational Resources Information Center
Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.
2013-01-01
Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb.…
ERIC Educational Resources Information Center
Algina, James; Olejnik, Stephen
2003-01-01
Tables for selecting sample size in correlation studies are presented. Some of the tables allow selection of sample size so that r (or r[squared], depending on the statistic the researcher plans to interpret) will be within a target interval around the population parameter with probability 0.95. The intervals are [plus or minus] 0.05, [plus or…
EFFECTS OF SAMPLING NOZZLES ON THE PARTICLE COLLECTION CHARACTERISTICS OF INERTIAL SIZING DEVICES
In several particle-sizing samplers, the sample extraction nozzle is necessarily closely coupled to the first inertial sizing stage. Devices of this type include small sampling cyclones, right angle impactor precollectors for in-stack impactors, and the first impaction stage of s...
Using the Student's "t"-Test with Extremely Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.
2013-01-01
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model
ERIC Educational Resources Information Center
Custer, Michael
2015-01-01
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
A Comparative Study of Power and Sample Size Calculations for Multivariate General Linear Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2003-01-01
Repeated measures and longitudinal studies arise often in social and behavioral science research. During the planning stage of such studies, the calculations of sample size are of particular interest to the investigators and should be an integral part of the research projects. In this article, we consider the power and sample size calculations for…
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
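The contrast between the two weighting schemes can be made concrete with a small numerical sketch. The effect sizes, per-study sample sizes, and the approximate variance formula for a standardized mean difference below are illustrative assumptions, not values from the paper.

```python
# Sketch: average a set of standardized mean differences using
# inverse-variance weights versus sample-size weights (toy numbers).
import numpy as np

d = np.array([0.30, 0.10, 0.45, 0.25])   # effect sizes from 4 studies
n = np.array([20, 200, 35, 80])          # per-study total sample sizes
var_d = 4 / n + d**2 / (2 * n)           # approx. variance of d (equal groups)

w_iv = 1 / var_d                         # inverse-variance weights (estimated)
w_n = n.astype(float)                    # sample-size weights (known exactly)

mean_iv = np.sum(w_iv * d) / np.sum(w_iv)
mean_n = np.sum(w_n * d) / np.sum(w_n)
print(round(mean_iv, 3), round(mean_n, 3))
```

The key difference the abstract points to is that the inverse-variance weights must themselves be estimated from the data (via `var_d`), and so carry sampling error, whereas sample sizes are known without error.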
Thermomagnetic behavior of magnetic susceptibility – heating rate and sample size effects
NASA Astrophysics Data System (ADS)
Jordanova, Diana; Jordanova, Neli
2015-12-01
Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using the fast (11 °C/min) and slow (6.5 °C/min) heating rates available in the furnace of the Kappabridge KLY2 (Agico). Based on additional data for the mineralogy, grain size and magnetic properties of the studied samples, the behaviour of the k(T) cycles and the observed differences between the curves for fast and slow heating rates are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of different sample sizes is also explored, using large and small volumes of powder material. It is found that soil samples show enhanced information on mineralogical transformations and the appearance of new strongly magnetic phases when using the fast heating rate and a large sample size. This approach moves the transformation to higher temperature, but enhances the amplitude of the signal of the newly created phase. A large sample size gives prevalence to the local micro-environment created by evolving gases released during the transformations. The example from an archeological brick reveals the effect of different sample sizes on the observed Curie temperatures on heating and cooling curves, when the magnetic carrier is substituted magnetite (Mn0.2Fe2.70O4). A large sample size leads to bigger differences in Tc between heating and cooling, while a small sample size results in similar Tc values for both heating rates.
Bouman, A. C.; ten Cate-Hoek, A. J.; Ramaekers, B. L. T.; Joore, M. A.
2015-01-01
Background Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages however are not included in the classic frequentist approach of sample size calculation for non-inferiority trials. In contrast, the decision theory approach of sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and decision theory approach of sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials. Methods The frequentist approach and decision theory approach of sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared to two years of elastic compression stocking therapy for the prevention of post-thrombotic syndrome after deep vein thrombosis. Results The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the frequentist approach yielded 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. Conclusions This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach of sample size calculation for non-inferiority trials. We argue that the decision theory approach of sample size estimation is most suitable for sample size calculation of non-inferiority trials. PMID:26076354
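The classic frequentist calculation the paper starts from can be sketched with the standard normal-approximation formula for a non-inferiority comparison of means. The margin, standard deviation, and true difference below are placeholder assumptions, not the inputs that produced the trial's figure of 788 patients.

```python
# Sketch of the frequentist non-inferiority sample size (normal approximation);
# margin, sd, and true difference are assumed values for illustration.
from math import ceil
from scipy.stats import norm

alpha = 0.05          # one-sided significance level
power = 0.80
margin = 0.10         # non-inferiority margin on the outcome scale
sd = 0.45             # assumed common standard deviation
true_diff = 0.0       # assume the therapies are truly equivalent

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
n_per_arm = ceil(2 * sd**2 * (z_a + z_b)**2 / (margin - true_diff)**2)
print(n_per_arm)      # → 251 per arm under these assumptions
```

Note that this formula contains no term for costs or health consequences, which is exactly the gap the decision theory approach is meant to fill.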
Fujita, Masahiro; Yajima, Tomonari; Iijima, Kazuaki; Sato, Kiyoshi
2012-05-01
The uncertainty in pesticide residue levels (UPRL) associated with sampling size was estimated using individual acetamiprid and cypermethrin residue data from preharvested apple, broccoli, cabbage, grape, and sweet pepper samples. The relative standard deviation from the mean of each sampling size (n = 2^x, where x = 1-6) of randomly selected samples was defined as the UPRL for each sampling size. The estimated UPRLs, which were calculated on the basis of the regulatory sampling size recommended by the OECD Guidelines on Crop Field Trials (weights from 1 to 5 kg, and commodity unit numbers from 12 to 24), ranged from 2.1% for cypermethrin in sweet peppers to 14.6% for cypermethrin in cabbage samples. The percentages of commodity exceeding the maximum residue limits (MRLs) specified by the Japanese Food Sanitation Law may be predicted from the equation derived from this study, which was based on samples of various size ranges with mean residue levels below the MRL. The estimated UPRLs have confirmed that sufficient sampling weight and numbers are required for analysis and/or re-examination of subsamples to provide accurate values of pesticide residue levels for the enforcement of MRLs. The equation derived from the present study would aid the estimation of more accurate residue levels even from small sampling sizes. PMID:22475588
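The UPRL definition (relative standard deviation of the subsample mean at sampling sizes n = 2^x) can be reproduced directly by repeated subsampling. The residue distribution below is simulated, not the paper's apple or cabbage measurements.

```python
# Sketch of the UPRL idea: RSD of the subsample mean shrinks as the
# sampling size n = 2**x grows (simulated residue data, mg/kg).
import numpy as np

rng = np.random.default_rng(1)
residues = rng.lognormal(mean=-2.0, sigma=0.5, size=128)  # hypothetical units

uprl = {}
for x in range(1, 7):                    # sampling sizes n = 2, 4, ..., 64
    n = 2**x
    means = [rng.choice(residues, n, replace=False).mean() for _ in range(500)]
    uprl[n] = 100 * np.std(means) / np.mean(means)  # RSD of the mean, in %

print({n: round(v, 1) for n, v in uprl.items()})
```

Running this shows the monotone decline in uncertainty with sampling size that motivates the paper's recommendation of sufficient sampling weights and numbers.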
Sample Size under Inverse Negative Binomial Group Testing for Accuracy in Parameter Estimation
Montesinos-López, Osval Antonio; Montesinos-López, Abelardo; Crossa, José; Eskridge, Kent
2012-01-01
Background The group testing method has been proposed for the detection and estimation of genetically modified plants (adventitious presence of unwanted transgenic plants, AP). For binary response variables (presence or absence), group testing is efficient when the prevalence is low, so that estimation, detection, and sample size methods have been developed under the binomial model. However, when the event is rare (low prevalence <0.1), and testing occurs sequentially, inverse (negative) binomial pooled sampling may be preferred. Methodology/Principal Findings This research proposes three sample size procedures (two computational and one analytic) for estimating prevalence using group testing under inverse (negative) binomial sampling. These methods provide the required number of positive pools (), given a pool size (k), for estimating the proportion of AP plants using the Dorfman model and inverse (negative) binomial sampling. We give real and simulated examples to show how to apply these methods and the proposed sample-size formula. The Monte Carlo method was used to study the coverage and level of assurance achieved by the proposed sample sizes. An R program to create other scenarios is given in Appendix S2. Conclusions The three methods ensure precision in the estimated proportion of AP because they guarantee that the width (W) of the confidence interval (CI) will be equal to, or narrower than, the desired width (), with a probability of . With the Monte Carlo study we found that the computational Wald procedure (method 2) produces the more precise sample size (with coverage and assurance levels very close to nominal values) and that the samples size based on the Clopper-Pearson CI (method 1) is conservative (overestimates the sample size); the analytic Wald sample size method we developed (method 3) sometimes underestimated the optimum number of pools. PMID:22457714
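To make the group-testing setting concrete, the sketch below estimates a low prevalence from pooled tests with a delta-method Wald interval. It uses the standard maximum-likelihood estimator under binomial pooled sampling, p = 1 - (1 - d/m)^(1/k), which is a simpler fixed-sample-size design than the paper's inverse (negative) binomial scheme, where the number of positive pools is fixed instead; all counts are made up.

```python
# Sketch: prevalence estimate and Wald CI from pooled (group) tests under
# binomial sampling (not the paper's inverse-binomial stopping rule).
import numpy as np
from scipy.stats import norm

k, m, d = 10, 200, 14          # pool size, number of pools, positive pools
theta = d / m                  # observed proportion of positive pools
p_hat = 1 - (1 - theta)**(1 / k)

# Delta-method standard error for p_hat
se_theta = np.sqrt(theta * (1 - theta) / m)
dp_dtheta = (1 - theta)**(1 / k - 1) / k
se_p = dp_dtheta * se_theta

lo = p_hat - norm.ppf(0.975) * se_p
hi = p_hat + norm.ppf(0.975) * se_p
print(round(p_hat, 4), round(lo, 4), round(hi, 4))
```

The sample size methods in the paper work backwards from this kind of interval: choose the number of (positive) pools so the interval width does not exceed a target with high assurance.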
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
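The method-of-moments (Matheron) variogram estimator compared in this study can be sketched in a few lines on a small synthetic field. The locations, values, and distance bins below are simulated stand-ins for the throughfall data, chosen only to show the mechanics.

```python
# Minimal method-of-moments (Matheron) empirical variogram on a synthetic
# skewed, throughfall-like field (simulated data, not the study's fields).
import numpy as np

rng = np.random.default_rng(7)
n = 150
xy = rng.uniform(0, 50, size=(n, 2))            # sampling locations (m)
z = rng.gamma(shape=2.0, scale=1.0, size=n)     # skewed, non-Gaussian values

# All pairwise distances and squared increments
diff = xy[:, None, :] - xy[None, :, :]
h = np.sqrt((diff**2).sum(-1))
g = 0.5 * (z[:, None] - z[None, :])**2
iu = np.triu_indices(n, k=1)                    # unique pairs only
h, g = h[iu], g[iu]

# Average semivariance in distance bins (Matheron estimator)
bins = np.arange(0, 30, 5)
idx = np.digitize(h, bins)
semivar = [g[idx == b].mean() for b in range(1, len(bins))]
print([round(s, 3) for s in semivar])
```

Because each bin average is a mean of squared increments, a few heavy outliers can dominate it, which is why the study also considers robust estimators and residual maximum likelihood.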
Chaibub Neto, Elias
2015-01-01
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
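The multinomial-weighting trick described above translates directly into array code: draw multinomial counts once, then obtain every bootstrap replication of a sample moment with a single matrix product. The sketch below illustrates the idea for the sample mean (the paper's examples use Pearson's correlation, which works the same way with more moment terms).

```python
# Sketch of the multinomial formulation of the non-parametric bootstrap:
# weight the observed data by multinomial counts instead of resampling rows.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=200)          # observed sample
B = 5000                          # bootstrap replications

# Each row of W holds multinomial counts summing to n; W / n are weights
n = x.size
W = rng.multinomial(n, np.full(n, 1 / n), size=B)

boot_means = (W @ x) / n          # all B replications in one matrix product
se = boot_means.std(ddof=1)       # bootstrap standard error of the mean
print(round(se, 4))
```

This is exactly why the approach suits matrix-oriented languages: the loop over B replications becomes one `W @ x` product, at the cost of generating (and storing) the B-by-n weight matrix.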
The economic efficiency of sampling size: the case of beef trim revisited.
Powell, Mark R
2013-03-01
A recent paper by Ferrier and Buzby provides a framework for selecting the sample size when testing a lot of beef trim for Escherichia coli O157:H7 that equates the averted costs of recalls and health damages from contaminated meats sold to consumers with the increased costs of testing while allowing for uncertainty about the underlying prevalence of contamination. Ferrier and Buzby conclude that the optimal sample size is larger than the current sample size. However, Ferrier and Buzby's optimization model has a number of errors, and their simulations failed to consider available evidence about the likelihood of the scenarios explored under the model. After correctly modeling microbial prevalence as dependent on portion size and selecting model inputs based on available evidence, the model suggests that the optimal sample size is zero under most plausible scenarios. It does not follow, however, that sampling beef trim for E. coli O157:H7, or food safety sampling more generally, should be abandoned. Sampling is not generally cost effective as a direct consumer safety control measure due to the extremely large sample sizes required to provide a high degree of confidence of detecting very low acceptable defect levels. Food safety verification sampling creates economic incentives for food producing firms to develop, implement, and maintain effective control measures that limit the probability and degree of noncompliance with regulatory limits or private contract specifications. PMID:23496435
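The arithmetic behind the "extremely large sample sizes" point is simple and worth seeing: the number of samples needed to detect contamination at least once with a given confidence grows roughly as the inverse of the prevalence. The prevalence and confidence values below are illustrative assumptions, not figures from the beef trim model.

```python
# Sketch: samples needed so that P(detect at least one positive) >= confidence,
# assuming independent samples with the given per-sample prevalence.
from math import ceil, log

def n_required(prevalence, confidence=0.95):
    """Smallest n with 1 - (1 - prevalence)**n >= confidence."""
    return ceil(log(1 - confidence) / log(1 - prevalence))

for p in (0.05, 0.01, 0.001):
    print(p, n_required(p))   # 59, 299, and 2995 samples respectively
```

At a defect prevalence of 0.1% the requirement is already near 3000 samples per lot, which illustrates why direct consumer-safety sampling is rarely cost effective compared with verification sampling that shapes producer incentives.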
Effects of Sample Size on Estimates of Population Growth Rates Calculated with Matrix Models
Fiske, Ian J.; Bruna, Emilio M.; Bolker, Benjamin M.
2008-01-01
Background Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (λ) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of λ: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of λ due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of λ. Methodology/Principal Findings Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating λ for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of λ with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. Conclusions/Significance We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities. PMID:18769483
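The Jensen's-inequality mechanism can be demonstrated with a toy projection matrix: estimate vital rates from small samples, compute the dominant eigenvalue each time, and compare the mean estimate to the true λ. The 2-stage matrix and rate values below are hypothetical, not the study's plant data.

```python
# Sketch: sampling variance in vital rates biases the estimated population
# growth rate lambda (dominant eigenvalue), even with unbiased rate estimates.
import numpy as np

rng = np.random.default_rng(11)
s_juv, s_adult, fert = 0.5, 0.8, 1.2     # true survival and fertility (assumed)

def lam(sj, sa, f):
    A = np.array([[0.0, f], [sj, sa]])   # simple 2-stage projection matrix
    return np.max(np.abs(np.linalg.eigvals(A)))

true_lambda = lam(s_juv, s_adult, fert)

n = 20                                   # individuals sampled per vital rate
est = [lam(rng.binomial(n, s_juv) / n,   # noisy survival estimates
           rng.binomial(n, s_adult) / n,
           rng.gamma(n, fert / n))       # noisy fertility estimate
       for _ in range(2000)]

bias = np.mean(est) - true_lambda
print(round(true_lambda, 3), round(bias, 4))
```

Increasing `n` in this sketch shrinks the bias toward zero, mirroring the paper's finding that the problem fades with larger sample sizes.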
Guo, Ling-Yu
2015-01-01
Purpose The purpose of this study was to investigate whether a shorter language sample elicited with fewer pictures (i.e., 7) would yield a percent grammatical utterances (PGU) score similar to that computed from a longer language sample elicited with 15 pictures for 3-year-old children. Method Language samples were elicited by asking forty 3-year-old children with varying language skills to talk about pictures in response to prompts. PGU scores were computed for each of two 7-picture sets and for the full set of 15 pictures. Results PGU scores for the two 7-picture sets did not differ significantly from, and were highly correlated with, PGU scores for the full set and with each other. Agreement for making pass–fail decisions between each 7-picture set and the full set and between the two 7-picture sets ranged from 80% to 100%. Conclusion The current study suggests that the PGU measure is robust enough that it can be computed on the basis of 7 pictures, at least in 3-year-old children whose language samples were elicited using similar procedures. PMID:25615691
Tai, Bee-Choo; Grundy, Richard; Machin, David
2011-03-15
Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.
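The competing-events setup can be illustrated with a small simulation: under constant cause-specific hazards for events A, B, and C, the time to the first event is exponential and each event's cumulative incidence has a closed form to check against. The hazard values below are assumptions for illustration, not the trial's estimates.

```python
# Sketch: simulate three competing events with constant cause-specific hazards
# and compare the empirical cumulative incidence of event A to the closed form.
import numpy as np

rng = np.random.default_rng(5)
haz = {"A": 0.10, "B": 0.05, "C": 0.03}  # cause-specific hazards per unit time
total = sum(haz.values())

n = 50_000
t = rng.exponential(1 / total, n)        # time to the first event, any cause
cause = rng.choice(list(haz), n, p=[h / total for h in haz.values()])

# Empirical cumulative incidence of event A by time 5
cif_a_5 = np.mean((cause == "A") & (t <= 5))

# Closed form: (hazard_A / total) * (1 - exp(-total * 5))
expected = haz["A"] / total * (1 - np.exp(-total * 5))
print(round(cif_a_5, 3), round(expected, 3))
```

Note how each event's cumulative incidence depends on the hazards of all competing events through `total`, which is why cause-specific and subdistribution hazard analyses can point in different directions, as in the ependymoma example.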
The PowerAtlas: a power and sample size atlas for microarray experimental design and research
Page, Grier P; Edwards, Jode W; Gadbury, Gary L; Yelisetti, Prashanth; Wang, Jelai; Trivedi, Prinal; Allison, David B
2006-01-01
Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed a Microarray PowerAtlas [1]. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes. PMID:16504070
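The single-hypothesis power calculation that such tools generalize across thousands of genes can be sketched with a normal approximation. A stringent per-gene significance level stands in for multiplicity control; the effect size, variance, and alpha below are placeholder assumptions.

```python
# Sketch: approximate power of a two-sided two-sample comparison per gene,
# with a stringent alpha standing in for multiple-testing control.
from math import sqrt
from scipy.stats import norm

def power_two_sample(n_per_group, delta, sd, alpha=0.001):
    """Normal-approximation power of a two-sided two-sample z-test."""
    ncp = delta / (sd * sqrt(2 / n_per_group))   # non-centrality parameter
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(ncp - z) + norm.cdf(-ncp - z)

for n in (3, 5, 10):
    print(n, round(power_two_sample(n, delta=1.0, sd=0.5), 3))
```

In practice a t-based calculation (for example, exact noncentral-t power) is preferable at the tiny replicate numbers typical of microarray studies; the normal version above only shows the shape of the trade-off between replicates and power.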
Wolf, Erika J.; Harrington, Kelly M.; Clark, Shaunna L.; Miller, Mark W.
2015-01-01
Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb. This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs. Across a series of simulations, we systematically varied key model properties, including number of indicators and factors, magnitude of factor loadings and path coefficients, and amount of missing data. We investigated how changes in these parameters affected sample size requirements with respect to statistical power, bias in the parameter estimates, and overall solution propriety. Results revealed a range of sample size requirements (i.e., from 30 to 460 cases), meaningful patterns of association between parameters and sample size, and highlight the limitations of commonly cited rules-of-thumb. The broad “lessons learned” for determining SEM sample size requirements are discussed. PMID:25705052
Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach
ERIC Educational Resources Information Center
de la Torre, Jimmy; Hong, Yuan
2010-01-01
Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…
Teoh, Wei Lin; Khoo, Michael B C; Teh, Sin Yin
2013-01-01
Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the right-skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed. PMID:23935873
Link, W.A.; Nichols, J.D.
1994-01-01
Our purpose here is to emphasize the need to properly deal with sampling variance when studying population variability and to present a means of doing so. We present an estimator for temporal variance of population size for the general case in which there are both sampling variances and covariances associated with estimates of population size. We illustrate the estimation approach with a series of population size estimates for black-capped chickadees (Parus atricapillus) wintering in a Connecticut study area and with a series of population size estimates for breeding populations of ducks in southwestern Manitoba.
Lawson, Chris A
2014-07-01
Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction: the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. PMID:24439115
NASA Technical Reports Server (NTRS)
Bice, K.; Clement, S. C.
1981-01-01
X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
McCain, J.D.; Dawes, S.S.; Farthing, W.E.
1986-05-01
The report is Attachment No. 2 to the Final Report of ARB Contract A3-092-32 and provides a tutorial on the use of Cascade (Series) Cyclones to obtain size-fractionated particulate samples from industrial flue gases at stationary sources. The instrumentation and procedures described are designed to protect the purity of the collected samples so that post-test chemical analysis may be performed for organic and inorganic compounds, including instrumental analysis for trace elements. The instrumentation described collects bulk quantities for each of six size fractions over the range 10 to 0.4 micrometer diameter. The report describes the operating principles, calibration, and empirical modeling of small cyclone performance. It also discusses the preliminary calculations, operation, sample retrieval, and data analysis associated with the use of cyclones to obtain size-segregated samples and to measure particle-size distributions.
NASA Technical Reports Server (NTRS)
Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.
2003-01-01
This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of radiation-hardened memories.
Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model
Taylor, Douglas J.; Muller, Keith E.
2013-01-01
The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting uncertainty associated with such point estimates. Previous authors studied an asymptotically unbiased method of obtaining confidence intervals for noncentrality and power of the general linear univariate model in this setting. We provide exact confidence intervals for noncentrality, power, and sample size. Such confidence intervals, particularly one-sided intervals, help in planning a future study and in evaluating existing studies. PMID:24039272
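The abstract's point that computed power is itself a random variable is easy to demonstrate numerically. The sketch below (a generic illustration, not the authors' exact confidence-interval method) computes two-sample t-test power from a plugged-in variance estimate via the noncentral t distribution; the sample size, effect, and candidate SD estimates are made-up inputs.

```python
import numpy as np
from scipy import stats

def t_test_power(n_per_group, delta, sigma, alpha=0.05):
    """Power of a two-sided, two-sample t-test for mean difference `delta`,
    common SD `sigma`, and `n_per_group` subjects per arm."""
    df = 2 * n_per_group - 2
    ncp = delta / (sigma * np.sqrt(2.0 / n_per_group))  # noncentrality
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(reject) under the alternative (ignoring the negligible lower-tail term)
    return 1 - stats.nct.cdf(t_crit, df, ncp)

# The computed power swings widely with the variance estimate plugged in:
for s in (8.0, 10.0, 12.0):  # three plausible pilot-study SD estimates
    print(round(t_test_power(50, 5.0, s), 3))
```

Because the pilot SD is itself an estimate, each printed power value would inherit that sampling uncertainty, which is precisely what the exact confidence intervals in the paper quantify.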
NASA Astrophysics Data System (ADS)
Zan, Jinbo; Fang, Xiaomin; Yang, Shengli; Yan, Maodu
2015-01-01
Previous studies demonstrate that particle size separation based on gravitational settling and detailed rock magnetic measurements of the resulting fractionated samples constitutes an effective approach to evaluating the relative contributions of pedogenic and detrital components in the loess and paleosol sequences on the Chinese Loess Plateau. So far, however, similar work has not been undertaken on the loess deposits in Central Asia. In this paper, 17 loess and paleosol samples from three representative loess sections in Central Asia were separated into four grain size fractions, and then systematic rock magnetic measurements were made on the fractions. Our results demonstrate that the content of the <4 μm fraction in the Central Asian loess deposits is relatively low and that the samples generally have a unimodal particle distribution with a peak in the medium-coarse silt range. We find no significant difference between the particle size distributions obtained by the laser diffraction and the pipette and wet sieving methods. Rock magnetic studies further demonstrate that the medium-coarse silt fraction (e.g., the 20-75 μm fraction) provides the main control on the magnetic properties of the loess and paleosol samples in Central Asia. The contribution of pedogenically produced superparamagnetic (SP) and stable single-domain (SD) magnetic particles to the bulk magnetic properties is very limited. In addition, the coarsest fraction (>75 μm) exhibits the minimum values of χ, χARM, and SIRM, demonstrating that the concentrations of ferrimagnetic grains are not positively correlated with the bulk particle size in the Central Asian loess deposits.
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
ERIC Educational Resources Information Center
Kelley, Ken; Rausch, Joseph R.
2011-01-01
Longitudinal studies are necessary to examine individual change over time, with group status often being an important variable in explaining some individual differences in change. Although sample size planning for longitudinal studies has focused on statistical power, recent calls for effect sizes and their corresponding confidence intervals…
A margin based approach to determining sample sizes via tolerance bounds.
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
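A one-sided normal tolerance bound of the kind the abstract describes can be computed from the noncentral t distribution. The sketch below illustrates the general tolerance-factor machinery, not the report's specific QMU procedure; the function names and the margin threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def tolerance_factor(n, coverage=0.95, confidence=0.95):
    """One-sided normal tolerance-bound factor k: xbar + k*s bounds the
    `coverage` quantile with probability `confidence` (noncentral-t form)."""
    zp = stats.norm.ppf(coverage)
    return stats.nct.ppf(confidence, n - 1, zp * np.sqrt(n)) / np.sqrt(n)

def smallest_n(margin_in_sd_units, coverage=0.95, confidence=0.95, n_max=500):
    """Smallest n whose tolerance factor fits inside the margin
    (margin expressed in units of the sample standard deviation)."""
    for n in range(3, n_max + 1):
        if tolerance_factor(n, coverage, confidence) < margin_in_sd_units:
            return n
    return None

print(tolerance_factor(50))  # about 2.065 for the standard 95/95 case
```

As the abstract argues, the factor (and hence the estimated uncertainty) shrinks toward the normal quantile only as n grows, so a small margin forces a large sample.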
Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence.
Shieh, Gwowen
2016-01-01
Equivalence testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields including medical studies. Although the essential properties of the favorable two one-sided tests of equivalence have been addressed in the literature, the associated power and sample size calculations were illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determinations under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for medical sciences to plan equivalence studies. PMID:27598468
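A quick way to sanity-check any analytic TOST power calculation is Monte Carlo simulation. The sketch below estimates TOST power for a two-group design with equivalence margins (-delta, +delta); it is a generic illustration, not the article's R/SAS code, and all parameter values are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def tost_power_sim(n, true_diff, sigma, delta, alpha=0.05, reps=20000):
    """Monte Carlo power of the two one-sided tests (TOST) procedure:
    declare equivalence only if both one-sided t-tests reject."""
    df = 2 * n - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    x = rng.normal(true_diff, sigma, (reps, n))
    y = rng.normal(0.0, sigma, (reps, n))
    diff = x.mean(1) - y.mean(1)
    se = np.sqrt((x.var(1, ddof=1) + y.var(1, ddof=1)) / n)
    t_lower = (diff + delta) / se   # H0: true difference <= -delta
    t_upper = (diff - delta) / se   # H0: true difference >= +delta
    reject = (t_lower > t_crit) & (t_upper < -t_crit)
    return reject.mean()

print(tost_power_sim(100, 0.0, 1.0, 0.5))  # power under exact equality
```

Exact methods like the paper's replace this simulation loop with closed-form integration of the bivariate rejection region, but the simulated values should agree to Monte Carlo error.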
Size and modal analyses of fines and ultrafines from some Apollo 17 samples
NASA Technical Reports Server (NTRS)
Greene, G. M.; King, D. T., Jr.; Banholzer, G. S., Jr.; King, E. A.
1975-01-01
Scanning electron and optical microscopy techniques have been used to determine the grain-size frequency distributions and morphology-based modal analyses of fine and ultrafine fractions of some Apollo 17 regolith samples. There are significant and large differences between the grain-size frequency distributions of the less than 10-micron size fraction of Apollo 17 samples, but there are no clear relations to the local geologic setting from which individual samples have been collected. This may be due to effective lateral mixing of regolith particles in this size range by micrometeoroid impacts. None of the properties of the frequency distributions support the idea of selective transport of any fine grain-size fraction, as has been proposed by other workers. All of the particle types found in the coarser size fractions also occur in the less than 10-micron particles. In the size range from 105 to 10 microns there is a strong tendency for the percentage of regularly shaped glass to increase as the graphic mean grain size of the less than 1-mm size fraction decreases, both probably being controlled by exposure age.
Sample size calculations for surveys to substantiate freedom of populations from infectious agents.
Johnson, Wesley O; Su, Chun-Lung; Gardner, Ian A; Christensen, Ronald
2004-03-01
We develop a Bayesian approach to sample size computations for surveys designed to provide evidence of freedom from a disease or from an infectious agent. A population is considered "disease-free" when the prevalence or probability of disease is less than some threshold value. Prior distributions are specified for diagnostic test sensitivity and specificity and we test the null hypothesis that the prevalence is below the threshold. Sample size computations are developed using hypergeometric sampling for finite populations and binomial sampling for infinite populations. A normal approximation is also developed. Our procedures are compared with the frequentist methods of Cameron and Baldock (1998a, Preventive Veterinary Medicine 34, 1-17) using an example of foot-and-mouth disease. User-friendly programs for sample size calculation and analysis of survey data are available at http://www.epi.ucdavis.edu/diagnostictests/. PMID:15032786
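For the infinite-population (binomial) case with perfect specificity, the classical frequentist calculation that the Bayesian approach is compared against reduces to a one-liner. This sketch follows that frequentist-style formula, not the paper's Bayesian procedure; the design prevalence and sensitivity are made-up inputs.

```python
import math

def freedom_sample_size(design_prev, sensitivity, confidence=0.95):
    """Animals to sample so that, if the agent is present at `design_prev`,
    at least one test-positive occurs with probability `confidence`.
    Binomial (infinite-population) model, perfect specificity assumed."""
    p_detect = design_prev * sensitivity  # P(a sampled animal tests positive)
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

print(freedom_sample_size(0.02, 0.90))  # 2% design prevalence, 90% sensitivity -> 165
```

An imperfect test inflates the requirement: with a perfectly sensitive test the same survey would need only 149 animals.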
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
Minimum Sample Size for Cronbach's Coefficient Alpha: A Monte-Carlo Study
ERIC Educational Resources Information Center
Yurdugul, Halil
2008-01-01
The coefficient alpha is the most widely used measure of internal consistency for composite scores in the educational and psychological studies. However, due to the difficulties of data gathering in psychometric studies, the minimum sample size for the sample coefficient alpha has been frequently debated. There are various suggested minimum sample…
Generating Random Samples of a Given Size Using Social Security Numbers.
ERIC Educational Resources Information Center
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
Computer program for sample sizes required to determine disease incidence in fish populations
Ossiander, Frank J.; Wedemeyer, Gary
1973-01-01
A computer program is described for generating the sample size tables required in fish hatchery disease inspection and certification. The program was designed to aid in detection of infectious pancreatic necrosis (IPN) in salmonids, but it is applicable to any fish disease inspection when the sampling plan follows the hypergeometric distribution.
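Under a hypergeometric sampling plan like the one mentioned above, the required sample size can be found by direct search: take the smallest n for which a sample containing zero infected fish would be improbable if the lot truly contained the assumed number of infected fish. A minimal sketch (the lot size and infection level are illustrative, not taken from the program described):

```python
from math import comb

def detect_sample_size(N, assumed_infected, confidence=0.95):
    """Smallest sample size n, drawn without replacement from a lot of N
    fish of which `assumed_infected` are infected, such that the chance of
    seeing zero infected fish in the sample is at most 1 - confidence."""
    for n in range(1, N + 1):
        # hypergeometric P(0 infected fish in a sample of n)
        p_miss = comb(N - assumed_infected, n) / comb(N, n)
        if p_miss <= 1 - confidence:
            return n
    return N

# e.g. lot of 1000 fish, assumed 2% infection, 95% confidence of detection
print(detect_sample_size(1000, 20))
```

Tabulating this function over lot sizes and assumed infection levels reproduces the kind of inspection tables the program generates.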
The Effects of Sample Size, Estimation Methods, and Model Specification on SEM Indices.
ERIC Educational Resources Information Center
Fan, Xitao; And Others
A Monte Carlo simulation study was conducted to investigate the effects of sample size, estimation method, and model specification on structural equation modeling (SEM) fit indices. Based on a balanced 3x2x5 design, a total of 6,000 samples were generated from a prespecified population covariance matrix, and eight popular SEM fit indices were…
Macrobenthic data from samples taken in 1980, 1983 and 1985 along a pollution gradient in the Southern California Bight (USA) were analyzed at 5 taxonomic levels (species, genus, family, order, phylum) to determine the taxon and sample size sufficient for assessing pollution impa...
Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests
ERIC Educational Resources Information Center
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
2015-01-01
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
ERIC Educational Resources Information Center
Finch, W. Holmes; Finch, Maria E. Hernandez
2016-01-01
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…
Fienen, Michael N.; Selbig, William R.
2012-01-01
A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.
Study on Proper Sample Size for Multivariate Frequency Analysis for Rainfall Quantile
NASA Astrophysics Data System (ADS)
Joo, K.; Nam, W.; Choi, S.; Heo, J. H.
2014-12-01
A given rainfall event can be characterized by properties such as rainfall depth (amount), duration, and intensity. Considering these factors simultaneously explains the actual rainfall phenomenon better than a univariate model. Recently, applications of multivariate analysis to hydrological data such as extreme rainfall, drought and flood events are increasing rapidly. Theoretically, if a univariate frequency analysis requires a sample of size n, an analysis on a two-dimensional sample space requires on the order of n² samples. The main objective of this study is to estimate the appropriate sample size for bivariate frequency analysis (especially using copula models) of rainfall data. Hourly data (1961-2010) recorded at the Seoul weather station of the Korea Meteorological Administration (KMA) are used for the frequency analysis, and three copula models (Clayton, Frank, Gumbel) are applied. Parameters are estimated by pseudo-likelihood, and the mean square error (MSE) is estimated for various sample sizes using the peaks-over-threshold (POT) concept. As a result, the estimated thresholds of rainfall depth are 65.4 mm for Clayton, 74.2 mm for Frank, and 76.9 mm for Gumbel, respectively.
Constrained statistical inference: sample-size tables for ANOVA and regression
Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves
2015-01-01
Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction, when an increasing number of constraints is included into the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a pre-specified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30–50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0). PMID:25628587
EFFECTS OF SAMPLE SIZE ON THE STRESS-PERMEABILITY RELATIONSHIP FOR NATURAL FRACTURES
Gale, J. E.; Raven, K. G.
1980-10-01
Five granite cores (10.0, 15.0, 19.3, 24.5, and 29.4 cm in diameter) containing natural fractures oriented normal to the core axis, were used to study the effect of sample size on the permeability of natural fractures. Each sample, taken from the same fractured plane, was subjected to three uniaxial compressive loading and unloading cycles with a maximum axial stress of 30 MPa. For each loading and unloading cycle, the flowrate through the fracture plane from a central borehole under constant (±2% of the pressure increment) injection pressures was measured at specified increments of effective normal stress. Both fracture deformation and flowrate exhibited highly nonlinear variation with changes in normal stress. Both fracture deformation and flowrate hysteresis between loading and unloading cycles were observed for all samples, but this hysteresis decreased with successive loading cycles. The results of this study suggest that a sample-size effect exists. Fracture deformation and flowrate data indicate that crushing of the fracture plane asperities occurs in the smaller samples because of a poorer initial distribution of contact points than in the larger samples, which deform more elastically. Steady-state flow tests also suggest a decrease in minimum fracture permeability at maximum normal stress with increasing sample size for four of the five samples. Regression analyses of the flowrate and fracture closure data suggest that deformable natural fractures deviate from the cubic relationship between fracture aperture and flowrate and that this is especially true for low flowrates and small apertures, when the fracture sides are in intimate contact under high normal stress conditions. In order to confirm the trends suggested in this study, it is necessary to quantify the scale and variation of fracture plane roughness and to determine, from additional laboratory studies, the degree of variation in the stress-permeability relationship between samples of the same
Brera, Carlo; De Santis, Barbara; Prantera, Elisabetta; Debegnach, Francesca; Pannunzi, Elena; Fasano, Floriana; Berdini, Clara; Slate, Andrew B; Miraglia, Marina; Whitaker, Thomas B
2010-08-11
Use of proper sampling methods throughout the agri-food chain is crucial when it comes to effectively detecting contaminants in foods and feeds. The objective of the study was to estimate the performance of sampling plan designs to determine aflatoxin B1 (AFB1) contamination in corn fields. A total of 840 ears were selected from a corn field suspected of being contaminated with aflatoxin. The mean and variance among the aflatoxin values for each ear were 10.6 μg/kg and 2233.3, respectively. The variability and confidence intervals associated with sample means of a given size could be predicted using an equation associated with the normal distribution. Sample sizes of 248 and 674 ears would be required to estimate the true field concentration of 10.6 μg/kg within ±50% and ±30%, respectively. Using the distribution information from the study, operating characteristic curves were developed to show the performance of various sampling plan designs. PMID:20608734
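The statement that sample-mean variability "could be predicted using an equation associated with the normal distribution" corresponds to the generic precision formula n = z²σ²/(r·mean)². The sketch below applies that generic formula with z = 1.96, which is an assumption; the authors' own constants evidently differ, so these numbers need not reproduce the reported 248 and 674 ears.

```python
import math

# Mean and variance are the values reported in the abstract.
mean, var = 10.6, 2233.3

def n_for_precision(rel_error, z=1.96):
    """Ears needed so the sample mean falls within +/-rel_error of the
    true mean with ~95% confidence (normal approximation)."""
    half_width = rel_error * mean
    return math.ceil(z**2 * var / half_width**2)

print(n_for_precision(0.50), n_for_precision(0.30))  # prints: 306 849
```

The huge ear-to-ear variance (coefficient of variation well above 100%) is what drives these sample sizes into the hundreds.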
Power and sample size calculations for Mendelian randomization studies using one genetic instrument.
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
2013-08-01
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size. PMID:23934314
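The proportionalities stated in the abstract can be collected into a single approximate formula. The exact expression below is an assumption pieced together from those proportionalities, not the paper's published formula, and every parameter value is illustrative.

```python
import math
from scipy import stats

def mr_sample_size(beta, rho_gx, var_x, resid_var_y, alpha=0.05, power=0.8):
    """Approximate n for a one-instrument Mendelian randomization study:
    proportional to the residual outcome variance, inversely proportional
    to the squared effect size `beta` and to the squared gene-exposure
    correlation `rho_gx` (assumed form; check against the paper)."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return math.ceil(z**2 * resid_var_y / (beta**2 * rho_gx**2 * var_x))

# Halving the instrument-exposure correlation roughly quadruples n:
print(mr_sample_size(0.5, 0.1, 1.0, 1.0), mr_sample_size(0.5, 0.05, 1.0, 1.0))
```

The quadratic penalty on the instrument strength is the practical takeaway: weak instruments demand very large studies.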
Son, Dae-Soon; Lee, DongHyuk; Lee, Kyusang; Jung, Sin-Ho; Ahn, Taejin; Lee, Eunjin; Sohn, Insuk; Chung, Jongsuk; Park, Woongyang; Huh, Nam; Lee, Jae Won
2015-02-01
An empirical method of sample size determination for building prediction models was proposed recently. The permutation method used in this procedure is commonly applied to address overfitting during cross-validation when evaluating the performance of prediction models constructed from microarray data. A major drawback of such methods, which include bootstrapping and full permutation, is the prohibitively high computational cost of calculating the sample size. In this paper, we propose that a single representative null distribution can be used in place of a full permutation, using both simulated and real data sets. In simulation, we used a data set with zero effect size and confirmed that the empirical type I error approaches 0.05, so the method can be confidently applied to reduce the overfitting problem during cross-validation. We also observed that a pilot data set generated by random sampling from real data can be used successfully for sample size determination. We present results from an experiment repeated 300 times that produced results comparable to those of the full permutation method. Because full permutation is eliminated, sample size estimation time is no longer a function of pilot data size; in our experiment the process took around 30 min. With the increasing number of clinical studies, developing efficient sample size determination methods for building prediction models is critical, but empirical methods using the bootstrap and permutation usually involve high computing costs. In this study, we propose a method that drastically reduces the required computing time by using a representative null distribution of permutations. We use data from pilot experiments to apply this method to the efficient design of clinical studies for high-throughput data. PMID:25555898
Shirazi, Mohammadali; Lord, Dominique; Geedipally, Srinivas Reddy
2016-08-01
The Highway Safety Manual (HSM) prediction models are fitted and validated based on crash data collected from a selected number of states in the United States. Therefore, for a jurisdiction to be able to fully benefit from applying these models, it is necessary to calibrate or recalibrate them to local conditions. The first edition of the HSM recommends calibrating the models using a one-size-fits-all sample size of 30-50 locations with a total of at least 100 crashes per year. However, the HSM recommendation is not fully supported by documented studies. The objectives of this paper are consequently: (1) to examine the required sample size based on the characteristics of the data that will be used for the calibration or recalibration process; and (2) to propose revised guidelines. The objectives were accomplished using simulation runs for different scenarios that characterized the sample mean and variance of the data. The simulation results indicate that as the ratio of the standard deviation to the mean (i.e., the coefficient of variation) of the crash data increases, a larger sample size is warranted to fulfill certain levels of accuracy. Taking this observation into account, sample-size guidelines were prepared based on the coefficient of variation of the crash data that are needed for the calibration process. The guidelines were then successfully applied to two observed datasets. The proposed guidelines can be used for all facility types and for both segment and intersection prediction models. PMID:27183517
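A rough simulation in the spirit of the paper's approach shows why a larger coefficient of variation demands more calibration sites. The overdispersed-count model (gamma-Poisson mixture) and all parameter values here are assumptions, not the authors' scenarios:

```python
import numpy as np

rng = np.random.default_rng(7)

def calibration_factor_cv(n_sites, mean_crashes, cv, n_sims=2000):
    """Spread (CV across simulations) of the calibration factor
    C = sum(observed) / sum(predicted) when site crash counts are
    overdispersed around the prediction with coefficient of variation
    cv (assumed gamma-Poisson mixture)."""
    var = (cv * mean_crashes) ** 2
    lam = rng.gamma(mean_crashes ** 2 / var, var / mean_crashes,
                    size=(n_sims, n_sites))
    obs = rng.poisson(lam)
    c = obs.sum(axis=1) / (n_sites * mean_crashes)
    return float(np.std(c) / np.mean(c))

for cv in (0.5, 1.0, 2.0):
    print(cv, round(calibration_factor_cv(50, 4.0, cv), 3))
```

For a fixed 50-site sample, the calibration factor becomes markedly noisier as the data's coefficient of variation grows, which is the behavior motivating CV-based sample-size guidelines.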
Demonstration of multi- and single-reader sample size program for diagnostic studies software
NASA Astrophysics Data System (ADS)
Hillis, Stephen L.; Schartz, Kevin M.
2015-03-01
The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with common reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step, intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.
Sample size estimation for the sorcerer's apprentice. Guide for the uninitiated and intimidated.
Ray, J. G.; Vermeulen, M. J.
1999-01-01
OBJECTIVE: To review the importance of and practical application of sample size determination for clinical studies in the primary care setting. QUALITY OF EVIDENCE: A MEDLINE search was performed from January 1966 to January 1998 using the MeSH headings and text words "sample size," "sample estimation," and "study design." Article references, medical statistics texts, and university colleagues were also consulted for recommended resources. Citations that offered a clear and simple approach to sample size estimation were accepted, specifically those related to statistical analyses commonly applied in primary care research. MAIN MESSAGE: The chance of committing an alpha statistical error, or finding that there is a difference between two groups when there really is none, is usually set at 5%. The probability of finding no difference between two groups, when, in actuality, there is a difference, is commonly accepted at 20%, and is called the beta error. The power of a study, usually set at 80% (i.e., 1 minus beta), defines the probability that a true difference will be observed between two groups. Using these parameters, we provide examples for estimating the required sample size for comparing two means (t test), comparing event rates between two groups, calculating an odds ratio or a correlation coefficient, or performing a meta-analysis. Estimation of sample size needed before initiation of a study enables statistical power to be maximized and bias minimized, increasing the validity of the study. CONCLUSION: Sample size estimation can be done by any novice researcher who wishes to maximize the quality of his or her study. PMID:10424273
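The two-means (t test) case described above reduces, in the normal approximation with the abstract's conventional alpha = 5% and power = 80%, to a one-line formula. The effect size and standard deviation in the example are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Sample size per group to detect a difference delta between two
    means with common standard deviation sd (normal approximation to
    the two-sample t test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha error
    z_b = NormalDist().inv_cdf(power)          # beta error = 1 - power
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# hypothetical example: detect a 5-point difference, SD = 10
print(n_per_group(delta=5, sd=10))  # → 63
```

Halving the detectable difference quadruples the required sample, which is why pre-study calculation matters so much.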
Comparative studies of grain size separates of 60009. [lunar soil samples
NASA Technical Reports Server (NTRS)
Mckay, D. S.; Morris, R. V.; Dungan, M. A.; Fruland, R. M.; Fuhrman, R.
1976-01-01
Five samples from 60009, the lower half of a double drive tube, were analyzed via grain-size methods, with particle types classified and counted in the coarser grain sizes. Studies were undertaken of particle types and distributions by petrographic methods, of magnetic fractions, of the size splits and magnetic splits as analyzed by ferromagnetic resonance (FMR) techniques, of maturity (based on agglutinate content, FMR index Is/FeO, mean size of sub-cm material, magnetic fraction), of possible reworking or mixing in situ, and of depositional history. Maturity indices are in substantial agreement for all of the five samples. Strong positive correlation of percent agglutinates and percent bedrock-derived lithic fragments, combined with negative correlation of those components with percent single crystal plagioclase, argue against in situ reworking of the same soil.
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David
2013-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4-8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and type-I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of 8 fish could detect an increase of ∼ 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of ∼ 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2 this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of ∼ 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated by increased precision of composites for estimating mean
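The parametric-bootstrap power calculation described above can be sketched with a gamma population model. The coefficient of variation below is an assumed stand-in for the paper's empirical mean-to-variance relationship, and the threshold and mean values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_above_threshold(true_mean, threshold, n_fish, cv=0.5,
                          alpha=0.05, n_boot=2000):
    """Parametric-bootstrap power: probability that the mean of n_fish
    gamma-distributed Se concentrations exceeds the alpha-level
    critical value of a null population sitting exactly at the
    threshold.  cv is an assumed coefficient of variation."""
    shape = 1.0 / cv ** 2                      # gamma shape from the CV
    null_means = rng.gamma(shape, cv ** 2 * threshold,
                           size=(n_boot, n_fish)).mean(axis=1)
    crit = np.quantile(null_means, 1 - alpha)  # reject when mean > crit
    alt_means = rng.gamma(shape, cv ** 2 * true_mean,
                          size=(n_boot, n_fish)).mean(axis=1)
    return float((alt_means > crit).mean())

# power to detect a true mean of 5 against a 4 mg Se/kg threshold
print(power_above_threshold(5.0, 4.0, n_fish=8))
print(power_above_threshold(5.0, 4.0, n_fish=30))
```

Rerunning this over a grid of thresholds and sample sizes reproduces the qualitative result of the abstract: higher thresholds (more heterogeneous populations) need more fish for the same power.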
A practical comparison of blinded methods for sample size reviews in survival data clinical trials.
Todd, Susan; Valdés-Márquez, Elsa; West, Jodie
2012-01-01
This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review, on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. PMID:22337635
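Sample size work in proportional-hazards trials is usually framed in terms of required events. Schoenfeld's standard approximation (a general formula, not this paper's blinded extrapolation methods) can be sketched as:

```python
import math
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's approximation for the number of events needed to
    detect hazard ratio hr in a two-arm proportional-hazards trial;
    alloc is the proportion allocated to one arm."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil((z_a + z_b) ** 2 /
                     (alloc * (1 - alloc) * math.log(hr) ** 2))

# hypothetical target: hazard ratio 0.7, 1:1 allocation
print(required_events(hr=0.7))  # → 247
```

A blinded review then asks only whether the pooled event rate is on track to deliver this event count, without unblinding the treatment effect.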
Mesh-size effects on drift sample composition as determined with a triple net sampler
Slack, K.V.; Tilley, L.J.; Kennelly, S.S.
1991-01-01
Nested nets of three different mesh apertures were used to study mesh-size effects on drift collected in a small mountain stream. The innermost, middle, and outermost nets had, respectively, 425 µm, 209 µm, and 106 µm openings, a design that reduced clogging while partitioning collections into three size groups. The open area of mesh in each net, from largest to smallest mesh opening, was 3.7, 5.7, and 8.0 times the area of the net mouth. Volumes of filtered water were determined with a flowmeter. The results are expressed as (1) drift retained by each net, (2) drift that would have been collected by a single net of given mesh size, and (3) the percentage of total drift (the sum of the catches from all three nets) that passed through the 425 µm and 209 µm nets. During a two-day period in August 1986, Chironomidae larvae were dominant numerically in all 209 µm and 106 µm samples and midday 425 µm samples. Large drifters (Ephemerellidae) occurred only in 425 µm or 209 µm nets, but the general pattern was an increase in abundance and number of taxa with decreasing mesh size. Relatively more individuals occurred in the larger mesh nets at night than during the day. The two larger mesh sizes retained 70% of the total sediment/detritus in the drift collections, and this decreased the rate of clogging of the 106 µm net. If an objective of a sampling program is to compare drift density or drift rate between areas or sampling dates, the same mesh size should be used for all sample collection and processing. The mesh aperture used for drift collection should retain all species and life stages of significance in a study. The nested net design enables an investigator to test the adequacy of drift samples. © 1991 Kluwer Academic Publishers.
NASA Technical Reports Server (NTRS)
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates as obtained by a recursive stochastic algorithm of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
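Why a surprisingly small sample suffices when structure is exploited can be illustrated with a small covariance-estimation sketch. The AR(1)-type Toeplitz covariance and the dimensions are assumptions, and diagonal averaging (the Frobenius projection onto Toeplitz matrices) stands in for the paper's recursive stochastic algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 20                                     # dimension
idx = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
true_cov = 0.7 ** idx                      # Toeplitz (AR(1)-like) covariance
x = rng.standard_normal((30, p)) @ np.linalg.cholesky(true_cov).T

s = np.cov(x, rowvar=False)                # unstructured estimate, n = 30
r = np.array([np.diagonal(s, k).mean() for k in range(p)])
s_toep = r[idx]                            # structured (Toeplitz) estimate

err_unstructured = np.linalg.norm(s - true_cov)
err_structured = np.linalg.norm(s_toep - true_cov)
print(err_structured < err_unstructured)   # prints True
```

Because the true covariance lies in the Toeplitz subspace, projecting the sample covariance onto that subspace can only reduce the Frobenius error, mirroring the sample-size advantage of the structured estimator in the abstract.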
Estimation of grain size in asphalt samples using digital image analysis
NASA Astrophysics Data System (ADS)
Källén, Hanna; Heyden, Anders; Lindh, Per
2014-09-01
Asphalt is made of a mixture of stones of different sizes and a binder called bitumen; the size distribution of the stones is determined by the recipe of the asphalt. One quality check of asphalt is to see whether the actual size distribution of asphalt samples is consistent with the recipe. This is usually done by first extracting the binder using methylene chloride and then sieving the stones to see how much passes each sieve size. Methylene chloride is highly toxic, so it is desirable to find the size distribution in some other way. In this paper we find the size distribution by slicing up the asphalt sample and using image analysis techniques to analyze the cross-sections. First the stones are segmented from the background (bitumen), and then rectangles are fitted to the detected stones. We then estimate the sizes of the stones using the widths of the rectangles. The result is compared with both the recipe for the asphalt and the result from the standard analysis method, and our method shows good correlation with both.
Sample size determination for testing equality in a cluster randomized trial with noncompliance.
Lui, Kung-Jong; Chang, Kuang-Chao
2011-01-01
For administrative convenience or cost efficiency, we may often employ a cluster randomized trial (CRT), in which the randomized units are clusters of patients rather than individual patients. Furthermore, for ethical reasons or by patients' decisions, it is not uncommon to encounter data in which some patients do not comply with their assigned treatments. Thus, the development of a sample size calculation procedure for a CRT with noncompliance is important and useful in practice. Under the exclusion restriction model, we have developed an asymptotic test procedure using a tanh⁻¹(x) transformation for testing equality between two treatments among compliers for a CRT with noncompliance. We have further derived a sample size formula accounting for both noncompliance and the intraclass correlation for a desired power 1 - β at a nominal α level. We have employed Monte Carlo simulation to evaluate the finite-sample performance of the proposed test procedure with respect to type I error and the accuracy of the derived sample size calculation formula with respect to power in a variety of situations. Finally, we use data taken from a CRT studying vitamin A supplementation to reduce mortality among preschool children to illustrate the use of the sample size calculation proposed here. PMID:21191850
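A crude back-of-envelope version of the two inflation steps involved, the design effect for clustering and a 1/compliance² factor for noncompliance, is sketched below. This is a common simplification, not the paper's exclusion-restriction formula, and the input values are hypothetical:

```python
import math

def crt_sample_size(n_individual, cluster_size, icc, compliance=1.0):
    """Inflate an individually-randomized sample size for clustering
    (design effect 1 + (m-1)*ICC) and, crudely, for noncompliance
    (divide by compliance squared)."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * deff / compliance ** 2)

# hypothetical: 63 per arm individually, clusters of 20, ICC 0.05,
# 80% compliance
print(crt_sample_size(63, cluster_size=20, icc=0.05, compliance=0.8))  # → 192
```

Even a modest ICC of 0.05 with 20 patients per cluster nearly doubles the sample, and 80% compliance adds a further 56% on top.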
Simulation analyses of space use: Home range estimates, variability, and sample size
Bekoff, M.; Mech, L.D.
1984-01-01
Simulations of space use by animals were run to determine the relationship among home range area estimates, variability, and sample size (number of locations). As sample size increased, home range size increased asymptotically, whereas variability decreased among mean home range area estimates generated by multiple simulations for the same sample size. Our results suggest that field workers should obtain between 100 and 200 locations in order to estimate home range area reliably. In some cases, this suggested guideline is higher than values found in the few published studies in which the relationship between home range area and number of locations is addressed. Sampling differences for small species occupying relatively small home ranges indicate that fewer locations may be sufficient to allow a reliable estimate of home range. Intraspecific variability in social status (group member, loner, resident, transient), age, sex, reproductive condition, and food resources also has to be considered, as do season, habitat, and differences in sampling and analytical methods. Comparative data still are needed.
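The qualitative finding, minimum-convex-polygon area rising asymptotically with the number of locations, is easy to reproduce in a toy simulation. Bivariate-normal, independent locations are an assumption; real telemetry fixes are autocorrelated:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)

def mean_mcp_area(n_locations, n_sims=200):
    """Mean minimum-convex-polygon area over simulated home ranges of
    bivariate-normal locations, illustrating the asymptotic increase
    of MCP area with sample size."""
    areas = []
    for _ in range(n_sims):
        pts = rng.standard_normal((n_locations, 2))
        areas.append(ConvexHull(pts).volume)  # in 2-D, .volume is area
    return float(np.mean(areas))

for n in (10, 50, 200):
    print(n, round(mean_mcp_area(n), 2))
```

The MCP hull can only grow as points are added, so small samples systematically underestimate range area, which is the bias the abstract warns about.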
Size-dependent Turbidimetric Quantification of Mobile Colloids in Field Samples
NASA Astrophysics Data System (ADS)
Yan, J.; Meng, X.; Jin, Y.
2015-12-01
Natural colloids, often defined as entities with sizes < 1.0 μm, have attracted much research attention because of their ability to facilitate the transport of contaminants in the subsurface environment. However, due to their small size and generally low concentrations in field samples, quantification of mobile colloids, especially the smaller fractions (< 0.45 µm), which are operationally defined as dissolved, is largely impeded; hence the natural colloidal pool is greatly overlooked and underestimated. The main objectives of this study are to: (1) develop an experimentally and economically efficient methodology to quantify natural colloids in different size fractions (0.1-0.45 and 0.45-1 µm); (2) quantify mobile colloids, particularly small colloids (< 0.45 µm), in different natural aquatic samples. We measured and generated correlations between mass concentration and turbidity of colloid suspensions, made by extracting and fractionating water-dispersible colloids in 37 soils from different areas in the U.S. and Denmark, for the colloid size fractions 0.1-0.45 and 0.45-1 µm. Results show that the correlation between turbidity and colloid mass concentration is largely affected by colloid size and iron content, indicating the need to generate separate correlations for colloids with constrained size ranges and iron contents. This method enabled quick quantification of colloid concentrations in a large number of field samples collected from freshwater, wetland, and estuarine environments in different size fractions. As a general trend, we observed high concentrations of colloids in the < 0.45 µm fraction, which constitutes a significant percentage of the total mobile colloidal pool (< 1 µm). This observation suggests that the operationally defined cut-off size for the "dissolved" phase can greatly underestimate colloid concentrations and therefore the role that colloids play in the transport of associated contaminants or other elements.
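The turbidity-to-mass calibration at the heart of the method can be sketched as a simple linear fit. All numbers below are hypothetical; the abstract stresses that separate calibrations are needed per size fraction and iron content:

```python
import numpy as np

# Hypothetical calibration data for one size fraction: colloid mass
# concentration (mg/L) vs. measured turbidity (NTU).
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
turb = np.array([3.1, 6.3, 12.0, 24.5, 49.8])

slope, intercept = np.polyfit(turb, conc, 1)  # invert: mass from turbidity

def estimate_conc(turbidity):
    """Estimate colloid mass concentration from a turbidity reading
    using the fraction-specific linear calibration."""
    return slope * turbidity + intercept

print(round(float(estimate_conc(15.0)), 1))
```

With a calibration like this in hand, turbidity readings on many field samples can be converted to colloid concentrations far faster than gravimetric analysis.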
Sample-Size Effects on the Compression Behavior of a Ni-BASED Amorphous Alloy
NASA Astrophysics Data System (ADS)
Liang, Weizhong; Zhao, Guogang; Wu, Linzhi; Yu, Hongjun; Li, Ming; Zhang, Lin
Ni42Cu5Ti20Zr21.5Al8Si3.5 bulk metallic glass rods with diameters of 1 mm and 3 mm were prepared by arc melting of the constituent elements in a Ti-gettered argon atmosphere. The compressive deformation and fracture behavior of amorphous alloy samples of different sizes were investigated with a testing machine and a scanning electron microscope. The compressive stress-strain curves of the 1 mm and 3 mm samples exhibited 4.5% and 0% plastic strain, while the compressive fracture strengths of the 1 mm and 3 mm rods were 4691 MPa and 2631 MPa, respectively. The compressive fracture surface of each sample size consisted of a shear zone and a non-shear zone. Typical vein patterns with some melting drops can be seen in the shear region of the 1 mm rod, while fish-bone-shaped patterns can be observed on the 3 mm specimen surface. Periodic ripples with different spacings existed in the non-shear zones of the 1 and 3 mm rods. On the side surface of the 1 mm sample, a high density of shear bands was observed, along with skipping of shear bands. The mechanisms of the effect of sample size on the fracture strength and plasticity of the Ni-based amorphous alloy are discussed.
Janja Tursic; Irena Grgic; Axel Berner; Jaroslav Skantar; Igor Cuhalev
2008-02-01
A special sampling system for measurements of size-segregated particles directly at the source of emission was designed and constructed. The central part of this system is a low-pressure cascade impactor with 10 collection stages for the size ranges between 15 nm and 16 µm. Its capability and suitability were proven by sampling particles at the stack (100 °C) of a coal-fired power station in Slovenia. These measurements showed very reasonable results in comparison with a commercial cascade impactor for PM10 and PM2.5 and with a plane device for total suspended particulate matter (TSP). The best agreement with the measurements made by a commercial impactor was found for concentrations of TSP above 10 mg m⁻³, i.e., the average PM2.5/PM10 ratios obtained by a commercial impactor and by our impactor were 0.78 and 0.80, respectively. Analysis of selected elements in size-segregated emission particles additionally confirmed the suitability of our system. The measurements showed that the mass size distributions were generally bimodal, with the most pronounced mass peak in the 1-2 µm size range. The first results of elemental mass size distributions showed some distinctive differences in comparison to the most common ambient anthropogenic sources (i.e., traffic emissions). For example, trace elements like Pb, Cd, As, and V, typically related to traffic emissions, are usually more abundant in particles less than 1 µm in size, whereas in our specific case they were found at about 2 µm. Thus, these mass size distributions can be used as a signature of this source. Simultaneous measurements of size-segregated particles at the source and in the surrounding environment can therefore significantly increase the sensitivity of the contribution of a specific source to the actual ambient concentrations. 25 refs., 3 figs., 2 tabs.
[Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].
Fu, Yingkun; Xie, Yanming
2011-10-01
In recent years, as the Chinese government and people have paid more attention to the post-marketing research of Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists engaged in Chinese medicine research. PMID:22292397
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Analysis of variograms with various sample sizes from a multispectral image
Technology Transfer Automated Retrieval System (TEKTRAN)
Variograms play a crucial role in remote sensing applications and geostatistics. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100 X 100 pixel subset was chosen from an aerial multispectral image which contained three wavebands, green, ...
Analysis of variograms with various sample sizes from a multispectral image
Technology Transfer Automated Retrieval System (TEKTRAN)
Variograms play a crucial role in remote sensing applications and geostatistics. It is very important to estimate variograms reliably from sufficient data. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100x100-pixel subset was chosen from ...
ERIC Educational Resources Information Center
Kelley, Ken; Rausch, Joseph R.
2006-01-01
Methods for planning sample size (SS) for the standardized mean difference so that a narrow confidence interval (CI) can be obtained via the accuracy in parameter estimation (AIPE) approach are developed. One method plans SS so that the expected width of the CI is sufficiently narrow. A modification adjusts the SS so that the obtained CI is no…
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
Measurement Model Quality, Sample Size, and Solution Propriety in Confirmatory Factor Models
ERIC Educational Resources Information Center
Gagne, Phill; Hancock, Gregory R.
2006-01-01
Sample size recommendations in confirmatory factor analysis (CFA) have recently shifted away from observations per variable or per parameter toward consideration of model quality. Extending research by Marsh, Hau, Balla, and Grayson (1998), simulations were conducted to determine the extent to which CFA model convergence and parameter estimation…
Cao, Zhiguo; Xu, Fuchao; Li, Wenchao; Sun, Jianhui; Shen, Mohai; Su, Xianfa; Feng, Jinglan; Yu, Gang; Covaci, Adrian
2015-09-15
Particle size is a significant parameter which determines the environmental fate and the behavior of dust particles and, implicitly, the exposure risk of humans to particle-bound contaminants. Currently, the influence of dust particle size on the occurrence and seasonal variation of hexabromocyclododecanes (HBCDs) remains unclear. While HBCDs are now restricted by the Stockholm Convention, information regarding HBCD contamination in indoor dust in China is still limited. We analyzed composite dust samples from offices (n = 22), hotels (n = 3), kindergartens (n = 2), dormitories (n = 40), and main roads (n = 10). Each composite dust sample (one per type of microenvironment) was fractionated into 9 fractions (F1-F9: 2000-900, 900-500, 500-400, 400-300, 300-200, 200-100, 100-74, 74-50, and <50 μm). Total HBCD concentrations ranged from 5.3 ng g⁻¹ (road dust, F4) to 2580 ng g⁻¹ (dormitory dust, F4) in the 45 size-segregated samples. The seasonality of HBCDs in indoor dust was investigated in 40 samples from two offices. A consistent seasonal trend of HBCD levels was evident, with dust collected in the winter being more contaminated with HBCDs than dust from the summer. The particle size-selection strategy for dust analysis was found to influence the HBCD concentrations, with overestimation or underestimation occurring under improper strategies. PMID:26301772
ERIC Educational Resources Information Center
Kim, Su-Young
2012-01-01
Just as growth mixture models are useful with single-phase longitudinal data, multiphase growth mixture models can be used with multiple-phase longitudinal data. One of the practically important issues in single- and multiphase growth mixture models is the sample size requirements for accurate estimation. In a Monte Carlo simulation study, the…
The Influence of Virtual Sample Size on Confidence and Causal-Strength Judgments
ERIC Educational Resources Information Center
Liljeholm, Mimi; Cheng, Patricia W.
2009-01-01
The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an…
Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies
NASA Technical Reports Server (NTRS)
McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.
2010-01-01
This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
One-Sided Nonparametric Comparison of Treatments with a Standard for Unequal Sample Sizes.
ERIC Educational Resources Information Center
Chakraborti, S.; Gibbons, Jean D.
1992-01-01
The one-sided problem of comparing treatments with a standard on the basis of data available in the context of a one-way analysis of variance is examined, and the methodology of S. Chakraborti and J. D. Gibbons (1991) is extended to the case of unequal sample sizes. (SLD)
Bolton tooth size ratio among Sudanese Population sample: A preliminary study
Abdalla Hashim, Ala’a Hayder; Eldin, AL-Hadi Mohi; Hashim, Hayder Abdalla
2015-01-01
Background: The study of the mesiodistal size and morphology of teeth and the dental arch may play an important role in clinical dentistry, as well as in other fields such as forensic dentistry and anthropology. Aims: The aims of the present study were to establish tooth-size ratios in a Sudanese sample with Class I normal occlusion and to compare these ratios between the present study and Bolton's study and between genders. Materials and Methods: The sample consisted of dental casts of 60 subjects (30 males and 30 females). The Bolton formula was used to compute the overall and anterior ratios. The correlation between the anterior and overall ratios was tested, and Student's t-test was used to compare tooth-size ratios between males and females and between the present study and Bolton's results. Results: The overall and anterior ratios were similar to the mean values reported by Bolton, and there were no statistically significant differences in the anterior or overall ratios between males and females. The correlation coefficient between the anterior and overall ratios was r = 0.79. Conclusions: The results obtained were similar to those reported for Caucasians. However, the Sudanese population comprises different ethnic groups, so a firm conclusion is difficult to draw. Because this sample is not representative of the Sudanese population, a further study with a larger sample collected from different parts of Sudan is required. PMID:26229948
Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research
ERIC Educational Resources Information Center
Cook, David A.; Hatala, Rose
2015-01-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…
NASA Astrophysics Data System (ADS)
Gao, Ka; Li, Shuangming; Xu, Lei; Fu, Hengzhi
2014-05-01
Al-40% Cu hypereutectic alloy samples were successfully directionally solidified at a growth rate of 10 μm/s in different sizes (4 mm, 1.8 mm, and 0.45 mm thickness in transverse section). Using the serial sectioning technique, the three-dimensional (3D) microstructure of the primary intermetallic Al2Cu phase was observed to exhibit various growth patterns: L-shaped, E-shaped, and regular rectangular, with growth orientations on the (110) and (310) planes. The L-shaped and regular rectangular Al2Cu crystals are bounded by {110} facets. When the sample size was reduced from 4 mm to 0.45 mm, the solidified microstructure changed from multi-layer dendrites to a single-layer dendrite along the growth direction, with an orientation texture on the (310) plane. The growth mechanism of the regular faceted intermetallic Al2Cu at different sample sizes is interpreted by the oriented attachment (OA) mechanism. The experimental results show that directionally solidified Al-40% Cu alloy samples of much smaller size can achieve a well-aligned morphology with a specific growth texture.
Approaches to sample size estimation in the design of clinical trials--a review.
Donner, A
1984-01-01
Over the last decade, considerable interest has focused on sample size estimation in the design of clinical trials. The resulting literature is scattered over many textbooks and journals. This paper presents these methods in a single review and comments on their application in practice. PMID:6385187
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models
ERIC Educational Resources Information Center
Shieh, Gwowen
2007-01-01
The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…
The Relation among Fit Indexes, Power, and Sample Size in Structural Equation Modeling
ERIC Educational Resources Information Center
Kim, Kevin H.
2005-01-01
The relation among fit indexes, power, and sample size in structural equation modeling is examined. The noncentrality parameter is required to compute power. The 2 existing methods of computing power have estimated the noncentrality parameter by specifying an alternative hypothesis or alternative fit. These methods cannot be implemented easily and…
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitivity loss function and the least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. Also, the performance of SVR depends on both kernel function…
Forestry inventory based on multistage sampling with probability proportional to size
NASA Technical Reports Server (NTRS)
Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.
1983-01-01
A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
Sabharwal, Sanjeeve; Patel, Nirav K; Holloway, Ian; Athanasiou, Thanos
2015-03-01
The purpose of this study was to identify how often sample size calculations were reported in recent orthopaedic randomized controlled trials (RCTs) and to determine what proportion of studies that failed to find a significant treatment effect were at risk of type II error. A pre-defined computerized search was performed in MEDLINE to identify RCTs published in 2012 in the 20 highest ranked orthopaedic journals based on impact factor. Data from these studies was used to perform post hoc analysis to determine whether each study was sufficiently powered to detect a small (0.2), medium (0.5) and large (0.8) effect size as defined by Cohen. Sufficient power (1-β) was considered to be 80% and a two-tailed test was performed with an alpha value of 0.05. 120 RCTs were identified using our stated search protocol and just 73 studies (60.80%) described an appropriate sample size calculation. Examination of studies with a negative primary outcome revealed that 68 (93.15%) were at risk of type II error for a small treatment effect and only 4 (5.48%) were at risk of type II error for a medium sized treatment effect. Although comparison of the results with existing data from over 10 years ago suggests improved practice in sample size calculations within orthopaedic surgery, there remains an ongoing need for improvement of practice. Orthopaedic researchers, as well as journal reviewers and editors have a responsibility to ensure that RCTs conform to standardized methodological guidelines and perform appropriate sample size calculations. PMID:26280864
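The post hoc calculation described in this abstract (two-tailed test, α = 0.05, power 80%, Cohen's small/medium/large effect sizes) can be approximated with the standard normal-approximation formula for a two-sample comparison of means, n per arm = 2((z₁₋α/₂ + z₁₋β)/d)². This is a generic sketch, not the authors' exact software; exact t-based methods give slightly larger values.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-tailed, two-sample
    comparison of means with standardized effect size d (Cohen)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Cohen's small, medium and large effects at 80% power, alpha = 0.05
for d in (0.2, 0.5, 0.8):
    print(d, n_per_arm(d))
```

For a medium effect (d = 0.5) this gives about 63 subjects per arm, which makes concrete why trials with a few dozen patients per group are underpowered for small effects.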
Sample sizes for brain atrophy outcomes in trials for secondary progressive multiple sclerosis
Altmann, D R.; Jasperse, B; Barkhof, F; Beckmann, K; Filippi, M; Kappos, L D.; Molyneux, P; Polman, C H.; Pozzilli, C; Thompson, A J.; Wagner, K; Yousry, T A.; Miller, D H.
2009-01-01
Background: Progressive brain atrophy in multiple sclerosis (MS) may reflect neuroaxonal and myelin loss and MRI measures of brain tissue loss are used as outcome measures in MS treatment trials. This study investigated sample sizes required to demonstrate reduction of brain atrophy using three outcome measures in a parallel group, placebo-controlled trial for secondary progressive MS (SPMS). Methods: Data were taken from a cohort of 43 patients with SPMS who had been followed up with 6-monthly T1-weighted MRI for up to 3 years within the placebo arm of a therapeutic trial. Central cerebral volumes (CCVs) were measured using a semiautomated segmentation approach, and brain volume normalized for skull size (NBV) was measured using automated segmentation (SIENAX). Change in CCV and NBV was measured by subtraction of baseline from serial CCV and SIENAX images; in addition, percentage brain volume change relative to baseline was measured directly using a registration-based method (SIENA). Sample sizes for given treatment effects and power were calculated for standard analyses using parameters estimated from the sample. Results: For a 2-year trial duration, minimum sample sizes per arm required to detect a 50% treatment effect at 80% power were 32 for SIENA, 69 for CCV, and 273 for SIENAX. Two-year minimum sample sizes were smaller than 1-year by 71% for SIENAX, 55% for CCV, and 44% for SIENA. Conclusion: SIENA and central cerebral volume are feasible outcome measures for inclusion in placebo-controlled trials in secondary progressive multiple sclerosis. GLOSSARY ANCOVA = analysis of covariance; CCV = central cerebral volume; FSL = FMRIB Software Library; MNI = Montreal Neurological Institute; MS = multiple sclerosis; NBV = normalized brain volume; PBVC = percent brain volume change; RRMS = relapsing–remitting multiple sclerosis; SPMS = secondary progressive multiple sclerosis. PMID:19005170
Electrospray ionization mass spectrometry from discrete nanoliter-sized sample volumes.
Ek, Patrik; Stjernström, Mårten; Emmer, Asa; Roeraade, Johan
2010-09-15
We describe a method for nanoelectrospray ionization mass spectrometry (nESI-MS) of very small sample volumes. Nanoliter-sized sample droplets were taken up by suction into a nanoelectrospray needle from a silicon microchip prior to ESI. To avoid rapid evaporation of the small sample volumes, all manipulation steps were performed under a cover of fluorocarbon liquid. Sample volumes down to 1.5 nL were successfully analyzed, and an absolute limit of detection of 105 attomole of insulin (chain B, oxidized) was obtained. The open access to the sample droplets on the silicon chip makes it possible to add reagents to the droplets and perform chemical reactions over an extended period of time. This was demonstrated in an example where we performed a tryptic digestion of cytochrome C in a nanoliter-sized sample volume for 2.5 h, followed by monitoring the outcome of the reaction with nESI-MS. The technology was also utilized for tandem mass spectrometry (MS/MS) sequencing analysis of a 2 nL solution of angiotensin I. PMID:20740531
Gutenberg-Richter b-value maximum likelihood estimation and sample size
NASA Astrophysics Data System (ADS)
Nava, F. A.; Márquez-Ramírez, V. H.; Zúñiga, F. R.; Ávila-Barrientos, L.; Quinteros, C. B.
2016-06-01
The Aki-Utsu maximum likelihood method is widely used for estimation of the Gutenberg-Richter b-value, but not all authors are conscious of the method's limitations and implicit requirements. The Aki-Utsu method requires a representative estimate of the population mean magnitude, a requirement seldom satisfied in b-value studies, particularly in those that use data from small geographic and/or time windows, such as b-mapping and b-vs-time studies. Monte Carlo simulation methods are used to determine how large a sample is necessary to achieve representativity, particularly for rounded magnitudes. The size of a representative sample depends only weakly on the actual b-value. It is shown that, for commonly used precisions, small samples give meaningless estimations of b. Our results estimate the probability of obtaining a correct estimate of b at a given desired precision for samples of different sizes. We submit that all published studies reporting b-value estimations should include information about the size of the samples used.
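A Monte Carlo check of the kind this abstract describes can be sketched in a few lines: draw Gutenberg-Richter magnitudes, round them to 0.1-magnitude bins, and apply the Aki-Utsu estimator b̂ = log₁₀(e)/(⟨M⟩ − (Mc − ΔM/2)), where Mc is the smallest bin center. The cutoff and bin width below are illustrative choices, not values from the paper.

```python
import math
import random

def b_estimate(mags, m_c, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value, with Utsu's correction for
    magnitudes rounded to bins of width dm (m_c = smallest bin center)."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2))

def simulate(n, b=1.0, m_c=3.0, dm=0.1, seed=1):
    """Draw n Gutenberg-Richter magnitudes above the continuous cutoff
    m_c - dm/2, round them to dm bins, and estimate b."""
    rng = random.Random(seed)
    beta = b * math.log(10)  # G-R law: exponential with rate b*ln(10)
    mags = [round((m_c - dm / 2 + rng.expovariate(beta)) / dm) * dm
            for _ in range(n)]
    return b_estimate(mags, m_c, dm)

print(simulate(30))     # small sample: noisy estimate
print(simulate(10000))  # large sample: close to the true b = 1.0
```

Repeating `simulate` over many seeds at each sample size reproduces the abstract's central point: small samples scatter widely around the true b.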
Estimating the Correlation in Bivariate Normal Data with Known Variances and Small Sample Sizes
Fosdick, Bailey K.; Raftery, Adrian E.
2013-01-01
We consider the problem of estimating the correlation in bivariate normal data when the means and variances are assumed known, with emphasis on the small sample case. We consider eight different estimators, several of them considered here for the first time in the literature. In a simulation study, we found that Bayesian estimators using the uniform and arc-sine priors outperformed several empirical and exact or approximate maximum likelihood estimators in small samples. The arc-sine prior did better for large values of the correlation. For testing whether the correlation is zero, we found that Bayesian hypothesis tests outperformed significance tests based on the empirical and exact or approximate maximum likelihood estimators considered in small samples, but that all tests performed similarly for sample size 50. These results lead us to suggest using the posterior mean with the arc-sine prior to estimate the correlation in small samples when the variances are assumed known. PMID:23378667
Tsai, Pei-Chien; Bell, Jordana T
2015-01-01
Background: Epigenome-wide association scans (EWAS) are under way for many complex human traits, but EWAS power has not been fully assessed. We investigate power of EWAS to detect differential methylation using case-control and disease-discordant monozygotic (MZ) twin designs with genome-wide DNA methylation arrays. Methods and Results: We performed simulations to estimate power under the case-control and discordant MZ twin EWAS study designs, under a range of epigenetic risk effect sizes and conditions. For example, to detect a 10% mean methylation difference between affected and unaffected subjects at a genome-wide significance threshold of P = 1 × 10−6, 98 MZ twin pairs were required to reach 80% EWAS power, and 112 cases and 112 controls were needed in the case-control design. We also estimated the minimum sample size required to reach 80% EWAS power under both study designs. Our analyses highlighted several factors that significantly influenced EWAS power, including sample size, epigenetic risk effect size, the variance of DNA methylation at the locus of interest and the correlation in DNA methylation patterns within the twin sample. Conclusions: We provide power estimates for array-based DNA methylation EWAS under case-control and disease-discordant MZ twin designs, and explore multiple factors that impact on EWAS power. Our results can help guide EWAS experimental design and interpretation for future epigenetic studies. PMID:25972603
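The case-control power calculation in this abstract can be approximated analytically rather than by simulation: for a two-group comparison of mean methylation, power ≈ Φ(Δ/(σ√(2/n)) − z₁₋α/₂). The per-group methylation SD below (0.13) is an assumption chosen for illustration, since the abstract does not report it; the authors' simulations model additional factors.

```python
from math import sqrt
from statistics import NormalDist

def ewas_power(n_per_group, delta, sd, alpha=1e-6):
    """Approximate power of a two-sided test for a mean methylation
    difference `delta` between two groups of size n_per_group, assuming
    per-group standard deviation `sd` (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # genome-wide threshold
    ncp = delta / (sd * sqrt(2.0 / n_per_group))   # noncentrality
    return NormalDist().cdf(ncp - z_alpha)

# 10% mean difference at genome-wide alpha = 1e-6, assumed SD = 0.13
print(ewas_power(112, 0.10, 0.13))
```

With these assumed inputs the formula lands near 80% power at 112 per group, illustrating how strongly the (often locus-specific) methylation variance drives the required sample size.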
Vertical grain size distribution in dust devils: Analyses of in situ samples from southern Morocco
NASA Astrophysics Data System (ADS)
Raack, J.; Reiss, D.; Ori, G. G.; Taj-Eddine, K.
2014-04-01
Dust devils are vertical convective vortices occurring on Earth and Mars [1]. Entrained particles such as dust and sand make dust devils visible [1]. On Earth, finer particles (<~50 μm) can be entrained into the boundary layer and transported over long distances [e.g., 2]. The lifetime of entrained particles in the atmosphere depends on their size; smaller particles remain in the atmosphere longer [3]. Mineral aerosols such as desert dust are important for human health, weather, climate, and biogeochemistry [4]. The entrainment of dust particles by dust devils and the vertical grain size distribution within them are not well constrained. In situ grain size samples from active dust devils have so far been obtained by [5,6,7] on three different continents: Africa, Australia, and North America, respectively. In this study we report on in situ samples collected directly from active dust devils in the Sahara Desert (Erg Chegaga) in southern Morocco in 2012, in order to characterize the vertical grain size distribution within dust devils.
Sample size calculation for recurrent events data in one-arm studies.
Rebora, Paola; Galimberti, Stefania
2012-01-01
In some exceptional circumstances, as in very rare diseases, nonrandomized one-arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one-arm trials with a recurrent event endpoint, we propose here a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one-sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula in managing a situation with heterogeneity of event rates, both in time and between patients, and time-varying treatment effect was demonstrated with exhaustive simulation studies. Moreover, although the method requires the specification of a process for events generation, it seems to be robust under erroneous definition of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is represented by a nonrandomized one-arm study on gene therapy in a very rare immunodeficiency in children (ADA-SCID), where a major endpoint is the recurrence of severe infections. PMID:23024035
Estimating the Size of Populations at High Risk for HIV Using Respondent-Driven Sampling Data
Handcock, Mark S.; Gile, Krista J.; Mar, Corinne M.
2015-01-01
Summary The study of hard-to-reach populations presents significant challenges. Typically, a sampling frame is not available, and population members are difficult to identify or recruit from broader sampling frames. This is especially true of populations at high risk for HIV/AIDS. Respondent-driven sampling (RDS) is often used in such settings with the primary goal of estimating the prevalence of infection. In such populations, the number of people at risk for infection and the number of people infected are of fundamental importance. This article presents a case-study of the estimation of the size of the hard-to-reach population based on data collected through RDS. We study two populations of female sex workers and men-who-have-sex-with-men in El Salvador. The approach is Bayesian and we consider different forms of prior information, including using the UNAIDS population size guidelines for this region. We show that the method is able to quantify the amount of information on population size available in RDS samples. As separate validation, we compare our results to those estimated by extrapolating from a capture–recapture study of El Salvadorian cities. The results of our case-study are largely comparable to those of the capture–recapture study when they differ from the UNAIDS guidelines. Our method is widely applicable to data from RDS studies and we provide a software package to facilitate this. PMID:25585794
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
A Novel Size-Selective Airborne Particle Sampling Instrument (Wras) for Health Risk Evaluation
NASA Astrophysics Data System (ADS)
Gnewuch, H.; Muir, R.; Gorbunov, B.; Priest, N. D.; Jackson, P. R.
Health risks associated with inhalation of airborne particles are known to be influenced by particle size. A reliable, size-resolving sampler, classifying particles in sizes ranging from 2 nm to 30 μm and suitable for use in the field, would be beneficial in investigating these risks. A review of current aerosol samplers highlighted a number of limitations. These could be overcome by combining an inertial deposition impactor with a diffusion collector in a single device. The instrument was designed for analysing mass size distributions. Calibration was carried out using a number of recognised techniques. The instrument was tested in the field by collecting size-resolved samples of lead-containing aerosols present at workplaces in factories producing crystal glass. The mass deposited on each substrate proved sufficient to be detected and measured using atomic absorption spectroscopy. Mass size distributions of lead were produced, and the proportion of lead present in the aerosol nanofraction was calculated; it varied from 10% to 70% by weight.
Chondrules in Apollo 14 samples and size analyses of Apollo 14 and 15 fines.
NASA Technical Reports Server (NTRS)
King, E. A., Jr.; Butler, J. C.; Carman, M. F.
1972-01-01
Chondrules have been observed in several breccia samples and one fines sample returned by the Apollo 14 mission. The chondrules are formed by at least three different processes that appear to be related to large impacts: (1) crystallization of shock-melted spherules and droplets; (2) rounding of rock clasts and mineral grains by abrasion in the base surge; and (3) diffusion and recrystallization around clasts in hot base surge and fall-back deposits. In the case of the Apollo 14 samples, the large impact almost certainly is the Imbrian event. Grain size analyses of undisturbed fines samples from the Apollo 14 site and from the Apollo 15 Apennine Front are almost identical, indicating that the two localities have similar meteoroid bombardment exposure ages, approximately 3.7 × 10⁹ yr. This observation is consistent with the interpretation that both the Fra Mauro formation and the Apennine Front material originated as ejecta from the Imbrian event.
Multiple Approaches to Down Sizing of the Lunar Sample Return Collection
NASA Technical Reports Server (NTRS)
Lofgren, Gary E.; Horz, F.
2010-01-01
Future lunar missions are planned for at least 7 days, significantly longer than the 3 days of the later Apollo missions. The last of those missions, Apollo 17, returned 111 kg of samples plus another 20 kg of containers. The current Constellation program requirement for science return weight is 100 kg, with the hope of raising that limit to nearly 250 kg including containers and other non-geological materials. The estimated return weight for rock and soil samples will, at best, be about 175 kg. One method proposed to accomplish down-sizing of the collection is the use of a Geo-Lab in the lunar habitat to complete a preliminary examination of selected samples and facilitate prioritizing the return samples.
Sample size calculation for the Wilcoxon-Mann-Whitney test adjusting for ties.
Zhao, Yan D; Rahardja, Dewi; Qu, Yongming
2008-02-10
In this paper we study sample size calculation methods for the asymptotic Wilcoxon-Mann-Whitney test for data with or without ties. The existing methods are applicable either to data with ties or to data without ties but not to both cases. While the existing methods developed for data without ties perform well, the methods developed for data with ties have limitations in that they are either applicable to proportional odds alternatives or have computational difficulties. We propose a new method which has a closed-form formula and therefore is very easy to calculate. In addition, the new method can be applied to both data with or without ties. Simulations have demonstrated that the new sample size formula performs very well as the corresponding actual powers are close to the nominal powers. PMID:17487941
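The paper's new closed-form, tie-adjusted formula is not reproduced in the abstract; as a point of reference, Noether's classic approximation for the Wilcoxon-Mann-Whitney test is sketched below. It needs only p' = P(X < Y) under the alternative and the allocation fraction, and is a generic approximation rather than the authors' method.

```python
from math import ceil
from statistics import NormalDist

def wmw_total_n(p, alpha=0.05, power=0.80, frac=0.5):
    """Noether's approximate total sample size for the two-sided
    Wilcoxon-Mann-Whitney test, where p = P(X < Y) under the
    alternative and frac is the fraction allocated to group 1."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(z ** 2 / (12 * frac * (1 - frac) * (p - 0.5) ** 2))

print(wmw_total_n(0.6))  # total N for P(X < Y) = 0.6, equal groups
print(wmw_total_n(0.7))  # a larger shift needs far fewer subjects
```

Because the required N scales as (p − 0.5)⁻², a modest shift of p from 0.6 to 0.7 cuts the total sample size by roughly a factor of four.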
Grain size analysis and high frequency electrical properties of Apollo 15 and 16 samples
NASA Technical Reports Server (NTRS)
Gold, T.; Bilson, E.; Yerbury, M.
1973-01-01
The particle size distribution of eleven surface fines samples collected by Apollo 15 and 16 was determined by measuring the sedimentation rate in a column of water. The grain size distribution in the core samples shows significant differences within a few centimeters of depth, which is important for understanding the surface transportation processes responsible for the deposition of thin layers of different physical and/or chemical origin. The variation of the absorption length with density is plotted; the results indicate that, for meter-wavelength radar waves, reflections from depths of more than 100 meters generally contribute significantly to the radar echoes obtained.
Sample size for estimating the mean concentration of organisms in ballast water.
Costa, Eliardo G; Lopes, Rubens M; Singer, Julio M
2016-09-15
We consider the computation of sample sizes for estimating the mean concentration of organisms in ballast water. Given the possible heterogeneity of their distribution in the tank, we adopt a negative binomial model to obtain confidence intervals for the mean concentration. We show that the results obtained by Chen and Chen (2012) in a different set-up hold for the proposed model and use them to develop algorithms to compute sample sizes both in cases where the mean concentration is known to lie in some bounded interval or where there is no information about its range. We also construct simple diagrams that may be easily employed to decide for compliance with the D-2 regulation of the International Maritime Organization (IMO). PMID:27266648
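The abstract's negative binomial setup can be illustrated with a simple precision-based rule (a generic normal-approximation sketch, not the authors' algorithm): with counts of variance mean + mean²/k, the relative variance of the sample mean is (1/mean + 1/k)/n, so n follows from a target relative half-width of the confidence interval. The mean concentration and dispersion values below are illustrative.

```python
from math import ceil
from statistics import NormalDist

def nb_sample_size(mean_conc, k, rel_err=0.2, conf=0.95):
    """Number of fixed-volume samples needed so that the normal-
    approximation CI half-width for the mean organism concentration
    is within rel_err of the mean. Counts are negative binomial with
    dispersion k (variance = mean + mean**2 / k)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # relative variance of the sample mean: (1/mean + 1/k) / n
    return ceil((z / rel_err) ** 2 * (1 / mean_conc + 1 / k))

print(nb_sample_size(10, 2))  # e.g. mean of 10 organisms per unit volume
```

The 1/k term shows why heterogeneity in the tank matters: strong overdispersion (small k) dominates the required sample size even when the mean concentration is high.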
The inertial and electrical effects on aerosol sampling, charging, and size distribution
Wang, Chuenchung.
1991-01-01
An experimental study was conducted to investigate the effect of particle inertia on deposition behavior near the filter cassette sampler. Field sampling cassettes were tested in a subsonic wind tunnel at wind speeds of 0.2, 0.5, and 0.68 m/s to simulate indoor air environments. Fluorescein aerosols of 2 and 5 μm were generated with a Berglund-Liu vibrating orifice generator as the test material. Sampling tests examined particle size, wind speed, suction velocity, and sampler orientation to evaluate their combined effects, and sampling efficiencies were determined. Electrostatic force is commonly used as an effective means of removing, classifying, and separating aerosols according to the electrical mobility of the particulates. However, existing aerosol charging theories differ in the ultrafine size range and need experimental verification, and the present TSI electrostatic aerosol analyzer has a particle-loss problem and cannot be relied on to achieve efficient charging. A new unipolar charger with associated electronic circuits was therefore designed, constructed, and tested. The performance of the charger was evaluated in terms of particle loss, uncharged particles, and the collection efficiency of the precipitator, and the results were compared with other investigators' data. The log-Beta distribution function is considered more versatile for representing size distributions. This study discusses methods for determining its size parameters under different conditions and evaluates how the size distribution changes when particles undergo coagulation or classification. Comparisons of the evolution of log-Beta and lognormal distributions were made.
Muhm, J M; Olshan, A F
1989-01-01
A program for the Hewlett Packard 41 series programmable calculator that determines sample size, power, and least detectable relative risk for comparative studies with independent groups is described. The user may specify any ratio of cases to controls (or exposed to unexposed subjects) and, if calculating least detectable relative risks, may specify whether the study is a case-control or cohort study. PMID:2910062
On the validity of the Poisson assumption in sampling nanometer-sized aerosols
Damit, Brian E; Wu, Dr. Chang-Yu; Cheng, Mengdawn
2014-01-01
A Poisson process is traditionally believed to apply to the sampling of aerosols. For a constant aerosol concentration, it is assumed that a Poisson process describes the fluctuation in the measured concentration because aerosols are stochastically distributed in space. Recent studies, however, have shown that sampling of micrometer-sized aerosols has non-Poissonian behavior with positive correlations. The validity of the Poisson assumption for nanometer-sized aerosols has not been examined and thus was tested in this study. Its validity was tested for four particle sizes - 10 nm, 25 nm, 50 nm and 100 nm - by sampling from indoor air with a DMA-CPC setup to obtain a time series of particle counts. Five metrics were calculated from the data: pair-correlation function (PCF), time-averaged PCF, coefficient of variation, probability of measuring a concentration at least 25% greater than average, and posterior distributions from Bayesian inference. To identify departures from Poissonian behavior, these metrics were also calculated for 1,000 computer-generated Poisson time series with the same mean as the experimental data. For nearly all comparisons, the experimental data fell within the range of 80% of the Poisson-simulation values. Essentially, the metrics for the experimental data were indistinguishable from a simulated Poisson process. The greater influence of Brownian motion for nanometer-sized aerosols may explain the Poissonian behavior observed for smaller aerosols. Although the Poisson assumption was found to be valid in this study, it must be carefully applied, as the results here do not definitively prove applicability in all sampling situations.
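The Poisson benchmark the study simulates against is easy to reproduce: for a Poisson process the coefficient of variation of the counts equals 1/√mean. A stdlib-only sketch (the mean of 50 counts per interval is an arbitrary illustrative choice, not a value from the study):

```python
import random
from math import exp, sqrt
from statistics import mean, stdev

random.seed(42)

def poisson_sample(lam):
    """Draw one Poisson variate via Knuth's product-of-uniforms method."""
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Synthetic stand-in for a CPC count record: 1,000 intervals at a constant
# mean of 50 counts per interval.
counts = [poisson_sample(50.0) for _ in range(1000)]
cv = stdev(counts) / mean(counts)   # coefficient of variation
expected_cv = 1 / sqrt(50.0)        # Poisson prediction: 1/sqrt(mean)
```

A measured CV consistently above this prediction would signal the positively correlated, non-Poissonian behavior reported for micrometer-sized aerosols.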
NASA Technical Reports Server (NTRS)
Hirleman, E. D.; Oechsle, V.; Chigier, N. A.
1984-01-01
The response characteristics of laser diffraction particle sizing instruments were studied theoretically and experimentally. In particular, the extent of the optical sample volume and the effects of receiving lens properties were investigated in detail. The experimental work was performed with a particle size analyzer using a calibration reticle containing a two-dimensional array of opaque circular disks on a glass substrate. The calibration slide simulated the forward-scattering characteristics of a Rosin-Rammler droplet size distribution. The reticle was analyzed with collection lenses of 63 mm, 100 mm, and 300 mm focal lengths using scattering inversion software that determined best-fit Rosin-Rammler size distribution parameters. The data differed from the predicted response for the reticle by about 10 percent. A set of calibration factors for the detector elements was determined that corrected for the nonideal response of the instrument. The response of the instrument was also measured as a function of reticle position, and the results confirmed a theoretical optical sample volume model presented here.
Živković, Daniel; Steinrücken, Matthias; Song, Yun S.; Stephan, Wolfgang
2015-01-01
Advances in empirical population genetics have made apparent the need for models that simultaneously account for selection and demography. To address this need, we here study the Wright–Fisher diffusion under selection and variable effective population size. In the case of genic selection and piecewise-constant effective population sizes, we obtain the transition density by extending a recently developed method for computing an accurate spectral representation for a constant population size. Utilizing this extension, we show how to compute the sample frequency spectrum in the presence of genic selection and an arbitrary number of instantaneous changes in the effective population size. We also develop an alternate, efficient algorithm for computing the sample frequency spectrum using a moment-based approach. We apply these methods to answer the following questions: If neutrality is incorrectly assumed when there is selection, what effects does it have on demographic parameter estimation? Can the impact of negative selection be observed in populations that undergo strong exponential growth? PMID:25873633
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26914402
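The r-power defined above can be estimated by the Monte Carlo strategy the authors compare against. A toy stdlib-only sketch using a single-step Bonferroni procedure on independent normal endpoints (this is an illustrative simplification, not the analytic formulas of the rPowerSampleSize package; all parameter values are invented):

```python
import random
from statistics import NormalDist

random.seed(1)

def r_power_bonferroni(n, effect_sizes, r, alpha=0.05, n_sim=2000):
    """Monte Carlo r-power: P(reject at least r false nulls) under a
    single-step Bonferroni procedure with independent normal endpoints.
    effect_sizes are standardized per-subject effects; test statistics
    are modeled as N(d * sqrt(n), 1)."""
    m = len(effect_sizes)
    z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))  # Bonferroni cutoff
    hits = 0
    for _ in range(n_sim):
        rejections = sum(
            1 for d in effect_sizes
            if abs(random.gauss(d * n ** 0.5, 1)) > z_crit
        )
        if rejections >= r:
            hits += 1
    return hits / n_sim

# Power to reject at least 2 of 3 endpoints, each with effect 0.25 SD, n = 200
power = r_power_bonferroni(n=200, effect_sizes=[0.25, 0.25, 0.25], r=2)
```

Inverting such a simulation over n (increasing n until the estimated r-power reaches the target) is the brute-force counterpart of the closed-form sample-size determination the paper derives.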
Jiang, Shengyu; Wang, Chun; Weiss, David J
2016-01-01
Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root mean square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916
A contemporary decennial global Landsat sample of changing agricultural field sizes
NASA Astrophysics Data System (ADS)
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human-induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite-based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provides the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by
Sub-sampling genetic data to estimate black bear population size: A case study
Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.
2007-01-01
Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.
NASA Astrophysics Data System (ADS)
Ge, Yunfei; Zhang, Yuan; Booth, Jamie A.; Weaver, Jonathan M. R.; Dobson, Phillip S.
2016-08-01
We report a method for quantifying scanning thermal microscopy (SThM) probe–sample thermal interactions in air using a novel temperature calibration device. This new device has been designed, fabricated and characterised using SThM to provide an accurate and spatially variable temperature distribution that can be used as a temperature reference due to its unique design. The device was characterised by means of a microfabricated SThM probe operating in passive mode. These data were interpreted using a heat transfer model, built to describe the thermal interactions during a SThM thermal scan. This permitted the thermal contact resistance between the SThM tip and the device to be determined as 8.33 × 10⁵ K W⁻¹. It also permitted the probe–sample contact radius to be clarified as being the same size as the probe's tip radius of curvature. Finally, the data were used in the construction of a lumped-system steady state model for the SThM probe and its potential applications were addressed. PMID:27363896
Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes
NASA Astrophysics Data System (ADS)
Crowell, J. "; Gosnold, W. D.
2012-12-01
Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column, and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers, using the basic relationship K_sample = K_standard × (ΔT1 + ΔT2) / (2 × ΔT_sample). Sometimes samples are not large enough, or not of the correct proportions, to match the surface of the heat sink/source; however, using the equations presented here the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratory's stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of -20 °C to 150 °C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3,000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks with a surface area of 1,772 mm² (2.74 in²). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4-inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: K_sample = (K_standard × A_standard × Th_sample × (ΔT1 + ΔT3)) / (ΔT_sample × A_sample × 2 × Th_standard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface
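The adjusted divided-bar relation in the abstract is a one-liner to compute. In this sketch only the 1,772 mm² heat sink/source area and 6 mm standard thickness come from the abstract; the standard's conductivity, sample thickness, and temperature drops are hypothetical:

```python
def k_sample(k_std, a_std, a_sample, th_std, th_sample, dt1, dt3, dt_sample):
    """Adjusted divided-bar relation from the abstract:
    K_sample = K_standard * A_standard * Th_sample * (dT1 + dT3)
               / (dT_sample * A_sample * 2 * Th_standard)."""
    return (k_std * a_std * th_sample * (dt1 + dt3)) / (
        dt_sample * a_sample * 2 * th_std)

# Hypothetical readings: polycarbonate standard (~0.20 W m^-1 K^-1),
# a full-size 25 mm thick sample, invented temperature drops.
k = k_sample(k_std=0.20, a_std=1772e-6, a_sample=1772e-6,
             th_std=6e-3, th_sample=25e-3,
             dt1=4.0, dt3=4.0, dt_sample=2.5)   # ~1.33 W m^-1 K^-1
```

With equal areas the area terms cancel and the relation reduces to the basic equation scaled by the thickness ratio, which is why undersized samples (down to 50% of the sink/source area) can still be measured once A_sample is tracked explicitly.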
NASA Astrophysics Data System (ADS)
Jha, Anjani K.
Particulate materials are routinely handled in large quantities by industries such as agriculture, electronics, ceramics, chemicals, cosmetics, fertilizer, food, nutraceuticals, pharmaceuticals, power, and powder metallurgy. These industries encounter segregation due to differences in the physical and mechanical properties of particulates. The general goal of this research was to study percolation segregation in multi-size and multi-component particulate mixtures, especially its measurement, sampling, and modeling. A second-generation primary segregation shear cell (PSSC-II), an industrial vibrator, a true cubical triaxial tester, and two samplers (triers) were used as the primary test apparatuses for quantifying segregation and flowability, and for understanding and proposing strategies to mitigate segregation in particulates. Toward this end, percolation segregation in binary, ternary, and quaternary size mixtures of two particulate types, urea (spherical) and potash (angular), was studied. Three coarse size ranges, 3,350-4,000 µm (mean size = 3,675 µm), 2,800-3,350 µm (3,075 µm), and 2,360-2,800 µm (2,580 µm), and three fines size ranges, 2,000-2,360 µm (2,180 µm), 1,700-2,000 µm (1,850 µm), and 1,400-1,700 µm (1,550 µm), were selected for tests of the angular-shaped and spherical-shaped particles. Because the 1,550 µm fines size of urea was not available in sufficient quantity, it was not included in tests. Percolation segregation in fertilizer bags was also tested at two vibration frequencies of 5 Hz and 7 Hz. The segregation and flowability of binary mixtures of urea under three equilibrium relative humidities (40%, 50%, and 60%) were also tested. Furthermore, solid fertilizer sampling was performed to compare samples obtained from triers with opening widths of 12.7 mm and 19.1 mm and to determine size segregation in blend fertilizers. Based on experimental results, the normalized segregation rate (NSR) of binary mixtures was dependent on size ratio, mixing ratio
NASA Astrophysics Data System (ADS)
Lin, Rongsheng; Burke, David T.; Burns, Mark A.
2004-03-01
In recent years, there has been tremendous interest in developing highly integrated DNA analysis systems using microfabrication techniques. With the success of incorporating sample injection, reaction, separation, and detection onto a monolithic silicon device, the addition of otherwise time-consuming macro-scale components such as sample preparation is gaining more and more attention. In this paper, we designed and fabricated a miniaturized device capable of separating a size-fractionated DNA sample and extracting the band of interest. To obtain a pure target band, a novel technique utilizing a shaped electric field is demonstrated. Theoretical analysis and experimental data show close agreement in designing appropriate electrode structures to achieve the desired electric field distribution. This technique has a very simple fabrication procedure and can readily be combined with other existing components to realize a highly integrated "lab-on-a-chip" system for DNA analysis.
Forest inventory using multistage sampling with probability proportional to size. [Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.
1984-01-01
A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified using a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level) the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.
Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size
NASA Astrophysics Data System (ADS)
Shaghaghi, Mahdi; Vorobyov, Sergiy A.
2015-06-01
Classical methods of DOA estimation such as the MUSIC algorithm are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can largely deviate from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves the performance by modifying the sample covariance matrix such that the amount of the subspace leakage is reduced. Furthermore, we introduce a phenomenon termed root-swap, which occurs in the root-MUSIC algorithm in the low sample size region and degrades the performance of the DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve the performance.
Xiang, Jianping; Yu, Jihnhee; Snyder, Kenneth V.; Levy, Elad I.; Siddiqui, Adnan H.; Meng, Hui
2016-01-01
Background We previously established three logistic regression models for discriminating intracranial aneurysm rupture status based on morphological and hemodynamic analysis of 119 aneurysms (Stroke. 2011;42:144–152). In this study we tested if these models would remain stable with increasing sample size and investigated sample sizes required for various confidence levels. Methods We augmented our previous dataset of 119 aneurysms into a new dataset of 204 samples by collecting additional 85 consecutive aneurysms, on which we performed flow simulation and calculated morphological and hemodynamic parameters as done previously. We performed univariate significance tests of these parameters, and on the significant parameters we performed multivariate logistic regression. The new regression models were compared against the original models. Receiver operating characteristics analysis was applied to compare the performance of regression models. Furthermore, we performed regression analysis based on bootstrapping resampling statistical simulations to explore how many aneurysm cases were required to generate stable models. Results Univariate tests of the 204 aneurysms generated an identical list of significant morphological and hemodynamic parameters as previously from analysis of 119 cases. Furthermore, multivariate regression analysis produced three parsimonious predictive models that were almost identical to the previous ones; with model coefficients that had narrower confidence intervals than the original ones. Bootstrapping showed that 10%, 5%, 2%, and 1% convergence levels of confidence interval required 120, 200, 500, and 900 aneurysms, respectively. Conclusions Our original hemodynamic-morphological rupture prediction models are stable and improve with increasing sample size. Results from resampling statistical simulations provide guidance for designing future large multi-population studies. PMID:25488922
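The bootstrap convergence check the aneurysm study performs generalizes beyond logistic regression: resample at a given n, refit (or recompute the statistic), and watch the confidence interval narrow as n grows. A minimal stdlib sketch on synthetic data (the statistic here is just a mean, not the study's regression coefficients):

```python
import random
from statistics import mean

random.seed(0)

def bootstrap_ci_width(data, n_boot=1000):
    """Width of the 2.5%-97.5% percentile-bootstrap interval for the mean:
    a generic stand-in for the coefficient-stability check in the study."""
    n = len(data)
    boots = sorted(mean(random.choices(data, k=n)) for _ in range(n_boot))
    return boots[int(0.975 * n_boot)] - boots[int(0.025 * n_boot)]

# Synthetic "population" of standardized measurements
population = [random.gauss(0, 1) for _ in range(5000)]
w120 = bootstrap_ci_width(population[:120])   # roughly the original 119 cases
w900 = bootstrap_ci_width(population[:900])   # the 1% convergence level above
```

The interval width shrinks roughly as 1/√n, which is why tightening the convergence level from 10% to 1% in the study required the jump from about 120 to about 900 aneurysms.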
Prediction accuracy of a sample-size estimation method for ROC studies
Chakraborty, Dev P.
2010-01-01
Rationale and Objectives Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method. Materials and Methods Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in 2 modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases) and to readers only (random-readers) were generated. A prediction-accuracy index defined as the probability that any single prediction yields true power in the range 75% to 90% was used to assess the HB method. Results For random-case generalization the HB-method prediction-accuracy was reasonable, ~ 50% for 5 readers in the pilot study. Prediction-accuracy was generally higher under low reader variability conditions (LH) than under high reader variability conditions (HL). Under ideal conditions (many readers in the pilot study) the DBM-MRMC based HB method overestimated the number of cases. The overestimates could be explained by the observed large variability of the DBM-MRMC modality-reader variance estimates, particularly when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy > 50% under all generalization conditions, but the benefit was lesser for HL where prediction accuracy was ~ 36% for 15
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F. T.; Hinde, J.; Grossman, J.N.
2005-01-01
smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
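The sample-size effect described above is easy to demonstrate with a crude skewness-based normality statistic: the same mild departure that is invisible in a few hundred points becomes overwhelming in tens of thousands. A stdlib-only sketch (the contamination model is invented for illustration):

```python
import random
from statistics import mean, stdev

random.seed(3)

def skewness_z(data):
    """Sample skewness divided by its large-sample standard error
    sqrt(6/n): a crude normality statistic, enough to show how the
    same departure scales with sample size."""
    n, m, s = len(data), mean(data), stdev(data)
    g1 = sum(((x - m) / s) ** 3 for x in data) / n
    return g1 / (6 / n) ** 0.5

# Mildly skewed data: normal noise plus a small centered exponential term
pop = [random.gauss(0, 1) + 0.5 * (random.expovariate(1.0) - 1)
       for _ in range(20_000)]
z_small = abs(skewness_z(pop[:200]))  # a few hundred points: modest statistic
z_large = abs(skewness_z(pop))        # 20,000 points: far past any cutoff
```

Because the standard error falls as 1/√n while the true skewness stays fixed, the test statistic grows without bound, which is why the geochemical database's large samples fail normality tests and graphical judgment is recommended instead.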
Zeestraten, Eva; Lambert, Christian; Chis Ster, Irina; Williams, Owen A; Lawrence, Andrew J; Patel, Bhavini; MacKinnon, Andrew D; Barrick, Thomas R; Markus, Hugh S
2016-01-01
Detecting treatment efficacy using cognitive change in trials of cerebral small vessel disease (SVD) has been challenging, making the use of surrogate markers such as magnetic resonance imaging (MRI) attractive. We determined the sensitivity of MRI to change in SVD and used this information to calculate sample size estimates for a clinical trial. Data from the prospective SCANS (St George’s Cognition and Neuroimaging in Stroke) study of patients with symptomatic lacunar stroke and confluent leukoaraiosis was used (n = 121). Ninety-nine subjects returned at one or more time points. Multimodal MRI and neuropsychologic testing was performed annually over 3 years. We evaluated the change in brain volume, T2 white matter hyperintensity (WMH) volume, lacunes, and white matter damage on diffusion tensor imaging (DTI). Over 3 years, change was detectable in all MRI markers but not in cognitive measures. WMH volume and DTI parameters were most sensitive to change and therefore had the smallest sample size estimates. MRI markers, particularly WMH volume and DTI parameters, are more sensitive to SVD progression over short time periods than cognition. These markers could significantly reduce the size of trials to screen treatments for efficacy in SVD, although further validation from longitudinal and intervention studies is required. PMID:26036939
Estimating effective population size and migration rates from genetic samples over space and time.
Wang, Jinliang; Whitlock, Michael C
2003-01-01
In the past, moment and likelihood methods have been developed to estimate the effective population size (N(e)) on the basis of the observed changes of marker allele frequencies over time, and these have been applied to a large variety of species and populations. Such methods invariably make the critical assumption of a single isolated population receiving no immigrants over the study interval. For most populations in the real world, however, migration is not negligible and can substantially bias estimates of N(e) if it is not accounted for. Here we extend previous moment and maximum-likelihood methods to allow the joint estimation of N(e) and migration rate (m) using genetic samples over space and time. It is shown that, compared to genetic drift acting alone, migration results in changes in allele frequency that are greater in the short term and smaller in the long term, leading to under- and overestimation of N(e), respectively, if it is ignored. Extensive simulations are run to evaluate the newly developed moment and likelihood methods, which yield generally satisfactory estimates of both N(e) and m for populations with widely different effective sizes and migration rates and patterns, given a reasonably large sample size and number of markers. PMID:12586728
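The single-population moment method that Wang and Whitlock extend can be written compactly using the Nei-Tajima standardized variance of allele-frequency change with the usual sampling correction. This is a simplified sketch of the classical estimator (it omits the paper's migration term, and all names are illustrative):

```python
def temporal_ne(p0, pt, t_gen, s0, st):
    """Moment estimate of effective size Ne from allele frequencies measured
    t_gen generations apart at the same loci, with s0 and st diploid
    individuals sampled at the two time points. Assumes a single isolated
    population -- exactly the assumption the joint Ne/m method relaxes."""
    fc = [  # Nei-Tajima standardized variance of allele-frequency change
        (x - y) ** 2 / ((x + y) / 2 - x * y) for x, y in zip(p0, pt)
    ]
    f_mean = sum(fc) / len(fc)
    # Subtract the sampling contribution to F, then invert drift: F ~ t/(2Ne)
    return t_gen / (2 * (f_mean - 1 / (2 * s0) - 1 / (2 * st)))

# One locus drifting from 0.5 to 0.4 over 10 generations, 50 diploids
# sampled at each time point
ne = temporal_ne([0.5], [0.4], t_gen=10, s0=50, st=50)
```

Migration pulls this estimator exactly the way the abstract describes: immigrant alleles inflate short-term frequency change (biasing Ne downward) and damp long-term change (biasing it upward).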
Saccenti, Edoardo; Timmerman, Marieke E
2016-08-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical methods, such as principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA). No simple approaches to sample size determination exist for PCA and PLS-DA. In this paper we will introduce important concepts and offer strategies for (minimally) required sample size estimation when planning experiments to be analyzed using PCA and/or PLS-DA. PMID:27322847
Distance software: design and analysis of distance sampling surveys for estimating population size
Thomas, Len; Buckland, Stephen T; Rexstad, Eric A; Laake, Jeff L; Strindberg, Samantha; Hedley, Sharon L; Bishop, Jon RB; Marques, Tiago A; Burnham, Kenneth P
2010-01-01
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modelling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modelling analysis engine for spatial and habitat modelling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the
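The core of conventional distance sampling — fit a detection function, convert it to an effective strip half-width, then scale counts to density — can be illustrated with a half-normal detection function. This is a toy simulation under invented parameters, not the Distance software's actual fitting code:

```python
import random
from math import exp, pi, sqrt

random.seed(7)

# Toy line transect: true density, transect length L, strip half-width w,
# and a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)).
sigma, w, L, density = 20.0, 100.0, 10_000.0, 0.005  # metres; animals per m^2
n_animals = int(density * 2 * w * L)                 # animals in the strip
detected = [x for x in (random.uniform(0, w) for _ in range(n_animals))
            if random.random() < exp(-x * x / (2 * sigma * sigma))]

# Half-normal maximum likelihood (w >> sigma, so truncation is negligible):
# sigma_hat^2 is the mean squared detection distance.
sigma_hat = sqrt(sum(x * x for x in detected) / len(detected))
mu = sigma_hat * sqrt(pi / 2)          # effective strip half-width
d_hat = len(detected) / (2 * mu * L)   # estimated density, animals per m^2
```

Everything beyond this core — covariates in the detection function, uncertain detection on the line, stratification, bootstrap variances — corresponds to the progressively more sophisticated engines the abstract lists.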
Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force
Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R
2008-05-22
We present a rapid and robust size-based separation method for high-throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies, including variations of the PZT driving frequencies and voltages as well as of particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2, with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.
Separability tests for high-dimensional, low sample size multivariate repeated measures data.
Simpson, Sean L; Edwards, Lloyd J; Styner, Martin A; Muller, Keith E
2014-01-01
Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within-subject correlation is believed to decrease exponentially in time and space, as is common in longitudinal imaging studies. However, the provided framework equally applies to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high dimension, low sample size data that preclude using standard likelihood-based tests. Longitudinal medical imaging data of caudate morphology in schizophrenia illustrate the approach's appeal. PMID:25342869
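The separable structure above can be sketched concretely: under a Kronecker product model, the covariance between any two space-time measurements factors into a temporal term times a spatial term. A minimal NumPy illustration (the matrices below are arbitrary stand-ins, not values from the study):

```python
import numpy as np

# Separable (Kronecker product) covariance: the covariance of a vectorized
# space-time observation factors as Sigma = T (temporal) kron S (spatial).
T = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # temporal covariance (2 time points)
S = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 0.5]])       # spatial covariance (3 locations)

Sigma = np.kron(T, S)                 # 6 x 6 covariance of the stacked data

# Entry for (time t, site s) vs (time u, site v) is T[t, u] * S[s, v]:
t, s, u, v = 0, 1, 1, 2
assert np.isclose(Sigma[t * 3 + s, u * 3 + v], T[t, u] * S[s, v])

# Parsimony: T and S together have far fewer free parameters than an
# unstructured 6 x 6 covariance matrix.
print(Sigma.shape)
```

The assertion makes the separability property explicit: every cross-covariance is a product of one temporal and one spatial entry, which is exactly what the separability tests in the abstract evaluate.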
Weighted skewness and kurtosis unbiased by sample size and Gaussian uncertainties
NASA Astrophysics Data System (ADS)
Rimoldini, Lorenzo
2014-07-01
Central moments and cumulants are often employed to characterize the distribution of data. The skewness and kurtosis are particularly useful for the detection of outliers, the assessment of departures from normally distributed data, automated classification techniques and other applications. Estimators of higher order moments that are robust against outliers are more stable but might miss characteristic features of the data, as in the case of astronomical time series exhibiting brief events like stellar bursts or eclipses of binary systems, while weighting can help identify reliable measurements from uncertain or spurious outliers. Furthermore, noise is an unavoidable part of most measurements and their uncertainties need to be taken properly into account during the data analysis or biases are likely to emerge in the results, including basic descriptive statistics. This work provides unbiased estimates of the weighted skewness and kurtosis moments and cumulants, corrected for biases due to sample size and Gaussian noise, under the assumption of independent data. A comparison of biased and unbiased weighted estimators is illustrated with simulations as a function of sample size and signal-to-noise ratio, employing different data distributions and weighting schemes related to measurement uncertainties and the sampling of the signal. Detailed derivations and figures of simulation results are presented in the Appendices available online.
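For the unweighted, noise-free case, the classic sample-size bias corrections for skewness and kurtosis are textbook results and can be sketched directly; the paper's estimators generalize these to weighted data with Gaussian uncertainties, which this sketch does not reproduce:

```python
import numpy as np

def sample_skew_kurtosis(x):
    """Skewness and excess kurtosis with the classic small-sample (Fisher)
    bias corrections; unweighted, noise-free case only."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m = x.mean()
    m2 = ((x - m) ** 2).mean()          # biased central moments
    m3 = ((x - m) ** 3).mean()
    m4 = ((x - m) ** 4).mean()
    g1 = m3 / m2 ** 1.5                 # biased skewness
    g2 = m4 / m2 ** 2 - 3.0             # biased excess kurtosis
    # Sample-size corrections:
    G1 = g1 * np.sqrt(n * (n - 1)) / (n - 2)
    G2 = ((n + 1) * g2 + 6.0) * (n - 1) / ((n - 2) * (n - 3))
    return G1, G2

rng = np.random.default_rng(0)
G1, G2 = sample_skew_kurtosis(rng.normal(size=50))
# For Gaussian data both corrected moments should be near zero.
print(round(G1, 3), round(G2, 3))
```

These are the standard n-dependent correction factors; the abstract's contribution is the analogous corrections when each point also carries a weight and a Gaussian measurement uncertainty.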
Power/sample size calculations for assessing correlates of risk in clinical efficacy trials.
Gilbert, Peter B; Janes, Holly E; Huang, Yunda
2016-09-20
In a randomized controlled clinical trial that assesses treatment efficacy, a common objective is to assess the association of a measured biomarker response endpoint with the primary study endpoint in the active treatment group, using a case-cohort, case-control, or two-phase sampling design. Methods for power and sample size calculations for such biomarker association analyses typically do not account for the level of treatment efficacy, precluding interpretation of the biomarker association results in terms of biomarker effect modification of treatment efficacy, with the detriment that the power calculations may tacitly and inadvertently assume that the treatment harms some study participants. We develop power and sample size methods accounting for this issue, and the methods also account for inter-individual variability of the biomarker that is not biologically relevant (e.g., due to technical measurement error). We focus on a binary study endpoint and on a biomarker subject to measurement error that is normally distributed or categorical with two or three levels. We illustrate the methods with preventive HIV vaccine efficacy trials and include an R package implementing the methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037797
Mayhew, T M; Sharma, A K
1984-01-01
Using the tibial nerves of diabetic rats, alternative sampling schemes have been compared for estimating the sizes of fibres in nerve trunks of mixed fascicularity. The merits of each scheme were evaluated by comparing their reliability, precision, cost in time, and efficiency with 'absolute' values obtained by first measuring every fibre. The external diameter of all myelinated fibres was measured in each of six nerves (c. 2900 fibres/nerve). Total measurement time was about 29 hours. All sampling schemes produced group means within ±4% of the absolute value of 5.52 μm. The most efficient schemes were those in which only 6% of all fibres were selected for measurement. For these the measurement time was 2 hours or less. Results are discussed in the general context of measurement of the sizes of nerve fibres. It is concluded that future studies should place more emphasis on sampling fewer fibres from more animals rather than on measuring all fibres very precisely. These considerations are likely to be of special concern to those wanting to analyse specimens with large fibre complements and those screening large numbers of specimens. PMID:6381443
Dynamic sample size detection in learning command line sequence for continuous authentication.
Traore, Issa; Woungang, Isaac; Nakkabi, Youssef; Obaidat, Mohammad S; Ahmed, Ahmed Awad E; Khalilian, Bijan
2012-10-01
Continuous authentication (CA) consists of authenticating the user repetitively throughout a session with the goal of detecting and protecting against session hijacking attacks. While the accuracy of the detector is central to the success of CA, the detection delay or length of an individual authentication period is important as well since it is a measure of the window of vulnerability of the system. However, high accuracy and small detection delay are conflicting requirements that need to be balanced for optimum detection. In this paper, we propose the use of sequential sampling technique to achieve optimum detection by trading off adequately between detection delay and accuracy in the CA process. We illustrate our approach through CA based on user command line sequence and naïve Bayes classification scheme. Experimental evaluation using the Greenberg data set yields encouraging results consisting of a false acceptance rate (FAR) of 11.78% and a false rejection rate (FRR) of 1.33%, with an average command sequence length (i.e., detection delay) of 37 commands. When using the Schonlau (SEA) data set, we obtain FAR = 4.28% and FRR = 12%. PMID:22514203
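The sequential-sampling idea of trading detection delay against accuracy can be illustrated with Wald's sequential probability ratio test over a stream of per-command match indicators. This is a hedged sketch of the general technique, not the paper's naive-Bayes detector, and the match probabilities are invented:

```python
import math

def sprt(stream, p_match_genuine=0.8, p_match_impostor=0.4,
         alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test over per-command match
    indicators (1 = the command fits the legitimate user's profile).
    Returns a decision and the number of commands consumed, i.e. the
    detection delay. All probabilities here are illustrative."""
    upper = math.log((1 - beta) / alpha)   # cross above: flag impostor
    lower = math.log(beta / (1 - alpha))   # cross below: accept genuine
    llr, n = 0.0, 0
    for n, x in enumerate(stream, start=1):
        p1 = p_match_impostor if x else 1 - p_match_impostor
        p0 = p_match_genuine if x else 1 - p_match_genuine
        llr += math.log(p1 / p0)           # accumulated evidence for "impostor"
        if llr >= upper:
            return "impostor", n
        if llr <= lower:
            return "genuine", n
    return "undecided", n

print(sprt([1] * 20))   # consistent matches: accepted as genuine quickly
print(sprt([0] * 20))   # consistent mismatches: flagged as impostor
```

Tightening alpha and beta pushes both thresholds outward, lowering FAR/FRR at the cost of a longer average command sequence, which is exactly the delay/accuracy trade-off the abstract describes.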
An In Situ Method for Sizing Insoluble Residues in Precipitation and Other Aqueous Samples
Axson, Jessica L.; Creamean, Jessie M.; Bondy, Amy L.; Capracotta, Sonja S.; Warner, Katy Y.; Ault, Andrew P.
2015-01-01
Particles are frequently incorporated into clouds or precipitation, influencing climate by acting as cloud condensation or ice nuclei, taking up coatings during cloud processing, and removing species through wet deposition. Many of these particles, particularly ice nuclei, can remain suspended within cloud droplets/crystals as insoluble residues. While previous studies have measured the soluble or bulk mass of species within clouds and precipitation, no studies to date have determined the number concentration and size distribution of insoluble residues in precipitation or cloud water using in situ methods. Herein, for the first time we demonstrate that Nanoparticle Tracking Analysis (NTA) is a powerful in situ method for determining the total number concentration, number size distribution, and surface area distribution of insoluble residues in precipitation, both of rain and melted snow. The method uses 500 μL or less of liquid sample and does not require sample modification. Number concentrations for the insoluble residues in aqueous precipitation samples ranged from 2.0–3.0(±0.3)×10⁸ particles cm⁻³, while surface area ranged from 1.8(±0.7)–3.2(±1.0)×10⁷ μm² cm⁻³. Number size distributions peaked between 133–150 nm, with both single and multi-modal character, while surface area distributions peaked between 173–270 nm. Comparison with electron microscopy of particles up to 10 μm shows that, by number, >97% of residues are <1 μm in diameter, the upper limit of the NTA. The range of concentration and distribution properties indicates that insoluble residue properties vary with ambient aerosol concentrations, cloud microphysics, and meteorological dynamics. NTA has great potential for studying the role that insoluble residues play in critical atmospheric processes. PMID:25705069
Kotze, D Johan; O'Hara, Robert B; Lehvävirta, Susanna
2012-01-01
Temporal variation in the detectability of a species can bias estimates of relative abundance if not handled correctly. For example, when effort varies in space and/or time it becomes necessary to take variation in detectability into account when data are analyzed. We demonstrate the importance of incorporating seasonality into the analysis of data with unequal sample sizes due to lost traps at a particular density of a species. A case study of count data was simulated using a spring-active carabid beetle. Traps were 'lost' randomly during high beetle activity in high abundance sites and during low beetle activity in low abundance sites. Five different models were fitted to datasets with different levels of loss. If sample sizes were unequal and a seasonality variable was not included in models that assumed the number of individuals was log-normally distributed, the models severely under- or overestimated the true effect size. Results did not improve when seasonality and number of trapping days were included in these models as offset terms; the models only performed well when the response variable was specified as following a negative binomial distribution. Finally, if seasonal variation of a species is unknown, which is often the case, seasonality can be added as a free factor, resulting in well-performing negative binomial models. Based on these results we recommend the following: (a) add sampling effort (number of trapping days in our example) to the models as an offset term; (b) if precise information is available on seasonal variation in detectability of a study object, add seasonality to the models as an offset term; (c) if information on seasonal variation in detectability is inadequate, add seasonality as a free factor; and (d) specify the response variable of count data as following a negative binomial or over-dispersed Poisson distribution. PMID:22911719
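Recommendation (a), entering sampling effort as an offset, can be sketched in its simplest form: an intercept-only Poisson model with log(trap-days) as the offset, whose maximum-likelihood rate estimate reduces to total count divided by total effort. The simulated rate and effort values below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative simulation: true capture rate of 2.0 beetles per trap-day,
# with unequal effort because traps ran 1-10 days before being "lost".
days = rng.integers(1, 11, size=200).astype(float)
counts = rng.poisson(2.0 * days)

# An intercept-only Poisson regression with offset log(days) has the
# closed-form MLE exp(beta0) = sum(counts) / sum(days):
rate_hat = counts.sum() / days.sum()

# A naive per-trap mean confounds abundance with effort:
naive_mean = counts.mean()

print(round(rate_hat, 2), round(naive_mean, 2))
```

In a full analysis the offset term log(days) would sit alongside covariates (including seasonality) in a negative binomial or over-dispersed Poisson GLM, as the abstract recommends; the closed-form case above just shows why the offset converts raw counts into a rate.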
Particulate sizing and emission indices for a jet engine exhaust sampled at cruise
NASA Astrophysics Data System (ADS)
Hagen, D.; Whitefield, P.; Paladino, J.; Trueblood, M.; Lilenfeld, H.
Particle size and emission indices measurements for jet engines, primarily the Rolls Royce RB211 engines on a NASA 757 aircraft, are reported. These data were used to estimate the fraction of fuel sulfur that was converted to particulates. These measurements were made in situ, with the sampling aircraft several kilometers behind the source. Some complementary ground measurements on the same source aircraft and engines are also reported. Significant differences are seen between the ground observations and the in situ observations, indicating that plume processes are changing the aerosol's characteristics.
Reagan, Jennifer K; Selmic, Laura E; Garrett, Laura D; Singh, Kuldeep
2016-09-01
OBJECTIVE To evaluate effects of anatomic location, histologic processing, and sample size on shrinkage of excised canine skin samples. SAMPLE Skin samples from 15 canine cadavers. PROCEDURES Elliptical samples of the skin, underlying subcutaneous fat, and muscle fascia were collected from the head, hind limb, and lumbar region of each cadaver. Two samples (10 mm and 30 mm) were collected at each anatomic location of each cadaver (one from the left side and the other from the right side). Measurements of length, width, depth, and surface area were collected prior to excision (P1) and after fixation in neutral-buffered 10% formalin for 24 to 48 hours (P2). Length and width were also measured after histologic processing (P3). RESULTS Length and width decreased significantly at all anatomic locations and for both sample sizes at each processing stage. Hind limb samples had the greatest decrease in length, compared with results for samples obtained from other locations, across all processing stages for both sample sizes. The 30-mm samples had a greater percentage change in length and width between P1 and P2 than did the 10-mm samples. Histologic processing (P2 to P3) had a greater effect on the percentage shrinkage of 10-mm samples. For all locations and both sample sizes, percentage change between P1 and P3 ranged from 24.0% to 37.7% for length and 18.0% to 22.8% for width. CONCLUSIONS AND CLINICAL RELEVANCE Histologic processing, anatomic location, and sample size affected the degree of shrinkage of a canine skin sample from excision to histologic assessment. PMID:27580116
Power analysis and sample size estimation for sequence-based association studies
Wang, Gao T.; Li, Biao; Lyn Santos-Cortez, Regie P.; Peng, Bo; Leal, Suzanne M.
2014-01-01
Motivation: Statistical methods have been developed to test for complex trait rare variant (RV) associations, in which variants are aggregated across a region, which is typically a gene. Power analysis and sample size estimation for sequence-based RV association studies are challenging because of the necessity to realistically model the underlying allelic architecture of complex diseases within a suitable analytical framework to assess the performance of a variety of RV association methods in an unbiased manner. Summary: We developed SEQPower, a software package to perform statistical power analysis for sequence-based association data under a variety of genetic variant and disease phenotype models. It aids epidemiologists in determining the best study design, sample size and statistical tests for sequence-based association studies. It also provides biostatisticians with a platform to fairly compare RV association methods and to validate and assess novel association tests. Availability and implementation: The SEQPower program, source code, multi-platform executables, documentation, list of association tests, examples and tutorials are available at http://bioinformatics.org/spower. Contact: sleal@bcm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24778108
Berlinger, B; Bugge, M D; Ulvestad, B; Kjuus, H; Kandler, K; Ellingsen, D G
2015-12-01
Air samples were collected by personal sampling with five stage Sioutas cascade impactors and respirable cyclones in parallel among tappers and crane operators in two manganese (Mn) alloy smelters in Norway to investigate PM fractions. The mass concentrations of PM collected by using the impactors and the respirable cyclones were critically evaluated by comparing the results of the parallel measurements. The geometric mean (GM) mass concentrations of the respirable fraction and the <10 μm PM fraction were 0.18 and 0.39 mg m⁻³, respectively. Particle size distributions were determined using the impactor data in the range from 0 to 10 μm and by stationary measurements by using a scanning mobility particle sizer in the range from 10 to 487 nm. On average 50% of the particulate mass in the Mn alloy smelters was in the range from 2.5 to 10 μm, while the rest was distributed between the lower stages of the impactors. On average 15% of the particulate mass was found in the <0.25 μm PM fraction. The comparisons of the different PM fraction mass concentrations related to different work tasks or different workplaces showed in many cases statistically significant differences; however, the particle size distribution of PM in the fraction <10 μm (aerodynamic diameter) was independent of the plant, furnace or work task. PMID:26498986
Design and sample size considerations for simultaneous global drug development program.
Huang, Qin; Chen, Gang; Yuan, Zhilong; Lan, K K Gordon
2012-09-01
Due to the potential impact of ethnic factors on clinical outcomes, the global registration of a new treatment is challenging. China and Japan often require local trials in addition to a multiregional clinical trial (MRCT) to support the efficacy and safety claim of the treatment. The impact of ethnic factors on the treatment effect has been intensively investigated and discussed from different perspectives. However, most current methods focus on assessing the consistency or similarity of the treatment effect between different ethnic groups in an exploratory manner. In this article, we propose a new method for the design and sample size consideration for a simultaneous global drug development program (SGDDP) using weighted z-tests. In the proposed method, to test the efficacy of a new treatment for the targeted ethnic (TE) group, a weighted test that combines the information collected from both the TE group and the nontargeted ethnic (NTE) group is used. The influence of ethnic factors and local medical practice on the treatment effect is accounted for by down-weighting the information collected from the NTE group in the combined test statistic. This design rigorously controls the overall false positive rate for the program at a given level. The sample sizes needed for the TE group in an SGDDP for the three most commonly used efficacy endpoints (continuous, binary, and time-to-event) are then calculated. PMID:22946950
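The combined statistic underlying such a weighted z-test can be sketched as follows; the weight value is illustrative, and the sketch omits the paper's procedure for choosing it:

```python
import math

def weighted_z(z_te, z_nte, w_nte=0.5):
    """Combine the targeted-ethnic (TE) and nontargeted-ethnic (NTE)
    z-statistics with the NTE evidence down-weighted. The normalization
    keeps the combined statistic standard normal under the null, so the
    usual critical values control the false positive rate. The weight
    value is an illustrative assumption, not the paper's choice."""
    w_te = 1.0
    return (w_te * z_te + w_nte * z_nte) / math.sqrt(w_te ** 2 + w_nte ** 2)

# w_nte = 0 reduces to a TE-only test; w_nte = 1 pools the groups equally.
print(round(weighted_z(2.0, 1.0, w_nte=0.5), 3))
```

Because any fixed weights yield a standard normal null distribution, the down-weighting shifts how much the NTE data can influence the TE claim without inflating the type I error, which is the design property the abstract emphasizes.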
Sample size calculations for micro-randomized trials in mHealth.
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A
2016-05-30
The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at the 100s or 1000s of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26707831
DRME: Count-based differential RNA methylation analysis at small sample size scenario.
Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia
2016-04-15
Differential methylation, which concerns difference in the degree of epigenetic regulation via methylation between two conditions, has been formulated as a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer at small sample size scenario with discrete reads count in sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation is significantly different from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability at small sample size scenario and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA Bisulfite sequencing and PAR-CLIP. The code together with an MeRIP-Seq dataset is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. PMID:26851340
ERIC Educational Resources Information Center
In'nami, Yo; Koizumi, Rie
2013-01-01
The importance of sample size, although widely discussed in the literature on structural equation modeling (SEM), has not been widely recognized among applied SEM researchers. To narrow this gap, we focus on second language testing and learning studies and examine the following: (a) Is the sample size sufficient in terms of precision and power of…
ERIC Educational Resources Information Center
Guo, Jiin-Huarng; Luh, Wei-Ming
2008-01-01
This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
ERIC Educational Resources Information Center
Foley, Brett Patrick
2010-01-01
The 3PL model is a flexible and widely used tool in assessment. However, it suffers from limitations due to its need for large sample sizes. This study introduces and evaluates the efficacy of a new sample size augmentation technique called Duplicate, Erase, and Replace (DupER) Augmentation through a simulation study. Data are augmented using…
ERIC Educational Resources Information Center
Shieh, Gwowen; Jan, Show-Li
2013-01-01
The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However,…
Johnson, D.R.; Bachan, L.K.
2014-01-01
Summary: In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n=58) used in the study is insufficient for making such inferences. Here we discuss and demonstrate how small sample sizes reduce the utility of this research. PMID:24340813
Wang, Y Y; Sun, R H
2016-05-10
The sample sizes of non-inferiority, equivalence and superiority designs in clinical trials were estimated by using PASS 11 software. The results were compared with those obtained by using SAS to evaluate the practicability and accuracy of PASS 11 software, for the purpose of providing a reference for sample size estimation in clinical trial design. PMID:27188375
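As a reference point for such estimates, the textbook normal-approximation sample size for a non-inferiority comparison of two proportions can be computed directly. This sketch does not reproduce PASS or SAS defaults, and the rates and margin below are invented for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
    """Per-group sample size for a non-inferiority comparison of two
    proportions (normal approximation, 1:1 allocation, one-sided alpha).
    A textbook formula of the kind PASS and SAS implement, not a
    reproduction of either package's exact defaults."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)
    effect = (p_t - p_c) + margin          # distance from the NI boundary
    return ceil((z_a + z_b) ** 2 * var / effect ** 2)

# Equal true response rates of 0.85 with a non-inferiority margin of 0.10:
print(n_noninferiority(0.85, 0.85, 0.10))
```

The same formula with `margin = 0` and a two-sided alpha gives the familiar superiority sample size, which is why small changes in the assumed rates or margin move the answer substantially, echoing the sensitivity noted in sample size texts.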
Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2016-01-01
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
ERIC Educational Resources Information Center
Hedges, Larry V.
1984-01-01
If the quantitative result of a study is observed only when the mean difference is statistically significant, the observed mean difference, variance, and effect size are biased estimators of corresponding population parameters. The exact distribution of sample effect size and the maximum likelihood estimator of effect size are derived. (Author/BW)
Plionis, Alexander A; Peterson, Dominic S; Tandon, Lav; Lamont, Stephen P
2009-01-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine if a process particle size distribution is: lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
Sample size requirements and analysis of tag recoveries for paired releases of lake trout
Elrod, Joseph H.; Frank, Anthony
1990-01-01
A simple chi-square test can be used to analyze recoveries from a paired-release experiment to determine whether differential survival occurs between two groups of fish. The sample size required for analysis is a function of (1) the proportion of fish stocked, (2) the expected proportion at recovery, (3) the level of significance (α) at which the null hypothesis is tested, and (4) the power (1−β) of the statistical test. Detection of a 20% change from a stocking ratio of 50:50 requires a sample of 172 (α=0.10; 1−β=0.80) to 459 (α=0.01; 1−β=0.95) fish. Pooling samples from replicate pairs is sometimes an appropriate way to increase statistical precision without increasing numbers stocked or sampling intensity. Summing over time is appropriate if catchability or survival of the two groups of fish does not change relative to each other through time. Twelve pairs of identical groups of yearling lake trout Salvelinus namaycush were marked with coded wire tags and stocked into Lake Ontario. Recoveries of fish at ages 2-8 showed differences of 1-14% from the initial stocking ratios. Mean tag recovery rates were 0.217%, 0.156%, 0.128%, 0.121%, 0.093%, 0.042%, and 0.016% for ages 2-8, respectively. At these rates, stocking 12,100-29,700 fish per group would yield samples of 172-459 fish at ages 2-8 combined.
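The simple chi-square test described above can be sketched directly; the 103:69 recovery split below is an invented example chosen to sit near the 172-fish sample size discussed in the abstract:

```python
import math

def chi_square_5050(recovered_a, recovered_b):
    """Chi-square test (1 df) of whether recoveries from two groups
    stocked 50:50 still fit a 50:50 ratio, as in a paired-release
    experiment with equal stocking."""
    total = recovered_a + recovered_b
    expected = total / 2.0
    chi2 = ((recovered_a - expected) ** 2
            + (recovered_b - expected) ** 2) / expected
    # With 1 df the statistic is z squared, so P(Chi2_1 > x) = erfc(sqrt(x/2)).
    p_value = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p_value

# Invented example: a 60:40 recovery split (103 vs 69) among 172 fish.
chi2, p = chi_square_5050(103, 69)
print(round(chi2, 2), round(p, 4))
```

A split this size is significant at α=0.05 but only borderline at α=0.01, consistent with the abstract's point that detecting a 20% shift at stricter α and higher power requires recoveries toward the upper end of the 172-459 range.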
Blanchin, Myriam; Hardouin, Jean-Benoit; Guillemin, Francis; Falissard, Bruno; Sébille, Véronique
2013-01-01
Background Patient-reported outcomes (PRO), which comprise all self-reported measures by the patient, are important as endpoints in clinical trials and epidemiological studies. Models from Item Response Theory (IRT) are increasingly used to analyze these particular outcomes, which bring into play a latent variable, as these outcomes cannot be directly observed. Preliminary developments have been proposed for sample size and power determination for the comparison of PRO in cross-sectional studies comparing two groups of patients when an IRT model is intended to be used for analysis. The objective of this work was to validate these developments in a large number of situations reflecting real-life studies. Methodology The method to determine the power relies on the characteristics of the latent trait and of the questionnaire (distribution of the items), the difference between the latent variable mean in each group, and the variance of this difference estimated using the Cramer-Rao bound. Different scenarios were considered to evaluate the impact of the characteristics of the questionnaire and of the variance of the latent trait on the performance of the Cramer-Rao method. The power obtained using the Cramer-Rao method was compared to simulations. Principal Findings Powers achieved with the Cramer-Rao method were close to powers obtained from simulations when the questionnaire was suitable for the studied population. Nevertheless, we have shown an underestimation of power with the Cramer-Rao method when the questionnaire was less suitable for the population. Moreover, the Cramer-Rao method remains valid regardless of the variance of the latent trait. Conclusions The Cramer-Rao method is adequate to determine the power of a test of group effect at the design stage for two-group comparison studies including patient-reported outcomes in health sciences. At the design stage, the questionnaire used to measure the intended PRO should be carefully chosen in relation to the studied
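The power computation such a variance bound feeds into can be sketched generically: given a group difference delta and the variance of its estimator (for example, from a Cramer-Rao bound), a two-sided Wald test has the power below. The numbers are illustrative, not from the study:

```python
from statistics import NormalDist

def power_group_effect(delta, var_delta, alpha=0.05):
    """Power of a two-sided Wald test of a group effect of size delta
    when the variance of its estimate (e.g. a Cramer-Rao bound) is
    var_delta. Generic normal-approximation sketch, not the paper's
    IRT-specific derivation."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    se = var_delta ** 0.5
    return nd.cdf(delta / se - z) + nd.cdf(-delta / se - z)

# An assumed latent-mean difference of 0.5 with variance 0.04 (SE = 0.2):
print(round(power_group_effect(0.5, 0.04), 3))
```

The abstract's finding that power is underestimated when the questionnaire fits the population poorly corresponds here to var_delta from the bound being larger than the variance actually achieved in simulation.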
NASA Astrophysics Data System (ADS)
Caparica, A. A.; DaSilva, Cláudio J.
2015-12-01
In this work, we discuss the behavior of the microcanonical temperature ∂S(E)/∂E obtained by means of numerical entropic sampling studies. It is observed that in almost all cases the slope of the logarithm of the density of states S(E) is not infinite in the ground state, although one would expect it to be, since it is directly related to the inverse temperature 1/T. Here, we show that these finite slopes are in fact due to finite-size effects and we propose an analytic expression a ln(bL) for the behavior of ΔS/ΔE when L → ∞. To test this idea, we use three distinct two-dimensional square lattice models presenting second-order phase transitions. We calculated by exact means the parameters a and b for the two-state Ising model and for the q = 3 and q = 4 Potts models and compared them with the results obtained by entropic sampling simulations. We found an excellent agreement between exact and numerical values. We argue that this new set of parameters a and b represents an interesting novel issue of investigation in entropic sampling studies for different models.
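The proposed finite-size form a ln(bL) is linear in ln L, so a and b can be recovered by a straight-line fit of the measured ground-state slopes against ln L. A sketch on noise-free synthetic slopes (the a and b values are invented for illustration, not the paper's fitted parameters):

```python
import numpy as np

# slope(L) = a * ln(b * L) = a * ln(L) + a * ln(b): linear in ln(L).
a_true, b_true = 0.8, 2.5
L = np.array([16, 32, 64, 128, 256], dtype=float)   # lattice sizes
slopes = a_true * np.log(b_true * L)                # stand-in measurements

coef = np.polyfit(np.log(L), slopes, 1)             # [a, a * ln(b)]
a_fit = coef[0]
b_fit = np.exp(coef[1] / a_fit)
print(round(a_fit, 3), round(b_fit, 3))
```

With real entropic sampling data the slopes carry statistical noise, so the fitted a and b would come with uncertainties, but the linearization itself is exact.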
NASA Astrophysics Data System (ADS)
Pawcenis, Dominika; Koperska, Monika A.; Milczarek, Jakub M.; Łojewski, Tomasz; Łojewska, Joanna
2014-02-01
A direct goal of this paper was to improve the methods of sample preparation and separation for analyses of the fibroin polypeptide with the use of size exclusion chromatography (SEC). The motivation for the study arises from our interest in natural polymers included in historic textile and paper artifacts, and is a logical response to the urgent need for developing rationale-based methods for materials conservation. The first step is to develop a reliable analytical tool that gives insight into fibroin structure and its changes caused by both natural and artificial ageing. To investigate the influence of preparation conditions, two sets of artificially aged samples were prepared (with and without NaCl in the sample solution) and measured by means of SEC with a multi-angle laser light scattering detector. It was shown that dialysis of fibroin dissolved in LiBr solution allows removal of the salt, which destroys the packing of chromatographic columns and prevents reproducible analyses. Salt-rich (NaCl) aqueous solutions of fibroin improved the quality of the chromatograms.
What about N? A methodological study of sample-size reporting in focus group studies
2011-01-01
Background Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals, and secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers, insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis form an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and
Determination of reference limits: statistical concepts and tools for sample size calculation.
Wellek, Stefan; Lackner, Karl J; Jennen-Steinmetz, Christine; Reinhard, Iris; Hoffmann, Isabell; Blettner, Maria
2014-12-01
Reference limits are estimators for 'extreme' percentiles of the distribution of a quantitative diagnostic marker in the healthy population. In most cases, interest will be in the 90% or 95% reference intervals. The standard parametric method of determining reference limits consists of computing quantities of the form X̅±c·S. The proportion of covered values in the underlying population coincides with the specificity obtained when a measurement value falling outside the corresponding reference region is classified as diagnostically suspect. Nonparametrically, reference limits are estimated by means of so-called order statistics. In both approaches, the precision of the estimate depends on the sample size. We present computational procedures for calculating minimally required numbers of subjects to be enrolled in a reference study. The much more sophisticated concept of reference bands replacing statistical reference intervals in case of age-dependent diagnostic markers is also discussed. PMID:25029084
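The parametric construction X̄ ± c·S described above can be sketched directly. In this sketch c is simply the 0.975 standard normal quantile for a 95% interval; in practice c is often adjusted for the estimation error in the mean and SD (tolerance-interval methods), and the data here are hypothetical.

```python
from math import sqrt

def parametric_reference_limits(values, c=1.959963984540054):
    """Parametric 95% reference interval of the form X-bar ± c*S,
    with c = z_0.975. In practice c may be enlarged to account for
    the sampling error of the mean and SD (tolerance intervals)."""
    n = len(values)
    mean = sum(values) / n
    s = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return mean - c * s, mean + c * s

# hypothetical marker values from a small healthy sample
vals = [4.0, 4.5, 5.0, 5.5, 6.0, 4.8, 5.2, 5.4, 4.6, 5.0]
lo, hi = parametric_reference_limits(vals)
print(round(lo, 3), round(hi, 3))  # interval centred on the sample mean of 5.0
```

The width of this interval shrinks only slowly with n, which is why dedicated sample-size calculations, as discussed in the abstract, are needed to guarantee adequate precision of the estimated limits.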
Sample size effect and microcompression of Mg65Cu25Gd10 metallic glass
NASA Astrophysics Data System (ADS)
Lee, C. J.; Huang, J. C.; Nieh, T. G.
2007-10-01
Micropillars with diameters of 1 and 3.8 μm were fabricated from Mg-based metallic glasses using a focused ion beam, and then tested in compression at strain rates ranging from 6×10⁻⁵ to 6×10⁻¹ s⁻¹. The apparent yield strength of the micropillars is 1342–1580 MPa, a 60%–100% increment over the bulk specimens. This strength increase can be rationalized using Weibull statistics for brittle materials, and the Weibull modulus of the Mg-based metallic glasses is estimated to be about 35. Preliminary results indicated that the number of shear bands increased with the sample size and strain rate.
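A Weibull modulus like the one quoted above is commonly estimated by linearising the Weibull distribution: ln(−ln(1−F)) plotted against ln(strength) has slope m. The sketch below (not the authors' code) uses (i − 0.5)/n plotting positions and checks itself on strengths generated exactly from a Weibull law with m = 35 and a 1500 MPa scale, values chosen to echo the abstract.

```python
from math import log

def weibull_modulus(strengths):
    """Estimate the Weibull modulus m via the usual linearisation:
    ln(-ln(1 - F_i)) = m*ln(sigma_i) - m*ln(sigma_0),
    with plotting positions F_i = (i - 0.5)/n on sorted strengths."""
    s = sorted(strengths)
    n = len(s)
    xs = [log(v) for v in s]
    ys = [log(-log(1.0 - (i + 0.5) / n)) for i in range(n)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))

# synthetic strengths placed exactly on a Weibull(m=35, sigma0=1500 MPa) grid
m_true, s0 = 35.0, 1500.0
F = [(i + 0.5) / 10 for i in range(10)]
strengths = [s0 * (-log(1.0 - f)) ** (1.0 / m_true) for f in F]
print(round(weibull_modulus(strengths), 2))  # → 35.0
```

A high modulus like 35 corresponds to a narrow strength distribution, consistent with the abstract's point that the micropillar strengths cluster tightly above the bulk values.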
21 CFR 1404.900 - Adequate evidence.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Adequate evidence. 1404.900 Section 1404.900 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 1404.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a particular...
29 CFR 98.900 - Adequate evidence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 1 2010-07-01 2010-07-01 true Adequate evidence. 98.900 Section 98.900 Labor Office of the Secretary of Labor GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 98.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a...
Hui, Tin-Yu J.; Burt, Austin
2015-01-01
The effective population size Ne is a key parameter in population genetics and evolutionary biology, as it quantifies the expected distribution of changes in allele frequency due to genetic drift. Several methods of estimating Ne have been described, the most direct of which uses allele frequencies measured at two or more time points. A new likelihood-based estimator N̂_B for contemporary effective population size using temporal data is developed in this article. The existing likelihood methods are computationally intensive and unable to handle the case when the underlying Ne is large. This article works around this problem by using a hidden Markov algorithm and applying continuous approximations to allele frequencies and transition probabilities. Extensive simulations are run to evaluate the performance of the proposed estimator N̂_B, and the results show that it is more accurate and has lower variance than previous methods. The new estimator also reduces the computational time by at least 1000-fold and relaxes the upper bound of Ne to several million, hence allowing the estimation of larger Ne. Finally, we demonstrate how this algorithm can cope with nonconstant Ne scenarios and be used as a likelihood-ratio test for the equality of Ne throughout the sampling horizon. An R package “NB” is now available for download to implement the method described in this article. PMID:25747459
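As a point of comparison with the likelihood approach in this abstract, the classical moment-based temporal estimator (Waples-style, built on the Nei-Tajima Fc statistic) can be sketched in a few lines; the allele frequencies and sample sizes below are illustrative only.

```python
def fc_statistic(p0, p1):
    """Nei-Tajima standardised variance of allele-frequency change,
    averaged over loci (one biallelic locus per list entry)."""
    fcs = [(x - y) ** 2 / ((x + y) / 2.0 - x * y) for x, y in zip(p0, p1)]
    return sum(fcs) / len(fcs)

def ne_temporal(p0, p1, t, s0, s1):
    """Moment-based temporal estimate of Ne (Waples-style):
    Ne = t / (2 * (Fc - 1/(2*s0) - 1/(2*s1))),
    with t generations between samples of s0 and s1 diploids."""
    fc = fc_statistic(p0, p1)
    return t / (2.0 * (fc - 1.0 / (2 * s0) - 1.0 / (2 * s1)))

# illustrative allele frequencies at 3 loci, sampled 10 generations apart
p_then = [0.5, 0.4, 0.6]
p_now  = [0.6, 0.5, 0.5]
print(round(ne_temporal(p_then, p_now, t=10, s0=50, s1=50), 1))  # → 250.0
```

This moment estimator is cheap but noisy with few loci, which is exactly the regime where the likelihood-based N̂_B of the abstract is claimed to improve accuracy and variance.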
Aerosol composition at Chacaltaya, Bolivia, as determined by size-fractionated sampling
NASA Astrophysics Data System (ADS)
Adams, F.; van Espen, P.; Maenhaut, W.
Thirty-four cascade-impactor samples were collected between September 1977 and November 1978 at Chacaltaya, Bolivia. The concentrations of 25 elements were measured for the six impaction stages of each sample by means of energy-dispersive X-ray fluorescence and proton-induced X-ray emission analysis. The results indicated that most elements are predominantly associated with a unimodal coarse-particle soil-dust dispersion component. Chlorine and the alkali and alkaline-earth elements also belong to this group. The anomalously enriched elements (S, Br and the heavy metals Cu, Zn, Ga, As, Se, Pb and Bi) showed a bimodal size distribution. Correlation coefficient calculations and principal component analysis indicated the presence in the submicrometer aerosol mode of an important component, containing S, K, Zn, As and Br, which may originate from biomass burning. For certain enriched elements (i.e. Zn and perhaps Cu), the coarse-particle enrichments observed may be the result of true crust-air fractionation during soil-dust dispersion.
Christen, Hans M; Okubo, Isao; Rouleau, Christopher M; Jellison Jr, Gerald Earle; Puretzky, Alexander A; Geohegan, David B; Lowndes, Douglas H
2005-01-01
Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.
Size-exclusion chromatography-based enrichment of extracellular vesicles from urine samples
Lozano-Ramos, Inés; Bancu, Ioana; Oliveira-Tercero, Anna; Armengol, María Pilar; Menezes-Neto, Armando; Del Portillo, Hernando A.; Lauzurica-Valdemoros, Ricardo; Borràs, Francesc E.
2015-01-01
Renal biopsy is the gold-standard procedure to diagnose most renal pathologies. However, this invasive method is of limited repeatability and often describes irreversible renal damage. Urine is an easily accessible fluid and urinary extracellular vesicles (EVs) may be ideal to describe new biomarkers associated with renal pathologies. Several methods to enrich EVs have been described. Most of them yield a mixture of proteins, lipoproteins and cell debris that may be masking relevant biomarkers. Here, we evaluated size-exclusion chromatography (SEC) as a suitable method to isolate urinary EVs. Following a conventional centrifugation to eliminate cell debris and apoptotic bodies, urine samples were concentrated using ultrafiltration and loaded on a SEC column. Collected fractions were analysed by protein content and flow cytometry to determine the presence of tetraspanin markers (CD63 and CD9). The highest tetraspanin content was routinely detected in fractions well before the bulk of proteins eluted. These tetraspanin-peak fractions were analysed by cryo-electron microscopy (cryo-EM) and nanoparticle tracking analysis, revealing the presence of EVs. When analysed by sodium dodecyl sulphate–polyacrylamide gel electrophoresis, tetraspanin-peak fractions from urine concentrated samples contained multiple bands but the main urine proteins (such as Tamm–Horsfall protein) were absent. Furthermore, a preliminary proteomic study of these fractions revealed the presence of EV-related proteins, suggesting their enrichment in concentrated samples. In addition, RNA profiling also showed the presence of vesicular small RNA species. To summarize, our results demonstrated that concentrated urine followed by SEC is a suitable option to isolate EVs with a low presence of soluble contaminants. This methodology could permit more accurate analyses of EV-related biomarkers when further characterized by -omics technologies compared with other approaches. PMID:26025625
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-01-01
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. PMID:26694878
Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold
2016-01-01
Objectives To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Design Retrospective observational study. Setting A Norwegian 524-bed general hospital trust. Participants 1920 medical records selected from 1 January to 31 December 2010. Primary outcomes Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. Results In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity levels of adverse events did not differ between the samples. Conclusions The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to determine whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. PMID:27113238
The Effect of Small Sample Size on Two-Level Model Estimates: A Review and Illustration
ERIC Educational Resources Information Center
McNeish, Daniel M.; Stapleton, Laura M.
2016-01-01
Multilevel models are an increasingly popular method to analyze data that originate from a clustered or hierarchical structure. To effectively utilize multilevel models, one must have an adequately large number of clusters; otherwise, some model parameters will be estimated with bias. The goals for this paper are to (1) raise awareness of the…
Rowley, R.; Lillo, Thomas Martin
2002-01-01
High densities and small grain size of alumina ceramic bodies provide higher strength and better mechanical properties than lower-density and larger-grain-size bodies. The final sintered density and grain size of slip-cast alumina samples depend greatly on the processing of the slip and the alumina powder, as well as the sintering schedule. Many different variables were explored, including initial powder particle size, slurry solids percent, amount and type of dispersant used, amount and type of binder used, and sintering schedule. Although the experimentation is not complete, to this point the sample with the highest density and smallest grain size has been an SM8/Nano mixture with Darvan C as the dispersant and polyvinyl alcohol (PVA) as the binder, with a solids loading of 70 wt% and a sintering schedule of 1500 °C for 2 hours. The resultant density was 98.81% of theoretical and the average grain size was approximately 2.5 µm.
Rapid assessment of population size by area sampling in disaster situations.
Brown, V; Jacquier, G; Coulombier, D; Balandine, S; Belanger, F; Legros, D
2001-06-01
In the initial phase of a complex emergency, an immediate population size assessment method, based on area sampling, is vital to provide relief workers with a rapid population estimate in refugee camps. In the past decade, the method has been progressively improved; six examples are presented in this paper, and questions are raised about its statistical validity as well as important issues for further research. There are two stages. The first is to map the camp by registering all of its co-ordinates. In the second stage, the total camp population is estimated by counting the population living in a limited number of square blocks of known surface area, and by extrapolating the average population calculated per block to the total camp surface. In six camps selected in Asia and Africa between 1992 and 1994, population figures were estimated within one to two days. After measuring all external limits, surfaces were calculated and ranged between 121,300 and 2,770,000 square metres. In five camps, the mean population per block was obtained using blocks of 25 by 25 metres (625 m2); in another camp, blocks of 100 by 100 metres (10,000 m2) were used. In three camps, different population density zones were defined. Total camp populations obtained were 16,800 to 113,600. Although this method is a valuable public health tool in emergency situations, it has several limitations. Issues related to population density and the number and size of blocks to be selected require further research for the method to be better validated. PMID:11434235
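The second-stage extrapolation described above reduces to simple arithmetic: mean population per sampled block, scaled by the ratio of camp area to block area. A sketch with hypothetical block counts and the smallest camp area reported in the abstract:

```python
def estimate_population(block_counts, block_area_m2, camp_area_m2):
    """Area-sampling estimate: extrapolate the mean population of the
    sampled blocks to the whole mapped camp surface."""
    mean_per_block = sum(block_counts) / len(block_counts)
    return mean_per_block * camp_area_m2 / block_area_m2

# hypothetical survey: 10 blocks of 25 m x 25 m in a 121,300 m2 camp
counts = [12, 8, 15, 10, 9, 14, 11, 7, 13, 11]
print(round(estimate_population(counts, 625, 121_300)))  # → 2135
```

The variance of this estimate depends on between-block variability, which is why the abstract flags the number and size of blocks, and density zoning, as open validation questions.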
Di Camillo, Barbara; Sanavia, Tiziana; Martini, Matteo; Jurman, Giuseppe; Sambo, Francesco; Barla, Annalisa; Squillario, Margherita; Furlanello, Cesare; Toffolo, Gianna; Cobelli, Claudio
2012-01-01
Motivation The identification of robust lists of molecular biomarkers related to a disease is a fundamental step for early diagnosis and treatment. However, methodologies for the discovery of biomarkers using microarray data often provide results with limited overlap. These differences are attributable to (1) dataset size (few subjects with respect to the number of features); (2) heterogeneity of the disease; (3) heterogeneity of the experimental protocols and computational pipelines employed in the analysis. In this paper, we focus on the first two issues and assess, both on simulated (through an in silico regulation network model) and real clinical datasets, the consistency of candidate biomarkers provided by a number of different methods. Methods We extensively simulated the effect of the heterogeneity characteristic of complex diseases on different sets of microarray data. Heterogeneity was reproduced by simulating both intrinsic variability of the population and the alteration of regulatory mechanisms. Population variability was simulated by modeling the evolution of a pool of subjects; then, a subset of them underwent alterations in regulatory mechanisms so as to mimic the disease state. Results The simulated data allowed us to outline advantages and drawbacks of different methods across multiple studies and varying numbers of samples and to evaluate the precision of feature selection on a benchmark with known biomarkers. Although comparable classification accuracy was reached by different methods, the use of external cross-validation loops is helpful in finding features with a higher degree of precision and stability. Application to real data confirmed these results. PMID:22403633
Data with Hierarchical Structure: Impact of Intraclass Correlation and Sample Size on Type-I Error
Musca, Serban C.; Kamiejski, Rodolphe; Nugier, Armelle; Méot, Alain; Er-Rafiy, Abdelatif; Brauer, Markus
2011-01-01
Least squares analyses (e.g., ANOVAs, linear regressions) of hierarchical data lead to Type-I error rates that depart severely from the nominal Type-I error rate assumed. Thus, when least squares methods are used to analyze hierarchical data coming from designs in which some groups are assigned to the treatment condition, and others to the control condition (i.e., the widely used “groups nested under treatment” experimental design), the Type-I error rate is seriously inflated, leading too often to the incorrect rejection of the null hypothesis (i.e., the incorrect conclusion of an effect of the treatment). To highlight the severity of the problem, we present simulations showing how the Type-I error rate is affected under different conditions of intraclass correlation and sample size. For all simulations, the Type-I error rate after application of the popular Kish (1965) correction is also considered, and the limitations of this correction technique are discussed. We conclude with suggestions on how one should collect and analyze data bearing a hierarchical structure. PMID:21687445
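The inflation described above is easy to reproduce by simulation. The sketch below (illustrative parameter values, not those of the paper) draws null data with a shared cluster effect in each group and applies an individual-level t-test that ignores the clustering; the rejection rate lands far above the nominal 5%.

```python
import random
from math import sqrt

def t_stat(a, b):
    # Two-sample t statistic with pooled variance, ignoring clustering.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / sqrt(sp2 * (1 / na + 1 / nb))

def simulate_type1(n_sims=400, clusters=3, per_cluster=20,
                   tau=1.0, sigma=1.0, crit=1.98):
    """Fraction of null datasets rejected by an individual-level t-test
    when groups are nested under treatment (crit ~ t critical value
    at alpha = 0.05 for these sample sizes; all values illustrative)."""
    rng = random.Random(42)
    rejections = 0
    for _ in range(n_sims):
        def condition():
            data = []
            for _ in range(clusters):
                mu = rng.gauss(0.0, tau)  # shared cluster (group) effect
                data += [mu + rng.gauss(0.0, sigma) for _ in range(per_cluster)]
            return data
        if abs(t_stat(condition(), condition())) > crit:
            rejections += 1
    return rejections / n_sims

rate = simulate_type1()
print(rate)  # well above the nominal 0.05 with these settings
```

With this high intraclass correlation and only three clusters per condition, the naive test rejects a true null roughly half the time, which is the phenomenon the abstract quantifies across conditions.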
NASA Technical Reports Server (NTRS)
Clanton, U. S.; Fletcher, C. R.
1976-01-01
The paper describes a Monte Carlo model for simulation of two-dimensional representations of thin sections of some of the more common igneous rock textures. These representations are extrapolated to three dimensions to develop a volume of 'rock'. The model (here applied to a medium-grained high-Ti basalt) can be used to determine a statistically significant sample for a lunar rock or to predict the probable errors in the oxide contents that can occur during the analysis of a sample that is not representative of the parent rock.
ERIC Educational Resources Information Center
Olneck, Michael R.; Bills, David B.
1979-01-01
Birth order effects in brothers were found to derive from difference in family size. Effects for family size were found even with socioeconomic background controlled. Nor were family size effects explained by parental ability. The importance of unmeasured preferences or economic resources that vary across families was suggested. (Author/RD)
Technology Transfer Automated Retrieval System (TEKTRAN)
About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants (CCFAC) began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in alm...
ERIC Educational Resources Information Center
Kelley, Ken
2008-01-01
Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…
Penton, C. Ryan; Gupta, Vadakattu V. S. R.; Yu, Julian; Tiedje, James M.
2016-01-01
We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5, and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified glomeromycota while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities for retrieving optimal diversity while still capturing rarer taxa in concert with decreasing replicate variation. PMID:27313569
Martin, James; Taljaard, Monica; Girling, Alan; Hemming, Karla
2016-01-01
Background Stepped-wedge cluster randomised trials (SW-CRT) are increasingly being used in health policy and services research, but unless they are conducted and reported to the highest methodological standards, they are unlikely to be useful to decision-makers. Sample size calculations for these designs require allowance for clustering, time effects and repeated measures. Methods We carried out a methodological review of SW-CRTs up to October 2014. We assessed adherence to reporting each of the 9 sample size calculation items recommended in the 2012 extension of the CONSORT statement to cluster trials. Results We identified 32 completed trials and 28 independent protocols published between 1987 and 2014. Of these, 45 (75%) reported a sample size calculation, with a median of 5.0 (IQR 2.5–6.0) of the 9 CONSORT items reported. Of those that reported a sample size calculation, the majority, 33 (73%), allowed for clustering, but just 15 (33%) allowed for time effects. There was a small increase in the proportions reporting a sample size calculation (from 64% before to 84% after publication of the CONSORT extension, p=0.07). The type of design (cohort or cross-sectional) was not reported clearly in the majority of studies, but cohort designs seemed to be most prevalent. Sample size calculations in cohort designs were particularly poor with only 3 out of 24 (13%) of these studies allowing for repeated measures. Discussion The quality of reporting of sample size items in stepped-wedge trials is suboptimal. There is an urgent need for dissemination of the appropriate guidelines for reporting and methodological development to match the proliferation of the use of this design in practice. Time effects and repeated measures should be considered in all SW-CRT power calculations, and there should be clarity in reporting trials as cohort or cross-sectional designs. PMID:26846897
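The "allowance for clustering" the review checks for is, in the simplest parallel-cluster case, the design effect DE = 1 + (m − 1)ρ applied to an individually randomised sample size. A minimal sketch (SW-CRTs additionally require allowance for time effects and repeated measures, which this deliberately omits; the numbers are illustrative):

```python
import math

def design_effect(cluster_size, icc):
    """Design effect for a simple parallel cluster design:
    DE = 1 + (m - 1) * rho. Stepped-wedge designs need further
    corrections for time effects and repeated measures."""
    return 1.0 + (cluster_size - 1) * icc

def clustered_n(n_individual, cluster_size, icc):
    # Individually randomised sample size inflated by the design effect.
    return math.ceil(n_individual * design_effect(cluster_size, icc))

# e.g. n = 128 from a standard two-group calculation, clusters of 20, ICC = 0.05
print(clustered_n(128, 20, 0.05))  # → 250
```

Even a modest ICC nearly doubles the required sample here, which is why omitting clustering (as 27% of the reviewed calculations did) is consequential.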
34 CFR 85.900 - Adequate evidence.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Definitions § 85.900 Adequate evidence. Adequate evidence means information sufficient to support the reasonable belief that a particular act or omission has occurred. Authority: E.O. 12549 (3 CFR, 1986 Comp., p. 189); E.O 12689 (3 CFR, 1989 Comp., p. 235); 20 U.S.C. 1082, 1094, 1221e-3 and 3474; and Sec....
29 CFR 452.110 - Adequate safeguards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 2 2010-07-01 2010-07-01 false Adequate safeguards. 452.110 Section 452.110 Labor... DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.110 Adequate safeguards. (a) In addition to the election safeguards discussed in this part, the Act contains a general mandate in section...
29 CFR 452.110 - Adequate safeguards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 2 2011-07-01 2011-07-01 false Adequate safeguards. 452.110 Section 452.110 Labor... DISCLOSURE ACT OF 1959 Election Procedures; Rights of Members § 452.110 Adequate safeguards. (a) In addition to the election safeguards discussed in this part, the Act contains a general mandate in section...
Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws
NASA Astrophysics Data System (ADS)
Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.
2009-04-01
Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Drumlins may be generated at many scales with this as the minimum, or this value may indicate the fundamental scale of bump generation ('proto-drumlins') prior to their growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48) and is approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
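The reported relation W ≈ 7L^(1/2) can be recovered by ordinary least squares in log-log space. The sketch below, with hypothetical drumlin dimensions generated to follow the relation exactly, illustrates the fitting procedure only (it is not the authors' GIS workflow):

```python
from math import log, exp, sqrt

def fit_power_law(lengths, widths):
    """Ordinary least squares in log-log space: fits W = a * L**b."""
    xs = [log(l) for l in lengths]
    ys = [log(w) for w in widths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = exp(my - b * mx)
    return a, b

# hypothetical drumlins following the reported relation W = 7 * L**0.5 exactly
lengths = [250, 400, 629, 800, 1000]
widths = [7 * sqrt(l) for l in lengths]
a, b = fit_power_law(lengths, widths)
print(a, b)  # recovers a ~= 7, b ~= 0.5
```

On the real data, scatter about the line (r² = 0.48) would of course leave residual error around these coefficients.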
Sample size determination for a specific region in a multiregional trial.
Ko, Feng-Shou; Tsou, Hsiao-Hui; Liu, Jen-Pei; Hsiao, Chin-Fu
2010-07-01
Recently, geotherapeutics have attracted much attention from sponsors as well as regulatory authorities. A bridging study defined by the International Conference on Harmonisation (ICH) E5 is usually conducted in the new region after the test product has been approved for commercial marketing in the original region due to its proven efficacy and safety. However, extensive duplication of clinical evaluation in the new region not only requires valuable development resources but also delays availability of the test product to the needed patients in the new regions. To shorten the drug lag or the time lag for approval, simultaneous drug development, submission, and approval in the world may be desirable. On September 28, 2007, the Ministry of Health, Labour and Welfare (MHLW) in Japan published the "Basic Principles on Global Clinical Trials" guidance related to the planning and implementation of global clinical studies. The 11th question and answer for the ICH E5 guideline also discuss the concept of a multiregional trial. Both guidelines have established a framework on how to demonstrate the efficacy of a drug in all participating regions while also evaluating the possibility of applying the overall trial results to each region by conducting a multiregional trial. In this paper, we focus on a specific region and establish statistical criteria for consistency between the region of interest and overall results. More specifically, four criteria are considered. Two criteria are to assess whether the treatment effect in the region of interest is as large as that of the other regions or of the regions overall, while the other two criteria are to assess the consistency of the treatment effect of the specific region with other regions or the regions overall. Sample size required for the region of interest can also be evaluated based on these four criteria. PMID:20496211
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Ferson, S.
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimation, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best-possible way. Using statistical confidence intervals as inputs unravels the closure properties of this approach, however, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations, implied by convolutive application of the logical operators for every possible pair of confidence intervals, reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best-possible bounds on probabilities for logical functions of events.
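The AND/OR/NOT operators referred to above follow directly from the Fréchet inequalities. The following is a minimal sketch for interval-valued probabilities under unknown dependence; it omits the nested confidence-structure machinery the abstract describes, and the example intervals are hypothetical:

```python
def and_interval(p, q):
    """Fréchet bounds for P(A and B), no dependence assumption.
    p and q are (lo, hi) probability intervals."""
    return (max(0.0, p[0] + q[0] - 1.0), min(p[1], q[1]))

def or_interval(p, q):
    """Fréchet bounds for P(A or B), no dependence assumption."""
    return (max(p[0], q[0]), min(1.0, p[1] + q[1]))

def not_interval(p):
    """Complement of an interval probability."""
    return (1.0 - p[1], 1.0 - p[0])

# imprecise subevent probabilities, e.g. from small-sample estimation
F = (0.2, 0.4)
G = (0.5, 0.7)
print(and_interval(F, G))  # (0.0, 0.4)
print(or_interval(F, G))   # (0.5, 1.0)
```

These bounds are best possible given only the marginal intervals; any additional dependence information can only tighten them.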
Consideration of sample size for estimating contaminant load reductions using load duration curves
NASA Astrophysics Data System (ADS)
Babbar-Sebens, Meghna; Karthikeyan, R.
2009-06-01
In Total Maximum Daily Load (TMDL) programs, load duration curves are often used to estimate reduction of contaminant loads in a watershed. A popular method for calculating these load reductions involves estimation of the 90th percentiles of monitored contaminant concentrations during different hydrologic conditions. However, water quality monitoring is expensive and can pose major limitations in collecting enough data. Availability of scarce water quality data can, therefore, deteriorate the precision in the estimates of the 90th percentiles, which, in turn, affects the accuracy of estimated load reductions. This paper proposes an adaptive sampling strategy that the data collection agencies can use for not only optimizing their collection of new samples across different hydrologic conditions, but also ensuring that newly collected samples provide opportunity for best possible improvements in the precision of the estimated 90th percentile with minimum sampling costs. The sampling strategy was used to propose sampling plans for Escherichia coli monitoring in an actual stream and different sampling procedures of the strategy were tested for hypothetical stream data. Results showed that improvement in precision using the proposed distributed sampling procedure is much better and faster than that attained via the lumped sampling procedure, for the same sampling cost. Hence, it is recommended that when agencies have a fixed sampling budget, they should collect samples in consecutive monitoring cycles as proposed by the distributed sampling procedure, rather than investing all their resources in only one monitoring cycle.
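The precision argument above — scarce data degrade the 90th-percentile estimate — can be illustrated with a simple bootstrap. The sketch below uses a hypothetical lognormal concentration distribution and compares the bootstrap confidence-interval width of the 90th percentile at two sample sizes; it is not the adaptive sampling strategy proposed in the paper:

```python
import random
import statistics

def p90(sample):
    """90th percentile (exclusive method)."""
    return statistics.quantiles(sample, n=10)[-1]

def ci_width_p90(n, boot=2000, seed=1):
    """Width of a bootstrap 95% CI for the 90th percentile of a
    hypothetical lognormal concentration sample of size n."""
    rng = random.Random(seed)
    data = [rng.lognormvariate(3.0, 1.0) for _ in range(n)]
    ests = sorted(p90(rng.choices(data, k=n)) for _ in range(boot))
    return ests[int(0.975 * boot)] - ests[int(0.025 * boot)]

width_small_n = ci_width_p90(20)
width_large_n = ci_width_p90(200)
# precision of the 90th-percentile estimate improves markedly with sample size
print(width_small_n, width_large_n)
```

The widening of the interval at small n is exactly the precision loss that propagates into the estimated load reductions.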
Hammerstrom, Kamille K; Ranasinghe, J Ananda; Weisberg, Stephen B; Oliver, John S; Fairey, W Russell; Slattery, Peter N; Oakden, James M
2012-10-01
Benthic macrofauna are used extensively for environmental assessment, but the area sampled and sieve sizes used to capture animals often differ among studies. Here, we sampled 80 sites using 3 different sized sampling areas (0.1, 0.05, and 0.0071 m²) and sieved those sediments through each of 2 screen sizes (0.5 and 1 mm) to evaluate their effect on number of individuals, number of species, dominance, nonmetric multidimensional scaling (MDS) ordination, and benthic community condition indices that are used to assess sediment quality in California. Sample area had little effect on abundance but substantially affected numbers of species, which are not easily scaled to a standard area. Sieve size had a substantial effect on both measures, with the 1-mm screen capturing only 74% of the species and 68% of the individuals collected in the 0.5-mm screen. These differences, though, had little effect on the ability to differentiate samples along gradients in ordination space. Benthic indices generally ranked sample condition in the same order regardless of gear, although the absolute scoring of condition was affected by gear type. The largest differences in condition assessment were observed for the 0.0071-m² gear. Benthic indices based on numbers of species were more affected than those based on relative abundance, primarily because we were unable to scale species number to a common area as we did for abundance. PMID:20938972
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2013 CFR
2013-01-01
... PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pecans in the... sample shall consist of 100 pecans. The individual sample shall be drawn at random from a...
7 CFR 51.1406 - Sample for grade or size determination.
Code of Federal Regulations, 2014 CFR
2014-01-01
... PRODUCTS 1 2 (INSPECTION, CERTIFICATION, AND STANDARDS) United States Standards for Grades of Pecans in the... sample shall consist of 100 pecans. The individual sample shall be drawn at random from a...
Multiscale sampling of plant diversity: Effects of minimum mapping unit size
Stohlgren, T.J.; Chong, G.W.; Kalkhan, M.A.; Schell, L.D.
1997-01-01
Only a small portion of any landscape can be sampled for vascular plant diversity because of constraints of cost (salaries, travel time between sites, etc.). Often, the investigator decides to reduce the cost of creating a vegetation map by increasing the minimum mapping unit (MMU), and/or by reducing the number of vegetation classes to be considered. Questions arise about what information is sacrificed when map resolution is decreased. We compared plant diversity patterns from vegetation maps made with 100-ha, 50-ha, 2-ha, and 0.02-ha MMUs in a 754-ha study area in Rocky Mountain National Park, Colorado, United States, using four 0.025-ha and 21 0.1-ha multiscale vegetation plots. We developed and tested species-log(area) curves, correcting the curves for within-vegetation type heterogeneity with Jaccard's coefficients. Total species richness in the study area was estimated from vegetation maps at each resolution (MMU), based on the corrected species-area curves, total area of the vegetation type, and species overlap among vegetation types. With the 0.02-ha MMU, six vegetation types were recovered, resulting in an estimated 552 species (95% CI = 520-583 species) in the 754-ha study area (330 plant species were observed in the 25 plots). With the 2-ha MMU, five vegetation types were recognized, resulting in an estimated 473 species for the study area. With the 50-ha MMU, 439 plant species were estimated for the four vegetation types recognized in the study area. With the 100-ha MMU, only three vegetation types were recognized, resulting in an estimated 341 plant species for the study area. Locally rare species and keystone ecosystems (areas of high or unique plant diversity) were missed at the 2-ha, 50-ha, and 100-ha scales. To evaluate the effects of minimum mapping unit size requires: (1) an initial stratification of homogeneous, heterogeneous, and rare habitat types; and (2) an evaluation of within-type and between-type heterogeneity generated by environmental
Tavernier, Elsa; Giraudeau, Bruno
2015-01-01
We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power was <60%, as compared with the 80% nominal power); 41%, 16% and 6%, respectively, were overpowered (i.e., with real power >90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined. PMID:26173007
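The retro-fitting idea above can be sketched for the continuous-outcome case: compute N from an assumed SD, then evaluate the power actually delivered when the true SD differs. The numbers below (effect size, SDs) are hypothetical, and a normal approximation stands in for the exact t-test calculation:

```python
from math import sqrt, erf, ceil

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def z(p):
    """Inverse standard normal CDF via bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        (lo, hi) = (mid, hi) if norm_cdf(mid) < p else (lo, mid)
    return (lo + hi) / 2

def planned_n(delta, sd_assumed, alpha=0.05, power=0.8):
    """Per-group n for a two-sample comparison of means (normal approx.)."""
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd_assumed / delta) ** 2)

def real_power(n, delta, sd_true, alpha=0.05):
    """Power actually achieved when the true SD differs from the assumed one."""
    return norm_cdf(delta / (sd_true * sqrt(2 / n)) - z(1 - alpha / 2))

n = planned_n(delta=5.0, sd_assumed=10.0)   # n = 63 per group
print(real_power(n, 5.0, sd_true=10.0))     # ~0.80: assumption was correct
print(real_power(n, 5.0, sd_true=13.0))     # underpowered when SD was 30% larger
```

A 30% underestimate of the SD drops the real power from the nominal 80% to below 60%, the paper's threshold for "underpowered".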
Americans Getting Adequate Water Daily, CDC Finds
... medlineplus/news/fullstory_158510.html Americans Getting Adequate Water Daily, CDC Finds Men take in an average ... new government report finds most are getting enough water each day. The data, from the U.S. National ...
Sugimoto, Tomoyuki; Sozu, Takashi; Hamasaki, Toshimitsu
2012-01-01
The clinical efficacy of a new treatment may often be better evaluated by two or more co-primary endpoints. Recently, in pharmaceutical drug development, there has been increasing discussion regarding establishing statistically significant favorable results on more than one endpoint in comparisons between treatments, which is referred to as a problem of multiple co-primary endpoints. Several methods have been proposed for calculating the sample size required to design a trial with multiple co-primary correlated endpoints. However, because these methods require users to have considerable mathematical sophistication and knowledge of programming techniques, their application and spread may be restricted in practice. To improve the convenience of these methods, in this paper, we provide a useful formula with accompanying numerical tables for sample size calculations to design clinical trials with two treatments, where the efficacy of a new treatment is demonstrated on continuous co-primary endpoints. In addition, we provide some examples to illustrate the sample size calculations made using the formula. Using the formula and the tables, which can be read according to the patterns of correlations and effect size ratios expected in multiple co-primary endpoints, makes it convenient to evaluate the required sample size promptly. PMID:22415870
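A simple conservative version of such a calculation can be sketched as follows: when the correlation between the two endpoints is non-negative, joint power is at least the product of the marginal powers, so it suffices to find the smallest n whose product of marginal powers reaches the target. The effect sizes are hypothetical, and a two-sample z-test approximation replaces the exact formulas of the paper:

```python
from math import sqrt, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

Z_ALPHA = 1.959964  # z_{0.975}: two-sided alpha = 0.05 for each endpoint

def marginal_power(n, es):
    """Power of a two-sample z-test for one endpoint; es = delta/sigma, n per group."""
    return norm_cdf(es * sqrt(n / 2) - Z_ALPHA)

def conservative_n(es1, es2, target=0.8):
    """Smallest per-group n whose product of marginal powers reaches the target;
    this lower-bounds the joint power whenever the endpoint correlation is >= 0."""
    n = 2
    while marginal_power(n, es1) * marginal_power(n, es2) < target:
        n += 1
    return n

# two hypothetical co-primary endpoints, each with standardized effect 0.4
print(conservative_n(0.4, 0.4))  # larger than the ~99 needed for a single endpoint
```

Accounting for a positive correlation between the endpoints, as the paper's formula and tables do, reduces this conservative n toward the single-endpoint value.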
Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy
2016-01-01
Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience. PMID:26179055
Hanley, James A; Csizmadi, Ilona; Collet, Jean-Paul
2005-12-15
A two-stage case-control design, in which exposure and outcome are determined for a large sample but covariates are measured on only a subsample, may be much less expensive than a one-stage design of comparable power. However, the methods available to plan the sizes of the stage 1 and stage 2 samples, or to project the precision/power provided by a given configuration, are limited to the case of a binary exposure and a single binary confounder. The authors propose a rearrangement of the components in the variance of the estimator of the log-odds ratio. This formulation makes it possible to plan sample sizes/precision by including variance inflation factors to deal with several confounding factors. A practical variance bound is derived for two-stage case-control studies, where confounding variables are binary, while an empirical investigation is used to anticipate the additional sample size requirements when these variables are quantitative. Two methods are suggested for sample size planning based on a quantitative, rather than binary, exposure. PMID:16269581
Boitard, Simon; Rodríguez, Willy; Jay, Flora; Mona, Stefano; Austerlitz, Frédéric
2016-03-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles. PMID:26943927
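The folded allele frequency spectrum used as a summary statistic above is straightforward to compute from unpolarized SNP data: each allele count c out of n haplotypes is mapped to min(c, n − c), so no knowledge of the ancestral allele is needed. A minimal sketch with hypothetical counts (this is not PopSizeABC itself):

```python
def folded_afs(allele_counts, n_haplotypes):
    """Folded allele frequency spectrum from unpolarized SNP data.

    allele_counts: per-SNP count of one arbitrarily chosen allele
    (0..n_haplotypes); folding maps each count c to min(c, n - c)."""
    spectrum = [0] * (n_haplotypes // 2 + 1)
    for c in allele_counts:
        spectrum[min(c, n_haplotypes - c)] += 1
    return spectrum

# 10 haplotypes (e.g. 5 diploid genomes), allele counts at 6 hypothetical SNPs
counts = [1, 9, 2, 5, 3, 7]
print(folded_afs(counts, 10))  # [0, 2, 1, 2, 0, 1]
```

Note how counts 1 and 9 fold into the same singleton class: restricting to common alleles, as the abstract recommends for robustness to sequencing error, means down-weighting the low-frequency classes of this spectrum.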
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Method to study sample object size limit of small-angle x-ray scattering computed tomography
NASA Astrophysics Data System (ADS)
Choi, Mina; Ghammraoui, Bahaa; Badal, Andreu; Badano, Aldo
2016-03-01
Small-angle x-ray scattering (SAXS) imaging is an emerging medical tool that can be used for in vivo detailed tissue characterization and has the potential to provide added contrast to conventional x-ray projection and CT imaging. We used a publicly available MC-GPU code to simulate x-ray trajectories in a SAXS-CT geometry for a target material embedded in a water background, with varying sample sizes (1, 3, 5, and 10 mm). The target materials were a water solution of gold nanoparticle (GNP) spheres with a radius of 6 nm and a water solution of dissolved serum albumin (BSA) proteins, chosen for their well-characterized small-angle scatter profiles and strongly scattering properties. Our objective is to study how the reconstructed scatter profile degrades at larger target imaging depths and increasing sample sizes. We have found that scatter profiles of the GNP in water can still be reconstructed at depths up to 5 mm embedded at the center of a 10 mm sample. Scatter profiles of BSA in water were also reconstructed at depths up to 5 mm in a 10 mm sample but with noticeable signal degradation as compared to the GNP sample. This work presents a method to study the sample size limits for future SAXS-CT imaging systems.
40 CFR 761.243 - Standard wipe sample method and size.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,”...
40 CFR 761.243 - Standard wipe sample method and size.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,”...
40 CFR 761.243 - Standard wipe sample method and size.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,”...
40 CFR 761.243 - Standard wipe sample method and size.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., AND USE PROHIBITIONS Determining a PCB Concentration for Purposes of Abandonment or Disposal of Natural Gas Pipeline: Selecting Sample Sites, Collecting Surface Samples, and Analyzing Standard PCB Wipe.../Rinse Cleanup as Recommended by the Environmental Protection Agency PCB Spill Cleanup Policy,”...
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2011 CFR
2011-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2010 CFR
2010-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2013 CFR
2013-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2012 CFR
2012-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
Code of Federal Regulations, 2014 CFR
2014-10-01
... Populations C Appendix C to Part 1356 Public Welfare Regulations Relating to Public Welfare (Continued) OFFICE... Follow-Up Populations 1. Using Finite Population Correction The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more...
Ryman, Nils; Allendorf, Fred W; Jorde, Per Erik; Laikre, Linda; Hössjer, Ola
2014-01-01
Many empirical studies estimating effective population size apply the temporal method that provides an estimate of the variance effective size through the amount of temporal allele frequency change under the assumption that the study population is completely isolated. This assumption is frequently violated, and the magnitude of the resulting bias is generally unknown. We studied how gene flow affects estimates of effective size obtained by the temporal method when sampling from a population system and provide analytical expressions for the expected estimate under an island model of migration. We show that the temporal method tends to systematically underestimate both local and global effective size when populations are connected by gene flow, and the bias is sometimes dramatic. The problem is particularly likely to occur when sampling from a subdivided population where high levels of gene flow obscure identification of subpopulation boundaries. In such situations, sampling in a manner that prevents biased estimates can be difficult. This phenomenon might partially explain the frequently reported unexpectedly low effective population sizes of marine populations that have raised concern regarding the genetic vulnerability of even exceptionally large populations. PMID:24034449
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
Ando, Yuki; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Sugimoto, Tomoyuki; Sozu, Takashi; Ohno, Yuko
2015-01-01
The effects of interventions are multi-dimensional. Use of more than one primary endpoint offers an attractive design feature in clinical trials as they capture more complete characterization of the effects of an intervention and provide more informative intervention comparisons. For these reasons, multiple primary endpoints have become a common design feature in many disease areas such as oncology, infectious disease, and cardiovascular disease. More specifically in medical product development, multiple endpoints are utilized as co-primary to evaluate the effect of the new interventions. Although methodologies to address continuous co-primary endpoints are well-developed, methodologies for binary endpoints are limited. In this paper, we describe power and sample size determination for clinical trials with multiple correlated binary endpoints, when relative risks are evaluated as co-primary. We consider a scenario where the objective is to evaluate evidence for superiority of a test intervention compared with a control intervention, for all of the relative risks. We discuss the normal approximation methods for power and sample size calculations and evaluate how the required sample size, power and Type I error vary as a function of the correlations among the endpoints. Also we discuss a simple, but conservative procedure for appropriate sample size calculation. We then extend the methods allowing for interim monitoring using group-sequential methods. PMID:26167243
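As a single-endpoint building block for such designs, the normal-approximation sample size for one relative risk can be sketched as below (one-sided α = 0.025, 80% power; the proportions are hypothetical). A full co-primary calculation would additionally require the joint power over the correlated binary endpoints, as the abstract describes:

```python
from math import log, ceil

def n_for_relative_risk(p1, p2):
    """Per-group n to detect RR = p1/p2 by a z-test on log(RR)
    (one-sided alpha = 0.025, 80% power), using the delta-method variance
    var(log RR_hat) ~= (1 - p1)/(n*p1) + (1 - p2)/(n*p2)."""
    z_a, z_b = 1.959964, 0.841621          # z_{0.975}, z_{0.80}
    var_unit = (1 - p1) / p1 + (1 - p2) / p2
    return ceil((z_a + z_b) ** 2 * var_unit / log(p1 / p2) ** 2)

# hypothetical event rates: 30% vs 20%, i.e. RR = 1.5
print(n_for_relative_risk(0.3, 0.2))
```

Requiring statistical significance on every co-primary relative risk simultaneously would push the required n above this single-endpoint figure, by an amount that shrinks as the correlation among endpoints grows.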
NASA Technical Reports Server (NTRS)
Heymann, D.; Lakatos, S.; Walton, J. R.
1973-01-01
Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.
ERIC Educational Resources Information Center
Meyer, J. Patrick; Seaman, Michael A.
2013-01-01
The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
González-Vacarezza, N; Abad-Santos, F; Carcas-Sansuan, A; Dorado, P; Peñas-Lledó, E; Estévez-Carrizo, F; Llerena, A
2013-10-01
In bioequivalence studies, intra-individual variability (CV(w)) is critical in determining sample size. In particular, highly variable drugs may require enrollment of a greater number of subjects. We hypothesize that a strategy to reduce pharmacokinetic CV(w), and hence sample size and costs, would be to include subjects with decreased metabolic enzyme capacity for the drug under study. Therefore, two mirtazapine studies with a two-way, two-period crossover design (n=68) were re-analysed to calculate the total CV(w) and the CV(w)s in three different CYP2D6 genotype groups (0, 1, and ≥2 active genes). The results showed that a 29.2 or 15.3% sample size reduction would have been possible if the recruitment had been of individuals carrying just 0, or 0 plus 1, CYP2D6 active genes, owing to the lower CV(w). This suggests that there may be a role for pharmacogenetics in the design of bioequivalence studies to reduce sample size and costs, thus introducing a new paradigm for the biopharmaceutical evaluation of drug products. PMID:22733239
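The dependence of sample size on CV(w) can be sketched with a rough planning approximation for a 2x2 crossover bioequivalence study. This sketch assumes a true geometric mean ratio of exactly 1 and uses normal quantiles rather than the noncentral-t TOST calculation used in practice; the function and numbers are illustrative, not those of the mirtazapine studies.

```python
import math
from statistics import NormalDist

def be_total_n(cv_w, alpha=0.05, beta=0.20, theta=1.25):
    """Approximate total n for a 2x2 crossover bioequivalence study,
    assuming the true geometric mean ratio is 1. Rough sketch only;
    real planning uses TOST with the noncentral t distribution."""
    z = NormalDist().inv_cdf
    # with GMR = 1, beta is split over the two one-sided tests
    num = 2 * (z(1 - alpha) + z(1 - beta / 2)) ** 2 * cv_w ** 2
    return math.ceil(num / math.log(theta) ** 2)

# n scales roughly with CV(w) squared, so enrolling a genotype group
# with lower intra-individual variability shrinks the study:
print(be_total_n(0.30), be_total_n(0.25))
```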
Platinum tetraiodide aerosols generated with a spinning disk from solutions in ethanol were used to test a particle size correction factor recently proposed by Criss for the X-ray fluorescence analysis of filter-deposited particulate samples. A set of standards of well-defined pa...
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
ERIC Educational Resources Information Center
Lambert, Richard; Flowers, Claudia; Sipe, Theresa; Idleman, Lynda
This paper discusses three software packages that offer unique features and options that greatly simplify the research package for conducting surveys. The first package, EPSILON, from Resource Group, Ltd. of Dallas (Texas) is designed to perform a variety of sample size calculations covering most of the commonly encountered survey research…
ERIC Educational Resources Information Center
De Champlain, Andre F.; Gessaroli, Marc E.; Tang, K. Linda; De Champlain, Judy E.
The empirical Type I error rates of Poly-DIMTEST (H. Li and W. Stout, 1995) and the LISREL8 chi square fit statistic (K. Joreskog and D. Sorbom, 1993) were compared with polytomous unidimensional data sets simulated to vary as a function of test length and sample size. The rejection rates for both statistics were also studied with two-dimensional…
COMPARISON OF BIOLOGICAL COMMUNITIES: THE PROBLEM OF SAMPLE REPRESENTATIVENESS
Obtaining an adequate, representative sample of biological communities or assemblages to make richness or compositional comparisons among sites is a continuing challenge. Traditionally, sample size is based on numbers of replicates or area collected or numbers of individuals enum...
Jones, A D; Aitken, R J; Fabriès, J F; Kauffer, E; Liden, G; Maynard, A; Riediger, G; Sahle, W
2005-08-01
The counting of fibres on membrane filters could be facilitated by using size-selective samplers to exclude coarse particulate and fibres that impede fibre counting. Furthermore, the use of thoracic size selection would also remove the present requirement to discriminate fibres by diameter during counting. However, before thoracic samplers become acceptable for sampling fibres, their performance with fibres needs to be determined. This study examines the performance of four thoracic samplers: the GK2.69 cyclone, a Modified SIMPEDS cyclone, the CATHIA sampler (inertial separation) and the IOM thoracic sampler (porous foam pre-selector). The uniformity of sample deposit on the filter samples, which is important when counts are taken on random fields, was examined with two sizes of spherical particles (1 and 10 microm) and a glass fibre aerosol with fibres spanning the aerodynamic size range of the thoracic convention. Counts by optical microscopy examined fields on a set scanning pattern. Hotspots of deposition were detected for one of the thoracic samplers (Modified SIMPEDS with the 10 microm particles and the fibres). These hotspots were attributed to the inertial flow pattern near the port from the cyclone pre-separator. For the other three thoracic samplers, the distribution was similar to that on a cowled sampler, the current standard sampler for fibres. Aerodynamic selection was examined by comparing fibre concentration on thoracic samples with those measured on semi-isokinetic samples, using fibre size (and hence calculated aerodynamic diameter) and number data obtained by scanning electron microscope evaluation in four laboratories. The size-selection characteristics of three thoracic samplers (GK2.69, Modified SIMPEDS and CATHIA) appeared very similar to the thoracic convention; there was a slight oversampling (relative to the convention) for d(ae) < 7 microm, but that would not be disadvantageous for comparability with the cowled sampler. Only the IOM
[The preparation of blood samples for the automatic measurement of erythrocyte size].
Kishchenko, G P; Gorbatov, A F; Kostyrev, O A
1990-01-01
The following procedure is proposed: fixation for 12 h in a low ionic strength solution (4% formaldehyde in 50 mM sodium phosphate buffer), drying of a drop of the suspension on the slide, and gallocyanin staining. All red cells were contrast-stained without a light central spot. The sizes of different red cell groups on the slide differed by less than 5%. PMID:1704944
Does size matter? An investigation into the Rey Complex Figure in a pediatric clinical sample.
Loughan, Ashlee R; Perna, Robert B; Galbreath, Jennifer D
2014-01-01
The Rey Complex Figure Test (RCF) copy requires visuoconstructional skills and significant attentional, organizational, and problem-solving skills. Most scoring schemes codify a subset of the details involved in figure construction. Research is unclear regarding the meaning of figure size. The research hypothesis of our inquiry is that size of the RCF copy will have neuropsychological significance. Data from 95 children (43 girls, 52 boys; ages 6-18 years) with behavioral and academic issues revealed that larger figure drawings were associated with higher RCF total scores and significantly higher scores across many neuropsychological tests including the Wechsler Individual Achievement Test-Second Edition (WIAT-II) Word Reading (F = 5.448, p = .022), WIAT-II Math Reasoning (F = 6.365, p = .013), Children's Memory Scale Visual Delay (F = 4.015, p = .048), Trail-Making Test-Part A (F = 5.448, p = .022), and RCF Recognition (F = 4.862, p = .030). Results indicated that wider figures were associated with higher cognitive functioning, which may be part of an adaptive strategy in helping facilitate accurate and relative proportions of the complex details presented in the RCF. Overall, this study initiates the investigation of the RCF size and the relationship between size and a child's neuropsychological profile. PMID:24236943
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper compares the collection characteristics of a new rotating impactor for ultra fine aerosols (FLB) with the industry standard (Hock). The volume and droplet size distribution collected by the rotating impactors were measured via spectroscopy and microscopy. The rotary impactors were co-lo...
40 CFR 761.243 - Standard wipe sample method and size.
Code of Federal Regulations, 2010 CFR
2010-07-01
... surface areas, when small diameter pipe, a small valve, or a small regulator. When smaller surfaces are sampled, convert the... pipe segment or pipeline section using a standard wipe test as defined in § 761.123. Detailed...
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required and the difficulty of attaining reliable estimates, we advise caution should such a mark-recapture effort be initiated. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
NASA Astrophysics Data System (ADS)
Tremblay, R. T.; Zika, R. G.
2003-04-01
Aerosol samples were collected for the analysis of organic source markers using non-rotating Micro Orifice Uniform Deposit Impactors (MOUDI) as part of the Bay Regional Atmospheric Chemistry Experiment (BRACE) in Tampa, FL, USA. Daily samples were collected 12 m above ground at a flow rate of 30 lpm throughout the month of May 2002. Aluminum foil discs were used to sample aerosol size fractions with aerodynamic cut diameters of 18, 10, 5.6, 3.2, 1.8, 1.0, 0.56, 0.32, 0.17 and 0.093 μm. Samples were solvent extracted using a mixture of dichloromethane/acetone/hexane, concentrated, and then analyzed using gas chromatography-mass spectrometry (GC/MS). Low detection limits were achieved using an HP Programmable Temperature Vaporizing inlet (PTV) and large volume injections (80 μl). Excellent chromatographic resolution was obtained using a 60 m long RTX-5MS, 0.25 mm I.D. column. A quantification method was built for over 90 organic compounds chosen as source markers, including straight/iso/anteiso alkanes and polycyclic aromatic hydrocarbons (PAH). The investigation of potential aerosol sources for different particle sizes using known organic markers and source profiles will be presented. Size distributions of carbon preference indices (CPI), percent wax n-alkanes (%WNA) and concentrations of selected compounds will be discussed. Also, results will be compared with samples acquired in different environments, including the 1999 Atlanta SuperSite Experiment, GA, USA.
Bhaskar, Anand; Wang, Y.X. Rachel; Song, Yun S.
2015-01-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. PMID:25564017
The Effect of Grain Size on Radon Exhalation Rate in Natural-dust and Stone-dust Samples
NASA Astrophysics Data System (ADS)
Kumari, Raj; Kant, Krishan; Garg, Maneesha
Radiation dose to the human population due to inhalation of radon and its progeny contributes more than 50% of the total dose from natural sources; radon is the second leading cause of lung cancer after smoking. In the present work the dependence of the radon exhalation rate on the physical sample parameters of stone dust and natural dust was studied. The samples under study were first crushed, ground, and dried, and then passed through sieves with different pore sizes to get samples of various grain sizes (μm). The average radon mass exhalation rate is 5.95±2.7 mBq kg-1 hr-1 and the average radon surface exhalation rate is 286±36 mBq m-2 hr-1 for stone dust; for natural dust, the average radon mass exhalation rate is 9.02±5.37 mBq kg-1 hr-1 and the average radon surface exhalation rate is 360±67 mBq m-2 hr-1. The exhalation rate was found to increase with the grain size of the sample. The obtained values of the radon exhalation rate for all the samples are under the radon exhalation rate limits reported worldwide.
Asbestos/NESHAP adequately wet guidance
Shafer, R.; Throwe, S.; Salgado, O.; Garlow, C.; Hoerath, E.
1990-12-01
The Asbestos NESHAP requires facility owners and/or operators involved in demolition and renovation activities to control emissions of particulate asbestos to the outside air because no safe concentration of airborne asbestos has ever been established. The primary method used to control asbestos emissions is to adequately wet the Asbestos Containing Material (ACM) with a wetting agent prior to, during and after demolition/renovation activities. The purpose of the document is to provide guidance to asbestos inspectors and the regulated community on how to determine if friable ACM is adequately wet as required by the Asbestos NESHAP.
NASA Astrophysics Data System (ADS)
Di Lorenzo, Robert A.; Young, Cora J.
2016-01-01
The majority of brown carbon (BrC) in atmospheric aerosols is derived from biomass burning (BB) and is primarily composed of extremely low volatility organic carbons. We use two chromatographic methods to compare the contribution of large and small light-absorbing BrC components in aged BB aerosols with UV-vis absorbance detection: (1) size exclusion chromatography (SEC) and (2) reverse phase high-performance liquid chromatography. We observe no evidence of small molecule absorbers. Most BrC absorption arises from large molecular weight components (>1000 amu). This suggests that although small molecules may contribute to BrC absorption near the BB source, analyses of aerosol extracts should use methods selective to large molecular weight compounds because these species may be responsible for long-term BrC absorption. Further characterization with electrospray ionization mass spectrometry (MS) coupled to SEC demonstrates an underestimation of the molecular size determined through MS as compared to SEC.
Watts, R
2013-04-01
The morphometry of the lumbar vertebral canal is of importance to clinical and bioarchaeological researchers, yet there are no growth standards for its diameters and there is a disagreement over the age at which its development is complete. Direct measurements of the midsagittal and interpedicular diameters of the lumbar vertebral canal (L1-L5) were taken from 65 children (3-17 years) and 120 adults (>17 years) from the East Smithfield Black Death cemetery, London (1348-1350 CE) to discover the age at which these diameters reached their final adult size in an historical population from later mediaeval London. Children were grouped into age categories: 3-5 years; 6-10 years; 11-14 years; 15-17 years, and the group means of each diameter were compared with the mean adult diameters using one-way ANOVAs. The child midsagittal diameters were not significantly different from adults in any age category, indicating that this diameter reached adult size by 3-5 years of age. However, interpedicular diameters increased with age until 15-17 years when they reached full adult size. Mean diameters and percentiles (10th and 90th) are provided for each age category. PMID:23415375
Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory
Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo
2013-01-01
Recently, airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality for operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e., they use variables derived from LiDAR point clouds as predictive variables in models calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes that hundreds of plots must be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods of calibration plot selection, with the specific aim that they be used in LiDAR-based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity and coverage of the variability of the forest features measured. In the former case, it is important to also take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature-space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.
Sampling size in the verification of manufactured-supplied air kerma strengths
Ramos, Luis Isaac; Martinez Monge, Rafael
2005-11-15
Quality control mandates that the air kerma strengths (S_K) of permanent seeds be verified; this is usually done by statistics inferred from 10% of the seeds. The goal of this paper is to propose a new sampling method in which the number of seeds to be measured is set beforehand according to an a priori level of statistical uncertainty. The results are based on the assumption that S_K has a normal distribution. To demonstrate this, the S_K of each seed measured was corrected to ensure that the average S_K of its sample remained the same; in this process 2030 results were collected and analyzed using a normal plot. In our opinion, the number of seeds sampled should be determined beforehand according to an a priori level of statistical uncertainty.
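A minimal sketch of such an a priori calculation, assuming the seed strengths are normally distributed as the normal-plot analysis supports: choose n so that the mean of the sampled seeds pins down the batch mean to a prescribed relative tolerance. The function name and example numbers are hypothetical, not the paper's.

```python
import math
from statistics import NormalDist

def seeds_to_sample(sigma_rel, tol_rel, confidence=0.95):
    """Seeds to measure so the sample mean air kerma strength lies
    within tol_rel of the batch mean at the given confidence,
    assuming normally distributed S_K. Illustrative sketch."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma_rel / tol_rel) ** 2)

# e.g. 5% relative spread among seeds, mean wanted to within 2%:
print(seeds_to_sample(0.05, 0.02))
```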
Target preparation for milligram sized 14C samples and data evaluation for AMS measurements
NASA Astrophysics Data System (ADS)
Andree, Michael; Beer, Jürg; Oeschger, Hans; Bonani, G.; Hofmann, H. J.; Morenzoni, E.; Nessi, M.; Suter, M.; Wölfli, W.
1984-11-01
Our preparation technique produces in a glow-discharge an amorphous carbon deposit on a copper substrate. The process starts with 1.6 cm 3 CO 2 STP (900 μg carbon) which is reduced over hot zinc to CO and subsequently cracked in the discharge. The yield of the process is typically 80%. With these targets in the Zürich ion source ion currents up to 20 μA are obtained. The background of samples prepared with this technique is presently around 30 ka (2.5% MODERN). The precision after half an hour measuring time for a modern sample is 0.7% and 2.7% for a three half-lives old sample, including the errors of the background and the NBS oxalic acid measurement. The method we use to correct for the background of the preparation and the accelerator as well as for the fractionation in the accelerator is presented.
Family size, birth order, and intelligence in a large South American sample.
Velandia, W; Grandon, G M; Page, E B
1978-01-01
The confluence theory, which hypothesizes a relationship between intellectual development, birth order, and family size, was examined in a Colombian study of more than 36,000 college applicants. The results of the study did not support the confluence theory. The confluence theory states that the intellectual development of a child is related to the average mental age of the members of his family at the time of his birth. The mental age of each parent is always assigned a value of 30, and siblings are given scores equivalent to their chronological age at the birth of the subject. Therefore, the average mental age of family members for a 1st-born child is 30, or 60 divided by 2. If a subject is born into a family consisting of 2 parents and a 6-year-old sibling, the average mental age of family members is 22, i.e., (30+30+6)/3; the average mental age of family members tends, therefore, to decrease with each birth. The hypothesis derived from the confluence theory states that there is a positive relationship between the average mental age of a subject's family and the subject's performance on intelligence tests. In the Colombian study, data on family size, birth order, and socioeconomic status were derived from college application forms. Intelligence test scores for each subject were obtained from college entrance exams. The mental age of each applicant's family at the time of the applicant's birth was calculated. Multiple correlation analysis and path analysis were used to assess the relationship. Results were: 1) the test scores of subjects from families with 2, 3, 4, and 5 children were higher than the test scores of the 1st-born subjects; 2) the rank order of intelligence by family size was 3,4,5,2,6,1 instead of the hypothesized 1,2,3,4,5,6; and 3) only 1% of the variability in test scores was explained by the variables of birth order and family size. Further analysis indicated that socioeconomic status was a far more powerful explanatory variable than family size. PMID:12266293
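The arithmetic of the confluence model described above is easy to make concrete. The helper below is ours, but the rule it encodes (each parent counts as mental age 30, each older sibling as its chronological age at the subject's birth) is taken directly from the abstract.

```python
def confluence_average(sibling_ages_at_birth):
    """Average family mental age at a child's birth under the
    confluence model: each parent counts as mental age 30, each
    older sibling as its chronological age at that birth."""
    members = [30, 30] + list(sibling_ages_at_birth)
    return sum(members) / len(members)

print(confluence_average([]))   # firstborn: (30 + 30) / 2 = 30.0
print(confluence_average([6]))  # 6-year-old sibling: (30 + 30 + 6) / 3 = 22.0
```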
Pan, Bo; Shibutani, Yoji; Zhang, Xu; Shang, Fulin
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
NASA Astrophysics Data System (ADS)
Cantarello, Elena; Steck, Claude E.; Fontana, Paolo; Fontaneto, Diego; Marini, Lorenzo; Pautasso, Marco
2010-03-01
Recent large-scale studies have shown that biodiversity-rich regions also tend to be densely populated areas. The most obvious explanation is that biodiversity and human beings tend to match the distribution of energy availability, environmental stability and/or habitat heterogeneity. However, the species-people correlation can also be an artefact, as more populated regions could show more species because of more thorough sampling. Few studies have tested this sampling bias hypothesis. Using a newly collated dataset, we studied whether Orthoptera species richness is related to human population size in Italy's regions (average area 15,000 km2) and provinces (2,900 km2). As expected, the observed number of species increases significantly with increasing human population size for both grain sizes, although the proportion of variance explained is minimal at the provincial level. However, variations in observed Orthoptera species richness are primarily associated with the available number of records, which is in turn well correlated with human population size (at least at the regional level). Estimated Orthoptera species richness (Chao2 and Jackknife) also increases with human population size for both regions and provinces, although in both cases the increase is not significant when controlling for variation in area and number of records. Our study confirms the hypothesis that broad-scale human population-biodiversity correlations can in some cases be artefactual. More systematic sampling of less studied taxa such as invertebrates is necessary to ascertain whether biogeographical patterns persist when sampling effort is kept constant or included in models.
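The Chao2 richness estimator used above has a simple closed form. The sketch below implements the classic version (bias-corrected variants exist) with hypothetical incidence data; the extra term estimates the species missed by incomplete sampling from how many species occur in only one or two sampling units.

```python
def chao2(incidence):
    """Classic Chao2 incidence-based richness estimator.
    `incidence` maps species -> number of sampling units in which
    the species was recorded. Undefined when q2 == 0 (a
    bias-corrected variant handles that case)."""
    s_obs = sum(1 for c in incidence.values() if c > 0)
    q1 = sum(1 for c in incidence.values() if c == 1)  # uniques
    q2 = sum(1 for c in incidence.values() if c == 2)  # duplicates
    return s_obs + q1 * q1 / (2 * q2)

# hypothetical records: 6 species observed, 3 uniques, 2 duplicates
records = {"sp1": 5, "sp2": 1, "sp3": 1, "sp4": 2, "sp5": 1, "sp6": 2}
print(chao2(records))  # 6 + 3*3 / (2*2) = 8.25
```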
Adequate supervision for children and adolescents.
Anderst, James; Moffatt, Mary
2014-11-01
Primary care providers (PCPs) have the opportunity to improve child health and well-being by addressing supervision issues before an injury or exposure has occurred and/or after an injury or exposure has occurred. Appropriate anticipatory guidance on supervision at well-child visits can improve supervision of children, and may prevent future harm. Adequate supervision varies based on the child's development and maturity, and the risks in the child's environment. Consideration should be given to issues as wide ranging as swimming pools, falls, dating violence, and social media. By considering the likelihood of harm and the severity of the potential harm, caregivers may provide adequate supervision by minimizing risks to the child while still allowing the child to take "small" risks as needed for healthy development. Caregivers should initially focus on direct (visual, auditory, and proximity) supervision of the young child. Gradually, supervision needs to be adjusted as the child develops, emphasizing a safe environment and safe social interactions, with graduated independence. PCPs may foster adequate supervision by providing concrete guidance to caregivers. In addition to preventing injury, supervision includes fostering a safe, stable, and nurturing relationship with every child. PCPs should be familiar with age/developmentally based supervision risks, adequate supervision based on those risks, characteristics of neglectful supervision based on age/development, and ways to encourage appropriate supervision throughout childhood. PMID:25369578
Small Rural Schools CAN Have Adequate Curriculums.
ERIC Educational Resources Information Center
Loustaunau, Martha
The small rural school's foremost and largest problem is providing an adequate curriculum for students in a changing world. Often the small district cannot or is not willing to pay the per-pupil cost of curriculum specialists, specialized courses using expensive equipment no more than one period a day, and remodeled rooms to accommodate new…
Funding the Formula Adequately in Oklahoma
ERIC Educational Resources Information Center
Hancock, Kenneth
2015-01-01
This report is a longitudinal simulation study that looks at how the ratio of state support to local support affects the number of school districts that break the common school funding formula, which in turn affects the equity of distribution to the common schools. After nearly two decades of adequately supporting the funding formula, Oklahoma…
Item Characteristic Curve Parameters: Effects of Sample Size on Linear Equating.
ERIC Educational Resources Information Center
Ree, Malcom James; Jensen, Harald E.
By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
ERIC Educational Resources Information Center
De Champlain, Andre; Gessaroli, Marc E.
1998-01-01
Type I error rates and rejection rates for three-dimensionality assessment procedures were studied with data sets simulated to reflect short tests and small samples. Results show that the G-squared difference test (D. Bock, R. Gibbons, and E. Muraki, 1988) suffered from a severely inflated Type I error rate at all conditions simulated. (SLD)
Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part I. Pulsation Measurements
Lee, Eun Gyung; Lee, Larry; Möhlmann, Carsten; Flemmer, Michael M.; Kashon, Michael; Harper, Martin
2015-01-01
Pulsations generated by personal sampling pumps modulate the airflow through the sampling trains, thereby varying sampling efficiencies, and possibly invalidating collection or monitoring. The purpose of this study was to characterize pulsations generated by personal sampling pumps relative to a nominal flow rate at the inlet of different respirable cyclones. Experiments were conducted using a factorial combination of 13 widely used sampling pumps (11 medium and 2 high volumetric flow rate pumps having a diaphragm mechanism) and 7 cyclones [10-mm nylon, also known as Dorr-Oliver (DO), Higgins-Dewell (HD), GS-1, GS-3, Aluminum, GK2.69, and FSP-10]. A hot-wire anemometer probe cemented to the inlet of each cyclone type was used to obtain pulsation readings. The three medium flow rate pump models showing the highest, a midrange, and the lowest pulsations and two high flow rate pump models for each cyclone type were tested with dust-loaded filters (0.05, 0.21, and 1.25 mg) to determine the effects of filter loading on pulsations. The effects of different tubing materials and lengths on pulsations were also investigated. The fundamental frequency range was 22-110 Hz and the magnitude of pulsation as a proportion of the mean flow rate ranged from 4.4 to 73.1%. Most pump/cyclone combinations generated pulse magnitudes >10% (48 out of 59 combinations), while pulse shapes varied considerably. Pulsation magnitudes were not considerably different for the clean and dust-loaded filters for the DO, HD, and Aluminum cyclones, but no consistent pattern was observed for the other cyclone types. Tubing material had less effect on pulsations than tubing length; when the tubing length was 183 cm, pronounced damping was observed for a pump with high pulsation (>60%) for all tested tubing materials except the Tygon Inert tubing. The findings in this study prompted a further study to determine the possibility of shifts in cyclone sampling efficiency due to sampling pump pulsations.
Krogager, J.-K.; Zirm, A. W.; Toft, S.; Man, A.; Brammer, G.
2014-12-10
We present deep, near-infrared Hubble Space Telescope/Wide Field Camera 3 grism spectroscopy and imaging for a sample of 14 galaxies at z ≈ 2 selected from a mass-complete photometric catalog in the COSMOS field. By combining the grism observations with photometry in 30 bands, we derive accurate constraints on their redshifts, stellar masses, ages, dust extinction, and formation redshifts. We show that the slope and scatter of the z ∼ 2 mass-size relation of quiescent galaxies are consistent with the local relation, and confirm previous findings that the sizes for a given mass are smaller by a factor of two to three. Finally, we show that the observed evolution of the mass-size relation of quiescent galaxies between z = 2 and 0 can be explained by the quenching of increasingly larger star-forming galaxies at a rate dictated by the increase in the number density of quiescent galaxies with decreasing redshift. However, we find that the scatter in the mass-size relation should increase in the quenching-driven scenario, in contrast to what is seen in the data. This suggests that merging is not needed to explain the evolution of the median mass-size relation of massive galaxies, but may still be required to tighten its scatter and explain the size growth of individual z = 2 quiescent galaxies.
In situ detection of small-size insect pests sampled on traps using multifractal analysis
NASA Astrophysics Data System (ADS)
Xia, Chunlei; Lee, Jang-Myung; Li, Yan; Chung, Bu-Keun; Chon, Tae-Soo
2012-02-01
We introduce a multifractal analysis for detecting small-size pests (e.g., whiteflies) on sticky-trap images in situ. An automatic attraction system is utilized for collecting pests from greenhouse plants. We applied multifractal analysis to segment whitefly images based on local singularity and global image characteristics. According to the theory of multifractal dimension, candidate whitefly blobs are initially defined from the sticky-trap image. Two schemes, fixed thresholding and regional minima obtainment, were utilized for feature extraction of candidate whitefly image areas. The experiment was conducted with field images from a greenhouse. Detection results were compared with other adaptive segmentation algorithms. The F score, which combines precision and recall, was higher for the proposed multifractal analysis (96.5%) than for conventional methods such as Watershed (92.2%) and Otsu (73.1%). The true positive rate of multifractal analysis was 94.3%, and the false positive rate was a minimal 1.3%. Detection performance was further tested via human observation. Agreement between manual and automatic counting was markedly higher with multifractal analysis (R2=0.992) than with Watershed (R2=0.895) and Otsu (R2=0.353), confirming that overall detection of small-size pests under field conditions is most feasible with multifractal analysis.
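The F score reported above combines precision and recall as their harmonic mean, and the true and false positive rates come from the detection confusion matrix. A minimal sketch of these metrics (helper names are ours, not the authors' code):

```python
def f_measure(precision, recall, beta=1.0):
    """F score: weighted harmonic mean of precision and recall."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def detection_rates(tp, fp, fn, tn):
    """True positive rate, false positive rate, and precision
    from confusion-matrix counts."""
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    precision = tp / (tp + fp)
    return tpr, fpr, precision
```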
Second generation laser-heated microfurnace for the preparation of microgram-sized graphite samples
NASA Astrophysics Data System (ADS)
Yang, Bin; Smith, A. M.; Long, S.
2015-10-01
We present construction details and test results for two second-generation laser-heated microfurnaces (LHF-II) used to prepare graphite samples for Accelerator Mass Spectrometry (AMS) at ANSTO. Based on systematic studies aimed at optimising the performance of our prototype laser-heated microfurnace (LHF-I) (Smith et al., 2007 [1]; Smith et al., 2010 [2,3]; Yang et al., 2014 [4]), we have designed the LHF-II to have the following features: (i) it has a small reactor volume of 0.25 mL, allowing us to completely graphitise carbon dioxide samples containing as little as 2 μg of C; (ii) it can operate over a large pressure range (0-3 bar) and so has the capacity to graphitise CO2 samples containing up to 100 μg of C; (iii) it is compact, with three valves integrated into the microfurnace body; (iv) it is compatible with our new miniaturised conventional graphitisation furnaces (MCF), also designed for small samples, and shares a common vacuum system. Early tests have shown that the extraneous carbon added during graphitisation in each LHF-II is of the order of 0.05 μg, assuming 100 pMC activity, similar to that of the prototype unit. We use a 'budget' fibre packaged array for the diode laser with custom built focusing optics. The use of a new infrared (IR) thermometer with a short focal length has allowed us to decrease the height of the light-proof safety enclosure. These innovations have produced a cheaper and more compact device. As with the LHF-I, feedback control of the catalyst temperature and logging of the reaction parameters is managed by a LabVIEW interface.
Lyman, Edward; Zuckerman, Daniel M.
2008-01-01
Although atomistic simulations of proteins and other biological systems are approaching microsecond timescales, the quality of simulation trajectories has remained difficult to assess. Such assessment is critical not only for establishing the relevance of any individual simulation, but also in the extremely active field of developing computational methods. Here we map the trajectory assessment problem onto a simple statistical calculation of the “effective sample size” - i.e., the number of statistically independent configurations. The mapping is achieved by asking the question, “How much time must elapse between snapshots included in a sample for that sample to exhibit the statistical properties expected for independent and identically distributed configurations?” Our method is more general than standard autocorrelation methods, in that it directly probes the configuration space distribution, without requiring a priori definition of configurational substates, and without any fitting parameters. We show that the method is equally and directly applicable to toy models, peptides, and a 72-residue protein model. Variants of our approach can readily be applied to a wide range of physical and chemical systems. PMID:17935314
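For contrast with the distribution-based method described here, the standard autocorrelation approach that the authors generalize beyond estimates the effective sample size as the trajectory length divided by the integrated autocorrelation time of some observable. A minimal sketch under a simple truncation rule (our own choice, not the paper's method):

```python
def integrated_autocorr_time(x):
    """Integrated autocorrelation time of a scalar time series,
    summing lagged autocorrelations until the estimate first
    drops below zero (a simple, common truncation rule)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    tau = 1.0
    for lag in range(1, n // 2):
        c = sum((x[i] - mean) * (x[i + lag] - mean)
                for i in range(n - lag)) / ((n - lag) * var)
        if c < 0:
            break
        tau += 2.0 * c
    return tau

def effective_sample_size(x):
    """Approximate number of statistically independent snapshots."""
    return len(x) / integrated_autocorr_time(x)

# A slowly alternating series: many snapshots, few independent ones.
blocky = ([0.0] * 50 + [1.0] * 50) * 10
```

For the `blocky` series, 1000 snapshots collapse to only a few dozen independent configurations, which is exactly the gap the effective-sample-size idea is meant to expose.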
Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios
NASA Technical Reports Server (NTRS)
Juarez, Alfredo; Harper, Susana A.
2016-01-01
The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for ignition susceptibility driven by exposure to external heat in an oxygen-enriched environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes of inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during testing, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol resulting from this evaluation is also presented. The final proposed protocol outlines an incremental step method for determining optimal conditions, using increased sample sizes while respecting test system safety limits. The proposed improved test method increases confidence in results obtained with the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.
Sahiner, Berkman; Chan, Heang-Ping; Hadjiiski, Lubomir
2008-01-01
In a practical classifier design problem the sample size is limited, and the available finite sample needs to be used both to design a classifier and to predict the classifier's performance for the true population. Since a larger sample is more representative of the population, it is advantageous to design the classifier with all the available cases, and to use a resampling technique for performance prediction. We conducted a Monte Carlo simulation study to compare the ability of different resampling techniques in predicting the performance of a neural network (NN) classifier designed with the available sample. We used the area under the receiver operating characteristic curve as the performance index for the NN classifier. We investigated resampling techniques based on the cross-validation, the leave-one-out method, and three different types of bootstrapping, namely, the ordinary, .632, and .632+ bootstrap. Our results indicated that, under the study conditions, there can be a large difference in the accuracy of the prediction obtained from different resampling methods, especially when the feature space dimensionality is relatively large and the sample size is small. Although this investigation is performed under some specific conditions, it reveals important trends for the problem of classifier performance prediction under the constraint of a limited data set. PMID:18234468
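Of the resampling estimators compared here, the .632 bootstrap weights the apparent (resubstitution) error against the average out-of-bag error. A minimal sketch using a toy 1-nearest-neighbour classifier and error rate as the index; the study itself used a neural network and the area under the ROC curve, and all names below are ours:

```python
import random

def err_632(data, labels, train_fn, err_fn, n_boot=50, rng=None):
    """The .632 bootstrap error estimate:
    0.368 * apparent error + 0.632 * mean out-of-bag error."""
    rng = rng or random.Random(0)
    n = len(data)
    apparent = err_fn(train_fn(data, labels), data, labels)
    oob_errs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        chosen = set(idx)
        oob = [i for i in range(n) if i not in chosen]
        if not oob:
            continue
        model = train_fn([data[i] for i in idx], [labels[i] for i in idx])
        oob_errs.append(err_fn(model,
                               [data[i] for i in oob],
                               [labels[i] for i in oob]))
    return 0.368 * apparent + 0.632 * sum(oob_errs) / len(oob_errs)

def train_1nn(xs, ys):
    """'Training' a 1-NN classifier just stores the labeled points."""
    return list(zip(xs, ys))

def err_1nn(model, xs, ys):
    """Error rate of nearest-neighbour predictions on (xs, ys)."""
    wrong = sum(min(model, key=lambda p: abs(p[0] - x))[1] != y
                for x, y in zip(xs, ys))
    return wrong / len(xs)

# Two well-separated 1-D classes: the estimated error should be near zero.
data = [0.1 * i for i in range(10)] + [5.0 + 0.1 * i for i in range(10)]
labels = [0] * 10 + [1] * 10
```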
How Many Is Enough? Effect of Sample Size in Inter-Subject Correlation Analysis of fMRI
Pajula, Juha; Tohka, Jussi
2016-01-01
Inter-subject correlation (ISC) is a widely used method for analyzing functional magnetic resonance imaging (fMRI) data acquired during naturalistic stimuli. A challenge in ISC analysis is to define the required sample size in the way that the results are reliable. We studied the effect of the sample size on the reliability of ISC analysis and additionally addressed the following question: How many subjects are needed for the ISC statistics to converge to the ISC statistics obtained using a large sample? The study was realized using a large block design data set of 130 subjects. We performed a split-half resampling based analysis repeatedly sampling two nonoverlapping subsets of 10–65 subjects and comparing the ISC maps between the independent subject sets. Our findings suggested that with 20 subjects, on average, the ISC statistics had converged close to a large sample ISC statistic with 130 subjects. However, the split-half reliability of unthresholded and thresholded ISC maps improved notably when the number of subjects was increased from 20 to 30 or more. PMID:26884746
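The split-half resampling scheme described above can be sketched generically: repeatedly draw two nonoverlapping subject sets, compute a group-level map in each, and correlate the two maps. The `mean_map` statistic below is a stand-in for the actual ISC computation, and all names are ours:

```python
import random

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def mean_map(maps):
    """Voxel-wise mean across subjects (stand-in for an ISC map)."""
    return [sum(vals) / len(vals) for vals in zip(*maps)]

def split_half(subject_maps, stat_fn, n_splits=10, rng=None):
    """Correlations between group maps computed from repeated
    nonoverlapping half-splits of the subject pool."""
    rng = rng or random.Random(0)
    n = len(subject_maps)
    corrs = []
    for _ in range(n_splits):
        order = list(range(n))
        rng.shuffle(order)
        half = n // 2
        a = stat_fn([subject_maps[i] for i in order[:half]])
        b = stat_fn([subject_maps[i] for i in order[half:2 * half]])
        corrs.append(pearson(a, b))
    return corrs

# Synthetic subjects sharing a common "signal" plus private noise.
_rng = random.Random(1)
signal = [float(i % 7) for i in range(60)]
maps = [[s + _rng.gauss(0.0, 1.0) for s in signal] for _ in range(40)]
```

With a strong shared signal, the two halves agree closely; shrinking the pool or the signal lowers the split-half correlation, which is the reliability effect the study quantifies.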
NASA Astrophysics Data System (ADS)
VanCuren, Richard A.; Cahill, Thomas; Burkhart, John; Barnes, David; Zhao, Yongjing; Perry, Kevin; Cliff, Steven; McConnell, Joe
2012-06-01
An ongoing program to continuously collect time- and size-resolved aerosol samples from ambient air at Summit Station, Greenland (72.6 N, 38.5 W) is building a long-term data base to both record individual transport events and provide long-term temporal context for past and future intensive studies at the site. As a "first look" at this data set, analysis of samples collected from summer 2005 to spring 2006 demonstrates the utility of continuous sampling to characterize air masses over the ice pack, document individual aerosol transport events, and develop a long-term record. Seven source-related aerosol types were identified in this analysis: Asian dust, Saharan dust, industrial combustion, marine with combustion tracers, fresh coarse volcanic tephra, aged volcanic plume with fine tephra and sulfate, and the well-mixed background "Arctic haze". The Saharan dust is a new discovery; the other types are consistent with those reported from previous work using snow pits and intermittent ambient air sampling during intensive study campaigns. Continuous sampling complements the fundamental characterization of Greenland aerosols developed in intensive field programs by providing a year-round record of aerosol size and composition at all temporal scales relevant to ice core analysis, ranging from individual deposition events and seasonal cycles, to a record of inter-annual variability of aerosols from both natural and anthropogenic sources.
Neubauer, Simon; Gunz, Philipp; Weber, Gerhard W; Hublin, Jean-Jacques
2012-04-01
Estimation of endocranial volume in Australopithecus africanus is important in interpreting early hominin brain evolution. However, the number of individuals available for investigation is limited and most of these fossils are, to some degree, incomplete and/or distorted. Uncertainties of the required reconstruction ('missing data uncertainty') and the small sample size ('small sample uncertainty') both potentially bias estimates of the average and within-group variation of endocranial volume in A. africanus. We used CT scans, electronic preparation (segmentation), mirror-imaging and semilandmark-based geometric morphometrics to generate and reconstruct complete endocasts for Sts 5, Sts 60, Sts 71, StW 505, MLD 37/38, and Taung, and measured their endocranial volumes (EV). To get a sense of the reliability of these new EV estimates, we then used simulations based on samples of chimpanzees and humans to: (a) test the accuracy of our approach, (b) assess missing data uncertainty, and (c) appraise small sample uncertainty. Incorporating missing data uncertainty of the five adult individuals, A. africanus was found to have an average adult endocranial volume of 454-461 ml with a standard deviation of 66-75 ml. EV estimates for the juvenile Taung individual range from 402 to 407 ml. Our simulations show that missing data uncertainty is small given the missing portions of the investigated fossils, but that small sample sizes are problematic for estimating species average EV. It is important to take these uncertainties into account when different fossil groups are being compared. PMID:22365336
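The "small sample uncertainty" attached to a species mean estimated from only five adults can be illustrated with a percentile bootstrap of the mean. The volumes below are illustrative stand-ins, since the abstract does not list the per-fossil estimates:

```python
import random
import statistics

def bootstrap_mean_ci(values, n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the mean
    of a small sample."""
    rng = rng or random.Random(0)
    n = len(values)
    means = sorted(
        statistics.fmean(rng.choice(values) for _ in range(n))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Illustrative endocranial volumes (ml) for five adults; NOT the
# published per-fossil values.
ev = [390, 420, 450, 485, 560]
```

With n = 5 the interval is wide, echoing the authors' caution that small samples make species-average estimates unreliable.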
Snyder, Noah P.; Allen, James R.; Dare, Carlin; Hampton, Margaret A.; Schneider, Gary; Wooley, Ryan J.; Alpers, Charles N.; Marvin-DiPasquale, Mark C.
2004-01-01
This report presents sedimentologic data from three 2002 sampling campaigns conducted in Englebright Lake on the Yuba River in northern California. This work was done to assess the properties of the material deposited in the reservoir between completion of Englebright Dam in 1940 and 2002, as part of the Upper Yuba River Studies Program. Included are the results of grain-size-distribution and loss-on-ignition analyses for 561 samples, as well as an error analysis based on replicate pairs of subsamples.
NASA Astrophysics Data System (ADS)
Domalski, E. S.; Churney, K. L.; Ledford, A. E.; Ryan, R. V.; Reilly, M. L.
1982-02-01
A calorimeter to determine the enthalpies of combustion of kilogram-size samples of minimally processed municipal solid waste (MSW) in flowing oxygen near atmospheric pressure is discussed. The organic fraction of 25 gram pellets of highly processed MSW was burned in pure oxygen to CO2 and H2O in a small prototype calorimeter. The carbon content of the ash and the uncertainty in the amount of CO in the combustion products contribute calorimetric errors of 0.1 percent or less to the enthalpy of combustion. Large pellets of relatively unprocessed MSW have been successfully burned in a prototype kilogram-size combustor at a rate of 15 minutes per kilogram with CO/CO2 ratios not greater than 0.1 percent. The design of the kilogram-size calorimeter was completed and construction was begun.
IN SITU NON-INVASIVE SOIL CARBON ANALYSIS: SAMPLE SIZE AND GEOSTATISTICAL CONSIDERATIONS.
WIELOPOLSKI, L.
2005-04-01
I discuss a new approach for quantitative carbon analysis in soil based on INS. Although this INS method is not simple, it offers critical advantages not available with other newly emerging modalities. The key advantages of the INS system include the following: (1) It is a non-destructive method, i.e., no samples of any kind are taken. A neutron generator placed above the ground irradiates the soil, stimulating carbon characteristic gamma-ray emission that is counted by a detection system also placed above the ground. (2) The INS system can undertake multielemental analysis, thus expanding its usefulness. (3) It can be used either in static or scanning modes. (4) The volume sampled by the INS method is large, with a large footprint; when operating in a scanning mode, the sampled volume is continuous. (5) Except for a moderate initial cost of about $100,000 for the system, no additional expenses are required for its operation over two to three years, after which the NG has to be replenished with a new tube at an approximate cost of $10,000, regardless of the number of sites analyzed. In light of these characteristics, the INS system appears invaluable for monitoring changes in the carbon content in the field. For this purpose no calibration is required; by establishing a carbon index, changes in carbon yield can be followed with time in exactly the same location, thus giving a percent change. On the other hand, with calibration, it can be used to determine the carbon stock in the ground, thus estimating the soil's carbon inventory. However, this requires revising the standard practices for deciding upon the number of sites required to attain a given confidence level, in particular for the purposes of upward scaling. Geostatistical considerations should then be incorporated to properly account for the averaging effects of the large volumes sampled by the INS system, which would require revising standard practices in the field for determining the number of spots to be sampled.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R^{2} = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m^{2}), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, a larger sampling size and a higher height threshold were required to obtain accurate corn LAI estimation compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and for the estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085
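Point-density experiments like those described are commonly run by randomly thinning the original point cloud to a series of target densities and re-estimating the parameters at each level. A minimal sketch of the thinning step (function name and parameters are ours, not from the study):

```python
import random

def thin_points(points, target_density, area_m2, rng=None):
    """Randomly thin a LiDAR point list to a target density (points/m^2).

    Returns the full list unchanged if it is already at or below
    the requested density.
    """
    rng = rng or random.Random(0)
    target_n = int(target_density * area_m2)
    if target_n >= len(points):
        return list(points)
    return rng.sample(points, target_n)

# A plot of 1000 m^2 at 7.32 points/m^2, thinned to 2 points/m^2.
cloud = list(range(7320))
thinned = thin_points(cloud, 2.0, 1000.0)
```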
Guilleux, Alice; Blanchin, Myriam; Hardouin, Jean-Benoit; Sébille, Véronique
2014-01-01
Patient-reported outcomes (PRO) have gained importance in clinical and epidemiological research and aim at assessing quality of life, anxiety or fatigue for instance. Item Response Theory (IRT) models are increasingly used to validate and analyse PRO. Such models relate observed variables to a latent variable (unobservable variable) which is commonly assumed to be normally distributed. A priori sample size determination is important to obtain adequately powered studies to determine clinically important changes in PRO. In previous developments, the Raschpower method has been proposed for the determination of the power of the test of group effect for the comparison of PRO in cross-sectional studies with an IRT model, the Rasch model. The objective of this work was to evaluate the robustness of this method (which assumes a normal distribution for the latent variable) to violations of distributional assumption. The statistical power of the test of group effect was estimated by the empirical rejection rate in data sets simulated using a non-normally distributed latent variable. It was compared to the power obtained with the Raschpower method. In both cases, the data were analyzed using a latent regression Rasch model including a binary covariate for group effect. For all situations, both methods gave comparable results whatever the deviations from the model assumptions. Given the results, the Raschpower method seems to be robust to the non-normality of the latent trait for determining the power of the test of group effect. PMID:24427276
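Estimating power as an empirical rejection rate, as the Raschpower evaluation does, carries over to any test: simulate many data sets under the alternative and count how often the test rejects. A minimal sketch using a two-sample z-test on normal data with known unit variance (the study itself simulated Rasch/IRT responses, which this does not reproduce):

```python
import math
import random
from statistics import NormalDist

def empirical_power(n_per_group, effect, n_sim=500, alpha=0.05, rng=None):
    """Power estimated as the empirical rejection rate: the fraction
    of simulated data sets in which a two-sample z-test (known unit
    variance) rejects the null hypothesis of equal group means."""
    rng = rng or random.Random(0)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = math.sqrt(2.0 / n_per_group)
    rejections = 0
    for _ in range(n_sim):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        z = (sum(b) / n_per_group - sum(a) / n_per_group) / se
        rejections += abs(z) > z_crit
    return rejections / n_sim
```

Under the null (effect = 0) the rejection rate should hover near alpha, while a large standardized effect with 50 subjects per group is detected nearly always.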
Paukert, C.P.; Willis, D.W.; Holland, R.S.
2002-01-01
We assessed the precision of visual estimates of vegetation and substrate along transects in 15 shallow, natural Nebraska lakes. Vegetation type (submergent or emergent), vegetation density (sparse, moderate, or dense), and substrate composition (percentage sand, muck, and clay; to the nearest 10%) were estimated at 25-70 sampling sites per lake by two independent observers. Observer agreement for vegetation type was 92%. Agreement ranged from 62.5% to 90.1% for substrate composition. Agreement was also high (72%) for vegetation density estimates. The relatively high agreement between estimates was likely attributable to the homogeneity of the lake habitats. Nearly 90% of the substrate sites were classified as 0% clay, and over 68% as either 0% or 100% sand. When habitats were homogeneous, less than 40 sampling sites per lake were required for 95% confidence that habitat composition was within 10% of the true mean, and over 100 sites were required when habitats were heterogeneous. Our results suggest that relatively high precision is attainable for vegetation and substrate mapping in shallow, natural lakes.
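The criterion above, enough sites for 95% confidence that the estimate is within a given margin of the true mean, corresponds to the standard sample-size formula n = (z s / E)^2. A minimal sketch (ignoring the finite-population correction a bounded lake survey might add):

```python
import math
from statistics import NormalDist

def sites_needed(stdev, margin, confidence=0.95):
    """Sampling sites required for the sample mean to fall within
    `margin` of the true mean at the given confidence:
    n = (z * s / E)**2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * stdev / margin) ** 2)
```

A homogeneous habitat (small standard deviation) needs few sites, while a heterogeneous one needs many, matching the under-40 versus over-100 contrast reported above.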
Size-exclusion chromatography of biological samples which contain extremely alkaline proteins.
Hayakawa, Kou; Guo, Lei; Terentyeva, Elena A; Li, Xiao Kang; Kimura, Hiromitsu; Hirano, Masahiko; Yoshikawa, Kazuyuki; Yoshinaga, Teruo; Nagamine, Takeaki; Katsumata, Noriyuki; Tanaka, Toshiaki
2003-06-30
An improved size-exclusion chromatography (SEC) method was developed to isolate extremely basic (alkaline) proteins, such as trypsin (pI=10.5), lysozyme (pI=11), and histone (pI=10.8). A Develosil 300 Diol-5 column (300 x 8 mm I.D., 30 nm average pore diameter) was used with an eluent of 0.1 M sodium phosphate, 1.5 M sodium chloride, glycerol (40%, v/v), 2-propanol (10%, v/v), and Brij-58 (1%, v/v). Under these conditions, the final apparent pH becomes 4.0, and pH adjustment is not necessary. Column temperature and flow rate were 15 degrees C and 0.2 ml/min, respectively. This elution system is stable and reliable, and applications to human pancreatic juice, human bile, and tissue homogenates were successfully achieved. Since this system is convenient for protein analysis, it is expected to be generally applicable to clinical and biochemical research for identifying protein components in combination with microsequencing. PMID:12834974
Free and combined amino acids in size-segregated atmospheric aerosol samples
NASA Astrophysics Data System (ADS)
Di Filippo, Patrizia; Pomata, Donatella; Riccardi, Carmela; Buiarelli, Francesca; Gallo, Valentina; Quaranta, Alessandro
2014-12-01
Concentrations of free and combined amino acids in an urban atmosphere and their distributions in size-segregated particles were investigated in the cold and warm seasons. In particular, this article provides the first investigation of protein bioaerosol concentrations in the ultrafine fraction (PM0.1) of particulate matter. In addition, the present work provides amino acid and total proteinaceous material concentrations in NIST SRM 1649b, useful as reference values. The reference material was also used to build matrix-matched calibration curves. Free amino acid total content in winter and summer PM0.1 was 48.0 and 94.4 ng m-3, respectively, representing about 0.7 and 7.4% by weight of urban particulate matter in the two seasons. Total airborne protein and peptide concentrations in the same ultrafine fractions were 93.6 and 449.9 ng m-3 in winter and summer, respectively, representing 7.5 and 35.4% w/w of PM0.1 and demonstrating an exceptionally high percentage in the summer ultrafine fraction. The significant potential adverse health effects of ultrafine particulate matter include allergies, mainly caused by protein particles; we estimated that in summer 162 ng h-1 of proteinaceous material, carried by ultrafine particles, can penetrate from the lungs into the bloodstream.
Distribution of human waste samples in relation to sizing waste processing in space
NASA Technical Reports Server (NTRS)
Parker, Dick; Gallagher, S. K.
1992-01-01
Human waste processing for closed ecological life support systems (CELSS) in space requires that there be an accurate knowledge of the quantity of wastes produced. Because initial CELSS will be handling relatively few individuals, it is important to know the variation that exists in the production of wastes rather than relying upon mean values that could result in undersizing equipment for a specific crew. On the other hand, because of the costs of orbiting equipment, it is important to design the equipment with a minimum of excess capacity because of the weight that extra capacity represents. A considerable quantity of information that had been independently gathered on waste production was examined in order to obtain estimates of equipment sizing requirements for handling waste loads from crews of 2 to 20 individuals. The recommended design for a crew of 8 should hold 34.5 liters per day (4315 ml/person/day) for urine and stool water and a little more than 1.25 kg per day (154 g/person/day) of human waste solids and sanitary supplies.
ACE-Asia: Size Resolved Sampling of Aerosols on the Ronald H Brown and US Western Receptor Sites
NASA Astrophysics Data System (ADS)
Jimenez-Cruz, M. P.; Cliff, S. S.; Perry, K. D.; Cahill, T. A.; Bates, T. S.
2001-12-01
The ACE (Aerosol Characterization Experiment)-Asia project was predominantly performed during the spring of 2001. In addition to the core Asian sampling sites, we sampled at 4 western US receptor sites: Mauna Loa Observatory, Hawaii; Crater Lake, Oregon; Adak Island, Alaska; and Rattlesnake Mountain, Washington. A small subset of sites (Rattlesnake Mtn., MLO, and Asian sites) continued during a 6-week intensive summer study. For the spring study, an 8-stage DRUM impactor also sampled aboard the NOAA ship RV Ronald H Brown, and a mix of 8- and 3-DRUM impactors was used at the western US receptor sites. The impactors are capable of size-segregated, time-resolved aerosol collection. The size categories for the 8-DRUM are inlet-5.00, 5.00-2.50, 2.50-1.15, 1.15-0.75, 0.75-0.56, 0.56-0.34, 0.34-0.26, and 0.26-0.09 microns, and for the 3-DRUM: 2.50-1.10, 1.10-0.34, and 0.34-0.12 microns. These samples were analyzed in 6-hour time bites using synchrotron-XRF for quantitative composition for elements sodium through uranium, when present. A major dust event occurring around April 13 was detected at all receptor sites. Comparisons of key elemental ratios and conservative tracers will be presented.
NASA Astrophysics Data System (ADS)
Rinaldi, Antonio
2011-11-01
Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective on the joint effect of D and ρ on the yield stress in any plastic regime, also proposing a schematic graphical map of it. In the sample-size dependent regime, this dependence is cast mathematically into a first-order Weibull-type theory, where the power exponent β of the power-law scaling and the modulus m of an approximate (unimodal) Weibull distribution of source strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power law usually assumed. The paper re-examines our own experimental data, as well as results of Bei et al. (2008) on Mo-alloy pillars, especially to emphasize the significance of a sudden increase in sample response scatter as a warning signal of an incipient source-limited regime.
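In practice the scaling exponent β is extracted as the negative slope of a least-squares fit of log yield stress against log diameter. A minimal sketch on synthetic pillar data (the prefactor and exponent are illustrative, not values from the paper):

```python
import math

def powerlaw_exponent(diameters, stresses):
    """Least-squares slope of log(stress) versus log(D); for
    stress ~ D**(-beta) the returned value is beta."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(s) for s in stresses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic pillars obeying stress = 1000 * D**(-0.6), i.e. beta = 0.6.
ds = [100.0, 200.0, 400.0, 800.0]
ss = [1000.0 * d ** -0.6 for d in ds]
```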
Wind tunnel study of twelve dust samplers by large particle size
NASA Astrophysics Data System (ADS)
Shannak, B.; Corsmeier, U.; Kottmeier, Ch.; Al-azab, T.
2014-12-01
Due to the lack of data on large dust and sand particles, the fluid dynamic characteristics, and hence the collection efficiencies, of twelve different dust samplers have been experimentally investigated. Wind tunnel tests were carried out at wind velocities ranging from 1 up to 5.5 ms-1. As large solid particles of 0.5 and 1 mm in diameter, polystyrene pellets called STYRO beads (polystyrene spheres) were used instead of sand or dust. The results demonstrate that the collection efficiency is relatively acceptable for only eight of the tested samplers, lying between 60 and 80% depending on the wind velocity and particle size. These samplers are: the Cox Sand Catcher (CSC), the British Standard Directional Dust Gauge (BSD), the Big Spring Number Eight (BSNE), the Suspended Sediment Trap (SUSTRA), the Modified Wilson and Cooke (MWAC), the Wedge Dust Flux Gauge (WDFG), the Model Series Number 680 (SIERRA) and the Pollet Catcher (POLCA). They can generally be recommended as suitable dust samplers, albeit with collecting errors of 20 up to 40%. The BSNE showed the best performance, with a catching error of about 20%, and can with caution be selected as a suitable dust sampler. In contrast, the other four tested samplers, the Marble Dust Collector (MDCO), the United States Geological Survey (USGS), the Inverted Frisbee Sampler (IFS) and the Inverted Frisbee Shaped Collecting Bowl (IFSCB), cannot be recommended due to their very low collection efficiencies of 5 up to 40%. In total, the efficiency of a sampler may be below 0.5, depending on the frictional losses (caused by the sampler geometry) in the fluid and the particle's motion, and on the intensity of airflow acceleration near the sampler inlet. The literature data on dust fluxes are therefore defective and insufficient. To avoid false collection data and hence inaccurate mass flux modeling, the geometry of the dust sampler should be considered and further improved.
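The collection efficiencies and catching errors quoted above are ratios of the mass a sampler catches to a reference mass flux, expressed in percent. A minimal sketch of the bookkeeping (names ours):

```python
def collection_efficiency(mass_caught, mass_reference):
    """Sampler collection efficiency as a percentage of the
    reference (total) mass flux."""
    return 100.0 * mass_caught / mass_reference

def catching_error(mass_caught, mass_reference):
    """Shortfall relative to the reference flux, in percent."""
    return 100.0 - collection_efficiency(mass_caught, mass_reference)
```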
Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E., Jr.; Dementyev, Maksim N.; Handel, Colleen M.
2012-01-01
The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.
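The density-to-population arithmetic behind such a line-transect survey can be sketched as follows; the detection count, transect length, and effective strip width (ESW) are invented for illustration, not the study's values:

```python
def density_per_km2(n_detections, line_length_km, esw_km):
    # Conventional line-transect estimator: detections divided by the
    # effectively surveyed area, 2 * L * ESW (both sides of the line).
    return n_detections / (2.0 * line_length_km * esw_km)

def population_estimate(density, area_km2):
    # Scale density up to the full study area.
    return density * area_km2

# Hypothetical values: 120 detections over 100 km of transect with a
# 50 m effective strip width, scaled to a 250 km^2 island.
d = density_per_km2(120, 100.0, 0.05)
print(population_estimate(d, 250.0))
```

A stratified design like the one described would estimate density per stratum and sum the stratum totals rather than applying one island-wide density.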
Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian
2013-01-01
Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collection, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role in lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment. PMID:23894636
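Comparing a fitted model against an identically generated null, as the study does, can be sketched with a toy example: a least-squares line is scored against label-shuffled versions of the same data (all numbers synthetic; the study's actual features and algorithms differ):

```python
import random

random.seed(0)

# Toy stand-in for (cell-line feature -> IC50 response): y depends on x
# plus noise, so a real signal exists for the model to find.
xs = [i / 20.0 for i in range(40)]
ys = [2.0 * x + random.gauss(0.0, 0.1) for x in xs]

def fit_line(x, y):
    # Ordinary least-squares slope and intercept.
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    slope = num / den
    return slope, my - slope * mx

def mse(x, y):
    slope, icept = fit_line(x, y)
    return sum((b - (slope * a + icept)) ** 2 for a, b in zip(x, y)) / len(x)

real_err = mse(xs, ys)

# Identically generated null models: refit after shuffling the labels,
# destroying any feature-response association.
null_errs = []
for _ in range(200):
    shuffled = ys[:]
    random.shuffle(shuffled)
    null_errs.append(mse(xs, shuffled))

# The endpoint counts as "predictable" when the real model beats the null.
p_like = sum(e <= real_err for e in null_errs) / len(null_errs)
print(real_err < min(null_errs), p_like)
```

When the real error falls entirely below the null distribution, the compound's response would be called predictable; compounds whose real error sits inside the null distribution would not.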
Durán Pacheco, Gonzalo; Hattendorf, Jan; Colford, John M; Mäusezahl, Daniel; Smith, Thomas
2009-10-30
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empirical data, i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general, with Bayes-HM producing less dispersed results for random-effects estimates, although these were upwardly biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects arising from accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. PMID:19672840
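A minimal sketch of the simulation setup described (negative binomial counts via a gamma-Poisson mixture, with additional cluster-level heterogeneity) could look like this; all parameter values are invented, not those of the study:

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's inversion method; adequate for the small means used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def nb_count(mean, k):
    # Negative binomial as a gamma-Poisson mixture: rate drawn from
    # Gamma(shape=k, scale=mean/k) gives variance mean + mean^2 / k.
    return poisson(random.gammavariate(k, mean / k))

def simulate_arm(n_clusters, cluster_size, base_mean, k_cluster, k_indiv):
    # Cluster-level gamma heterogeneity induces within-cluster
    # correlation, mimicking counts from one arm of a CRT.
    rates = []
    for _ in range(n_clusters):
        cluster_mean = random.gammavariate(k_cluster, base_mean / k_cluster)
        counts = [nb_count(cluster_mean, k_indiv) for _ in range(cluster_size)]
        rates.append(sum(counts) / cluster_size)
    return rates

control = simulate_arm(10, 50, 2.0, 4.0, 3.0)
print(len(control), round(sum(control) / len(control), 2))
```

The cluster-level rates returned here are exactly what method (i), the two-sample t-test of cluster-level rates, would be applied to across the two arms.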
NASA Astrophysics Data System (ADS)
Mirante, Fátima; Alves, Célia; Pio, Casimiro; Pindado, Oscar; Perez, Rosa; Revuelta, M.^{a.} Aranzazu; Artiñano, Begoña
2013-10-01
Madrid, the largest city in Spain, has some unique air pollution problems, such as emissions from residential coal burning, a huge vehicle fleet and frequent African dust outbreaks, together with a lack of industrial emissions. The chemical composition of particulate matter (PM) was studied during summer and winter sampling campaigns, conducted in order to obtain size-segregated information at two different urban sites (roadside and urban background). PM was sampled with high volume cascade impactors with 4 stages: 10-2.5, 2.5-1, 1-0.5 and < 0.5 μm. Samples were solvent extracted, and organic compounds were identified and quantified by GC-MS. Alkanes, polycyclic aromatic hydrocarbons (PAHs), alcohols and fatty acids were chromatographically resolved. PM1-2.5 was the fraction with the highest mass percentage of organics. Acids were the organic compounds that dominated all particle size fractions. Different organic compounds presented apparently different seasonal characteristics, reflecting distinct emission sources such as vehicle exhaust and biogenic sources. The benzo[a]pyrene equivalent concentrations were lower than 1 ng m-3. The estimated carcinogenic risk is low.
Dahlin, Jakob; Spanne, Mårten; Karlsson, Daniel; Dalene, Marianne; Skarping, Gunnar
2008-07-01
Isocyanates in the workplace atmosphere are typically present both in gas and particle phase. The health effects of exposure to isocyanates in gas phase and different particle size fractions are likely to be different due to their ability to reach different parts of the respiratory system. To reveal more details regarding the exposure to isocyanate aerosols, a denuder-impactor (DI) sampler for airborne isocyanates was designed. The sampler consists of a channel-plate denuder for collection of gaseous isocyanates, in series with three cascade impactor stages with cut-off diameters (d50) of 2.5, 1.0 and 0.5 μm. An end filter was connected in series after the impactor for collection of particles smaller than 0.5 μm. The denuder, impactor plates and the end filter were impregnated with a mixture of di-n-butylamine (DBA) and acetic acid for derivatization of the isocyanates. During sampling, the reagent on the impactor plates and the end filter is continuously refreshed, due to the DBA release from the impregnated denuder plates. This secures efficient derivatization of all isocyanate particles. The airflow through the sampler was 5 l min-1. After sampling, the samples containing the different size fractions were analyzed using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The DBA impregnation was stable in the sampler for at least 1 week. After sampling, the DBA derivatives were stable for at least 3 weeks. Air sampling was performed in a test chamber (300 l). Isocyanate aerosols studied were thermal degradation products of different polyurethane polymers, spraying of isocyanate coating compounds and pure gas-phase isocyanates. Sampling with impinger flasks, containing DBA in toluene, with a glass fiber filter in series was used as a reference method. The DI sampler showed good compliance with the reference method regarding total air levels. For the different aerosols studied, vast differences were revealed in the distribution of isocyanate in gas and
Churchill, Nathan W; Yourganov, Grigori; Strother, Stephen C
2014-09-01
In recent years, a variety of multivariate classifier models have been applied to fMRI, with different modeling assumptions. When classifying high-dimensional fMRI data, we must also regularize to improve model stability, and the interactions between classifier and regularization techniques are still being investigated. Classifiers are usually compared on large, multisubject fMRI datasets. However, it is unclear how classifier/regularizer models perform for within-subject analyses, as a function of signal strength and sample size. We compare four standard classifiers: Linear and Quadratic Discriminants, Logistic Regression and Support Vector Machines. Classification was performed on data in the linear kernel (covariance) feature space, and classifiers are tuned with four commonly-used regularizers: Principal Component and Independent Component Analysis, and penalization of kernel features using L₁ and L₂ norms. We evaluated prediction accuracy (P) and spatial reproducibility (R) of all classifier/regularizer combinations on single-subject analyses, over a range of three different block task contrasts and sample sizes for a BOLD fMRI experiment. We show that the classifier model has a small impact on signal detection, compared to the choice of regularizer. PCA maximizes reproducibility and global SNR, whereas Lp-norms tend to maximize prediction. ICA produces low reproducibility, and prediction accuracy is classifier-dependent. However, trade-offs in (P,R) depend partly on the optimization criterion, and PCA-based models are able to explore the widest range of (P,R) values. These trends are consistent across task contrasts and data sizes (training samples range from 6 to 96 scans). In addition, the trends in classifier performance are consistent for ROI-based classifier analyses. PMID:24639383
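One common way to score the spatial reproducibility (R) referred to above is the similarity between weight maps estimated on independent split-halves of the data; a sketch using Pearson correlation of two invented voxel-weight maps (the study's exact metric and pipeline are more involved):

```python
def pearson(a, b):
    # Pearson correlation between two equal-length weight maps.
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Hypothetical voxel weights from a classifier trained on each
# split-half of one subject's scans (values invented).
map1 = [0.2, 0.5, 0.9, 0.1, 0.7]
map2 = [0.25, 0.45, 0.85, 0.15, 0.65]
print(round(pearson(map1, map2), 3))
```

A reproducibility near 1 means both half-datasets point at the same spatial pattern; values near 0 indicate the maps are dominated by noise.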
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F.-M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-07-01
The principles of the Polarization and Directionality of the Earth's Reflectance (POLDER) cloud droplet size retrieval require that clouds are horizontally homogeneous. Nevertheless, the retrieval is applied by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using the POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval, and then analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-scale variability in droplet effective radius (CDR) can mislead both the CDR and effective variance (EV) retrievals. Nevertheless, the sub-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval is accurate using limited observations and is largely independent of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, the measurements in the primary rainbow region (137-145°) are used to ensure accurate large droplet (> 15 μm) retrievals and reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data for June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets. The reason is that the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Lastly, a sub-scale retrieval case is analyzed, illustrating that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size parameters from POLDER measurements.
NASA Astrophysics Data System (ADS)
Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.
2015-11-01
The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~ 1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method using the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
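For context, the general line-transect density estimator such work builds on has the form D = n f(0) / (2L), where f(0) is the detection-distance density evaluated at zero distance; a sketch assuming a half-normal detection function (not the log-linear model of this paper), with invented perpendicular distances:

```python
import math

def halfnormal_f0(distances):
    # For a half-normal detection function g(x) = exp(-x^2 / (2 s^2)),
    # the MLE of s^2 is mean(x^2), and f(0) = 1 / (s * sqrt(pi / 2)).
    s2 = sum(d * d for d in distances) / len(distances)
    return 1.0 / math.sqrt(s2 * math.pi / 2.0)

def density(distances, total_line_length):
    # Classic line-transect estimator: D = n * f(0) / (2 L).
    return len(distances) * halfnormal_f0(distances) / (2.0 * total_line_length)

# Hypothetical perpendicular detection distances (m) along 1000 m of line.
dists = [10.0, 20.0, 30.0, 15.0, 25.0]
print(density(dists, 1000.0))
```

A log-linear model, as in the paper, replaces the half-normal form with a different parametric family for the detection function, but the density formula is the same.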
Sillett, Scott T.; Chandler, Richard B.; Royle, J. Andrew; Kéry, Marc; Morrison, Scott A.
2012-01-01
Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural
Sillett, T Scott; Chandler, Richard B; Royle, J Andrew; Kery, Marc; Morrison, Scott A
2012-10-01
Population size and habitat-specific abundance estimates are essential for conservation management. A major impediment to obtaining such estimates is that few statistical models are able to simultaneously account for both spatial variation in abundance and heterogeneity in detection probability, and still be amenable to large-scale applications. The hierarchical distance-sampling model of J. A. Royle, D. K. Dawson, and S. Bates provides a practical solution. Here, we extend this model to estimate habitat-specific abundance and rangewide population size of a bird species of management concern, the Island Scrub-Jay (Aphelocoma insularis), which occurs solely on Santa Cruz Island, California, USA. We surveyed 307 randomly selected, 300 m diameter, point locations throughout the 250-km2 island during October 2008 and April 2009. Population size was estimated to be 2267 (95% CI 1613-3007) and 1705 (1212-2369) during the fall and spring respectively, considerably lower than a previously published but statistically problematic estimate of 12 500. This large discrepancy emphasizes the importance of proper survey design and analysis for obtaining reliable information for management decisions. Jays were most abundant in low-elevation chaparral habitat; the detection function depended primarily on the percent cover of chaparral and forest within count circles. Vegetation change on the island has been dramatic in recent decades, due to release from herbivory following the eradication of feral sheep (Ovis aries) from the majority of the island in the mid-1980s. We applied best-fit fall and spring models of habitat-specific jay abundance to a vegetation map from 1985, and estimated the population size of A. insularis was 1400-1500 at that time. The 20-30% increase in the jay population suggests that the species has benefited from the recovery of native vegetation since sheep removal. Nevertheless, this jay's tiny range and small population size make it vulnerable to natural
Hall, William L; Ramsey, Charles; Falls, J Harold
2014-01-01
Bulk blending of dry fertilizers is a common practice in the United States and around the world. This practice involves the mixing (either physically or volumetrically) of concentrated, high analysis raw materials. Blending is followed by bagging (for small volume application such as lawn and garden products), loading into truck transports, and spreading. The great majority of bulk blended products are not bagged but handled in bulk and transferred from the blender to a holding hopper. The product is then transferred to a transport vehicle, which may, or may not, also be a spreader. If the primary transport vehicle is not a spreader, then there is another transfer at the user site to a spreader for application. Segregation of materials that are mismatched due to size, density, or shape is an issue when attempting to effectively sample or evenly spread bulk blended products. This study, prepared in coordination with and supported by the Florida Department of Agriculture and Consumer Services and the Florida Fertilizer and Agrochemical Association, looks at the impact of varying particle size as it relates to blending, sampling, and application of bulk blends. The study addresses blends containing high ratios of N-P-K materials and varying (often small) quantities of the micronutrient Zn. PMID:25051620
NASA Astrophysics Data System (ADS)
O'Brien, R. E.; Laskin, A.; Laskin, J.; Weber, R.; Goldstein, A. H.
2011-12-01
This project focuses on analyzing the identities of molecules that comprise oligomers in size resolved aerosol fractions. Since oligomers are generally too large and polar to be measured by typical GC/MS analysis, soft ionization with high resolution mass spectrometry is used to extend the range of observable compounds. Samples collected with a microorifice uniform deposition impactor (MOUDI) during CALNEX Bakersfield in June 2010 have been analyzed with nanospray desorption electrospray ionization (nano-DESI) and an Orbitrap mass spectrometer. The nano-DESI is a soft ionization technique that allows molecular ions to be observed and the Orbitrap has sufficient resolution to determine the elemental composition of almost all species above the detection limit. A large fraction of SOA is made up of high molecular weight oligomers which are thought to form through acid catalyzed reactions of photo-chemically processed volatile organic compounds (VOC). The formation of oligomers must be influenced by the VOCs available, the amount of atmospheric sulfate and nitrate, and the magnitude of photo-chemical processing, among other potential influences. We present the elemental composition of chemical species in SOA in the 0.18 to 0.32 micron size range, providing the first multi-day data set for the study of these oligomers in atmospheric samples. Possible formation pathways and sources of observed compounds will be examined by comparison to other concurrent measurements at the site.
Reboussin, Beth A; Preisser, John S; Song, Eun-Young; Wolfson, Mark
2012-07-01
Under-age drinking is an enormous public health issue in the USA. Evidence that community level structures may impact on under-age drinking has led to a proliferation of efforts to change the environment surrounding the use of alcohol. Although the focus of these efforts is to reduce drinking by individual youths, environmental interventions are typically implemented at the community level with entire communities randomized to the same intervention condition. A distinct feature of these trials is the tendency of the behaviours of individuals residing in the same community to be more alike than that of others residing in different communities, which is herein called 'clustering'. Statistical analyses and sample size calculations must account for this clustering to avoid type I errors and to ensure an appropriately powered trial. Clustering itself may also be of scientific interest. We consider the alternating logistic regressions procedure within the population-averaged modelling framework to estimate the effect of a law enforcement intervention on the prevalence of under-age drinking behaviours while modelling the clustering at multiple levels, e.g. within communities and within neighbourhoods nested within communities, by using pairwise odds ratios. We then derive sample size formulae for estimating intervention effects when planning a post-test-only or repeated cross-sectional community-randomized trial using the alternating logistic regressions procedure. PMID:24347839
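The paper derives sample size formulae specific to the alternating logistic regressions procedure; as a baseline, community-randomized trial planning commonly inflates an individually randomized sample size by the design effect, which can be sketched as follows (all numbers hypothetical):

```python
def cluster_trial_total_n(n_independent, cluster_size, icc):
    # Standard variance inflation ("design effect") for equal clusters:
    # deff = 1 + (m - 1) * ICC. In practice the result is rounded up
    # to a whole number of subjects (and clusters).
    deff = 1.0 + (cluster_size - 1) * icc
    return n_independent * deff

# 400 subjects suffice under individual randomization; with communities
# of 50 and an intracluster correlation of 0.02, nearly twice as many
# subjects are needed.
print(cluster_trial_total_n(400, 50, 0.02))
```

Even a small intracluster correlation inflates the required sample substantially when clusters are large, which is why the clustering described in the abstract must be accounted for at the design stage.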
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-01-01
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350
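The abstract does not spell out the tailored 0-1 knapsack, but the textbook dynamic program it adapts selects base classifiers by maximizing total value under a budget; a sketch with invented accuracy values and redundancy costs:

```python
def knapsack(values, weights, capacity):
    # Textbook 0-1 knapsack DP over integer weights:
    # best[c] = max total value achievable with weight budget c.
    best = [0.0] * (capacity + 1)
    chosen = [[] for _ in range(capacity + 1)]
    for i, (v, w) in enumerate(zip(values, weights)):
        # Iterate budgets downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            if best[c - w] + v > best[c]:
                best[c] = best[c - w] + v
                chosen[c] = chosen[c - w] + [i]
    return best[capacity], chosen[capacity]

# Hypothetical base classifiers: value = accuracy, weight = a
# redundancy cost penalizing classifiers that overlap with others.
acc = [0.70, 0.65, 0.72, 0.60]
cost = [3, 2, 4, 1]
print(knapsack(acc, cost, 5))
```

In the paper's setting the value and weight terms would encode the accuracy/diversity trade-off; here they are simply illustrative numbers.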
Garino, Terry J.
2007-09-01
The sintering behavior of Sandia chem-prep high field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy, and X-ray diffraction. A thorough literature review of phase behavior, sintering and microstructure in Bi2O3-ZnO varistor systems is included. The effects of Bi2O3 content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics were determined between 650 and 825 °C. At ≥750 °C, samples with ≥0.41 mol% Bi2O3 have very similar densification kinetics, whereas samples with ≤0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at ~700 °C for the standard 0.56 mol% Bi2O3 composition and was greater in samples with 0.30 mol% Bi2O3 than in those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface area to volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low-oxygen internal atmosphere that develops, whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable β-Bi2O3 by 400 °C. At ~650 °C, coincident with the onset of densification, the cubic binary phase Bi38ZnO58 forms and remains stable to >800 °C, indicating that a eutectic liquid does not form during normal varistor sintering (~730 °C). Finally, the formation and morphology of the bismuth oxide phase regions that form on the varistor surfaces during slow cooling were studied.
Is a vegetarian diet adequate for children?
Hackett, A; Nathan, I; Burgess, L
1998-01-01
The number of people who avoid eating meat is growing, especially among young people. Benefits to health from a vegetarian diet have been reported in adults, but it is not clear to what extent these benefits are due to diet or to other aspects of lifestyle. In children, concern has been expressed about the adequacy of vegetarian diets, especially with regard to growth. The risks/benefits seem to be related to the degree of restriction of the diet; anaemia is probably both the main and the most serious risk, but this also applies to omnivores. Vegan diets are more likely to be associated with malnutrition, especially if the diets are the result of authoritarian dogma. Overall, lacto-ovo-vegetarian children consume diets closer to recommendations than omnivores, and their pre-pubertal growth is at least as good. The simplest strategy when becoming vegetarian may involve reliance on vegetarian convenience foods, which are not necessarily superior in nutritional composition. The vegetarian sector of the food industry could do more to produce foods closer to recommendations. Vegetarian diets can be, but are not necessarily, adequate for children, provided vigilance is maintained, particularly to ensure variety. Identical comments apply to omnivorous diets. Three threats to the diet of children are too much reliance on convenience foods, lack of variety and lack of exercise. PMID:9670174
Durney, Brandon C; Bachert, Beth A; Sloane, Hillary S; Lukomski, Slawomir; Landers, James P; Holland, Lisa A
2015-06-23
Phospholipid additives are a cost-effective medium to separate deoxyribonucleic acid (DNA) fragments and possess a thermally-responsive viscosity. This provides a mechanism to easily create and replace a highly viscous nanogel in a narrow bore capillary with only a 10°C change in temperature. Preparations composed of dimyristoyl-sn-glycero-3-phosphocholine (DMPC) and 1,2-dihexanoyl-sn-glycero-3-phosphocholine (DHPC) self-assemble, forming structures such as nanodisks and wormlike micelles. Factors that influence the morphology of a particular DMPC-DHPC preparation include the concentration of lipid in solution, the temperature, and the ratio of DMPC and DHPC. It has previously been established that an aqueous solution containing 10% phospholipid with a ratio of [DMPC]/[DHPC]=2.5 separates DNA fragments with nearly single base resolution for DNA fragments up to 500 base pairs in length, but beyond this size the resolution decreases dramatically. A new DMPC-DHPC medium is developed to effectively separate and size DNA fragments up to 1500 base pairs by decreasing the total lipid concentration to 2.5%. A 2.5% phospholipid nanogel generates a resolution of 1% of the DNA fragment size up to 1500 base pairs. This increase in the upper size limit is accomplished using commercially available phospholipids at an even lower material cost than is achieved with the 10% preparation. The separation additive is used to evaluate size markers ranging between 200 and 1500 base pairs in order to distinguish invasive strains of Streptococcus pyogenes and Aspergillus species by harnessing differences in gene sequences of collagen-like proteins in these organisms. For the first time, a reversible stacking gel is integrated in a capillary sieving separation by utilizing the thermally-responsive viscosity of these self-assembled phospholipid preparations. A discontinuous matrix is created that is composed of a cartridge of highly viscous phospholipid assimilated into a separation matrix
NASA Astrophysics Data System (ADS)
Kerminen, Veli-Matti; Hillamo, Risto; Teinilä, Kimmo; Pakkanen, Tuomo; Allegrini, Ivo; Sparapani, Roberto
A large set of size-resolved aerosol samples was inspected with regard to their ion balance to shed light on how aerosol acidity changes with particle size in the lower troposphere and what implications this might have for the atmospheric processing of aerosols. Quite different behaviour could be observed between the remote and the more polluted environments. At the remote sites, practically the whole accumulation mode had cation-to-anion ratios clearly below unity, indicating that these particles were quite acidic. The supermicron size range was considerably less acidic and may in some cases have been close to neutral or even alkaline. An interesting feature common to the remote sites was a clear jump in the cation-to-anion ratio when going from the accumulation to the Aitken mode. The most likely reason for this was cloud processing which, via in-cloud sulphate production, makes the smallest accumulation-mode particles more acidic than the non-activated Aitken-mode particles. A direct consequence of the less acidic nature of the Aitken mode is that it can take up semi-volatile, water-soluble gases much more easily than the accumulation mode. This feature may have significant implications for atmospheric cloud condensation nuclei production in remote environments. In rural and urban locations, the cation-to-anion ratio was close to unity over most of the accumulation mode, but increased significantly when going to either larger or smaller particle sizes. The high cation-to-anion ratios in the supermicron size range were ascribed to carbonate associated with mineral dust. The ubiquitous presence of carbonate in these particles indicates that they were neutral or alkaline, making them good sites for heterogeneous reactions involving acidic trace gases. The high cation-to-anion ratios in the Aitken mode suggest that these particles contained some water-soluble anions not detected by our chemical analysis. This is worth keeping in mind when investigating the hygroscopic
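The cation-to-anion ratio used throughout is an equivalent (charge-weighted) balance of the measured ions; a minimal sketch with invented concentrations for one size fraction:

```python
# Charge numbers for the major ions typically quantified in such work.
CHARGE = {"Na+": 1, "NH4+": 1, "K+": 1, "Ca2+": 2, "Mg2+": 2,
          "Cl-": 1, "NO3-": 1, "SO42-": 2}

def cation_anion_ratio(conc_nmol_m3):
    # Equivalent balance: molar concentration times ion charge,
    # summed separately over cations and anions.
    cations = sum(c * CHARGE[i] for i, c in conc_nmol_m3.items()
                  if i.endswith("+"))
    anions = sum(c * CHARGE[i] for i, c in conc_nmol_m3.items()
                 if i.endswith("-"))
    return cations / anions

# Hypothetical accumulation-mode-like fraction (nmol m^-3, invented).
frac = {"NH4+": 10.0, "Na+": 2.0, "SO42-": 8.0, "NO3-": 3.0}
print(round(cation_anion_ratio(frac), 2))  # below 1: an acidic fraction
```

A ratio below unity, as here, signals an anion (acid) excess of the kind the abstract reports for remote accumulation-mode particles; ratios above unity suggest alkaline components such as carbonate, which this analysis would not detect directly.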
NASA Technical Reports Server (NTRS)
Eglinton, G.; Gowar, A. P.; Jull, A. J. T.; Pillinger, C. T.; Agrell, S. O.; Agrell, J. E.; Long, J. V. P.; Bowie, S. H. U.; Simpson, P. R.; Beckinsale, R. D.
1977-01-01
Samples of Luna 16 and 20 have been separated according to size, visual appearance, density, and magnetic susceptibility. Selected aliquots were examined in eight British laboratories. The studies included mineralogy and petrology, selenochronology, magnetic characteristics, Mossbauer spectroscopy, oxygen isotope ratio determinations, cosmic ray track and thermoluminescence investigations, and carbon chemistry measurements. Luna 16 and 20 are typically mare and highland soils, comparing well with their Apollo counterparts, Apollo 11 and 16, respectively. Both soils are very mature (high free iron, carbide, and methane and cosmogenic Ar), while Luna 16, but not Luna 20, is characterized by a high content of glassy materials. An aliquot of anorthosite fragments, handpicked from Luna 20, had a gas retention age of about 4.3 plus or minus 0.1 Gy.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Odell, P. L.
1973-01-01
A multichannel scanning device may fail to observe objects because of obstructions blocking the view, or different categories of objects may make up a resolution element giving rise to a single observation. Ground truth will be required on any such categories of objects in order to estimate their expected proportions associated with various classes represented in the remote sensing data. Considering the classes to be distributed as multivariate normal with different mean vectors and common covariance, maximum likelihood estimates are given for the expected proportions of objects associated with different classes, using the Bayes procedure for classification of individuals obtained from these classes. An approximate solution for simultaneous confidence intervals on these proportions is given, and thereby the sample size needed to achieve a desired accuracy for the estimates is determined.
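The paper's exact simultaneous-interval solution is not reproduced in the abstract, but the final step — choosing a sample size so that confidence intervals on several class proportions all reach a desired half-width — can be illustrated with a generic normal approximation. This is a sketch, not the authors' method: it splits the familywise confidence level across classes with a Bonferroni adjustment, which is an assumption made here for simplicity.

```python
from math import ceil
from statistics import NormalDist

def n_for_proportions(p_guess, margin, k_classes=1, conf=0.95):
    """Smallest n such that each of k simultaneous normal-approximation
    confidence intervals on class proportions has half-width <= margin.
    The familywise error is split across classes (Bonferroni)."""
    alpha = (1 - conf) / k_classes          # Bonferroni adjustment
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil(z ** 2 * p_guess * (1 - p_guess) / margin ** 2)

# worst-case proportion 0.5, +/-5% half-width, 3 classes at once
n = n_for_proportions(0.5, 0.05, k_classes=3)  # 574
```

Using the worst-case guess p = 0.5 makes the answer conservative for any true proportion.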
NASA Technical Reports Server (NTRS)
Hughes, William O.; McNelis, Anne M.
2010-01-01
The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level, thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
NASA Technical Reports Server (NTRS)
Parry, Edward P.; Hern, Don H.
1971-01-01
A technique for determining lead with a detection limit down to a nanogram on limited size samples is described. The technique is an electrochemical one and involves pre-concentration of the metal species in a mercury drop. Although the emphasis in this paper is on the determination of lead, many metal ion species which are reducible to the metal at an electrode are equally determinable. A technique called pulse polarography is proposed to determine the metals in the drop and this technique is discussed and is compared with other techniques. Other approaches for determination of lead are also compared. Some data are also reported for the lead content of Ventura County particulates. The characterization of lead species by solubility parameters is discussed.
Tan, Ming; Fang, Hong-Bin; Tian, Guo-Liang; Houghton, Peter J
2003-07-15
In anticancer drug development, the combined use of two drugs is an important strategy to achieve greater therapeutic success. Often combination studies are performed in animal (mostly mice) models before clinical trials are conducted. These experiments on mice are costly, especially with combination studies. However, experimental designs and sample size derivations for the joint action of drugs are not currently available except for a few cases where strong model assumptions are made. For example, Abdelbasit and Plackett proposed an optimal design assuming that the dose-response relationship follows some specified linear models. Tallarida et al. derived a design by fixing the mixture ratio and used a t-test to detect the simple similar action. The issue is that in reality we usually do not have enough information on the joint action of the two compounds before the experiment; indeed, understanding their joint action is exactly the goal of the study. In this paper, we first propose a novel non-parametric model that does not impose such strong assumptions on the joint action. We then propose an experimental design for the joint action using uniform measure in this non-parametric model. This design is optimal in the sense that it reduces the variability in modelling synergy while allocating the doses to minimize the number of experimental units and to extract maximum information on the joint action of the compounds. Based on this design, we propose a robust F-test to detect departures from the simple similar action of two compounds and a method to determine sample sizes that are economically feasible. We illustrate the method with a study of the joint action of two new anticancer agents: temozolomide and irinotecan. PMID:12820275
Mélachio, Tanekou Tito Trésor; Njiokou, Flobert; Ravel, Sophie; Simo, Gustave; Solano, Philippe; De Meeûs, Thierry
2015-07-01
Human and animal trypanosomiases are two major constraints to development in Africa. These diseases are mainly transmitted by tsetse flies in particular by Glossina palpalis palpalis in Western and Central Africa. To set up an effective vector control campaign, prior population genetics studies have proved useful. Previous studies on population genetics of G. p. palpalis using microsatellite loci showed high heterozygote deficits, as compared to Hardy-Weinberg expectations, mainly explained by the presence of null alleles and/or the mixing of individuals belonging to several reproductive units (Wahlund effect). In this study we implemented a system of trapping, consisting of a central trap and two to four satellite traps around the central one to evaluate a possible role of the Wahlund effect in tsetse flies from three Cameroon human and animal African trypanosomiases foci (Campo, Bipindi and Fontem). We also estimated effective population sizes and dispersal. No difference was observed between the values of allelic richness, genetic diversity and Wright's FIS, in the samples from central and from satellite traps, suggesting an absence of Wahlund effect. Partitioning of the samples with Bayesian methods showed numerous clusters of 2-3 individuals as expected from a population at demographic equilibrium with two expected offspring per reproducing female. As previously shown, null alleles appeared as the most probable factor inducing these heterozygote deficits in these populations. Effective population sizes varied from 80 to 450 individuals while immigration rates were between 0.05 and 0.43, showing substantial genetic exchanges between different villages within a focus. These results suggest that the "suppression" with establishment of physical barriers may be the best strategy for a vector control campaign in this forest context. PMID:25917495
2014-01-01
Background: The identification of polymorphisms and/or genes responsible for an organism's radiosensitivity increases the knowledge about the cell cycle and the mechanism of the phenomena themselves, possibly providing the researchers with a better understanding of the process of carcinogenesis. Aim: The aim of the study was to develop a data analysis strategy capable of discovering the genetic background of radiosensitivity in the case of small sample size studies. Results: Among the many indirect measures of radiosensitivity known, the level of radiation-induced chromosomal aberrations was used in the study. Mathematical modelling allowed the transformation of the yield-time curve of radiation-induced chromosomal aberrations into an exponential curve with a limited number of parameters, while Gaussian mixture models applied to the distributions of these parameters provided the criteria for mouse strain classification. A detailed comparative analysis of genotypes between the obtained subpopulations of mice followed by functional validation provided a set of candidate polymorphisms that might be related to radiosensitivity. Among 1857 candidate relevant SNPs, which cluster in 28 genes, eight SNPs were found to be nonsynonymous (nsSNPs) with effects on protein function. Two of them, rs48840878 (gene Msh3) and rs5144199 (gene Cc2d2a), were predicted to have an increased probability of a deleterious effect. Additionally, rs48840878 is predicted to disrupt phosphorylation by 14 protein kinases (PKs). In silico analysis of the candidate relevant SNP similarity score distribution among 60 CGD mouse strains allowed for the identification of the SEA/GnJ and ZALENDE/EiJ mouse strains (95.26% and 86.53% genetic consistency, respectively) as the most similar to the radiosensitive subpopulation. Conclusions: A complete step-by-step strategy for seeking the genetic signature of radiosensitivity in the case of small sample size studies conducted on mouse models was proposed. It is shown that the strategy, which is a combination of
NASA Astrophysics Data System (ADS)
Johari, G. P.; Khouri, J.
2013-03-01
Certain distributions of relaxation times can be described in terms of a non-exponential response parameter, β, of value between 0 and 1. Both β and the relaxation time, τ0, of a material depend upon the probe used for studying its dynamics and the value of β is qualitatively related to the non-Arrhenius variation of viscosity and τ0. A solute adds to the diversity of an intermolecular environment and is therefore expected to reduce β, i.e., to increase the distribution and to change τ0. We argue that the calorimetric value βcal determined from the specific heat [Cp = T(dS/dT)p] data is a more appropriate measure of the distribution of relaxation times arising from configurational fluctuations than β determined from other properties, and report a study of βcal of two sets of binary mixtures, each containing a different molecule of ˜2 nm size. We find that βcal changes monotonically with the composition, i.e., solute molecules modify the nano-scale composition and may increase or decrease τ0, but do not always decrease βcal. (Plots of βcal against the composition do not show a minimum.) We also analyze the data from the literature, and find that (i) βcal of an orientationally disordered crystal is less than that of its liquid, (ii) βcal varies with the isomer's nature, and chiral centers in a molecule decrease βcal, and (iii) βcal decreases when a sample's thickness is decreased to the nm-scale. After examining the difference between βcal and β determined from other properties we discuss the consequences of our findings for theories of non-exponential response, and suggest that studies of βcal may be more revealing of structure-freezing than studies of the non-Arrhenius behavior. On the basis of previous reports that β → 1 for dielectric relaxation of liquids of centiPoise viscosity observed at GHz frequencies, we argue that its molecular mechanism is the same as that of the Johari-Goldstein (JG) relaxation. Its spectrum becomes broader on
Landguth, Erin L.; Fedy, Bradley C.; Oyler-McCance, Sara J.; Garey, Andrew L.; Emel, Sarah L.; Mumma, Matthew; Wagner, Helene H.; Fortin, Marie-Josée; Cushman, Samuel A.
2012-01-01
The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals.
Forcino, Frank L.; Leighton, Lindsey R.; Twerdy, Pamela; Cahill, James F.
2015-01-01
Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification are substantially more resource-consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment. PMID:26058066
Are shear force methods adequately reported?
Holman, Benjamin W B; Fowler, Stephanie M; Hopkins, David L
2016-09-01
This study aimed to determine the detail to which shear force (SF) protocols and methods have been reported in the scientific literature between 2009 and 2015. Articles (n=734) published in peer-reviewed animal and food science journals and limited to only those testing the SF of unprocessed and non-fabricated mammal meats were evaluated. It was found that most of these SF articles originated in Europe (35.3%), investigated bovine species (49.0%), measured m. longissimus samples (55.2%), and used tenderometers manufactured by Instron (31.2%) and equipped with Warner-Bratzler blades (68.8%). SF samples were also predominantly thawed prior to cooking (37.1%) and cooked sous vide, using a water bath (50.5%). Information pertaining to blade crosshead speed (47.5%), recorded SF resistance (56.7%), muscle fibre orientation when tested (49.2%), sub-section or core dimension (21.8%), end-point temperature (29.3%), and other factors contributing to SF variation was often omitted. This failure to report basic methodology diminishes repeatability and accurate SF interpretation, and must therefore be rectified. PMID:27107727
7 CFR 57.350 - Procedures for selecting appeal samples.
Code of Federal Regulations, 2010 CFR
2010-01-01
... original samples are not available or have been altered, such as removing the undergrades, the sample size shall be double the number of samples required in 7 CFR 56.4. ... maintained under adequate refrigeration when applicable. (b) The appeal sample shall consist of product...
Grain size is a physical measurement commonly made in the analysis of many benthic systems. Grain size influences benthic community composition, can influence contaminant loading and can indicate the energy regime of a system. We have recently investigated the relationship betw...
The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS
NASA Astrophysics Data System (ADS)
Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin
A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by the experiment. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts which may be identical by state (IBS). Here we estimate for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype regions (tracts) that are identical by descent (IBD), at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution of the sample. We estimate in this paper that for a population like the US, with about 1% of the people genotyped (approximately 2 million), tracts about 200 SNPs long are shared IBD between pairs of individuals with high probability, which assures the success of the Clark phasing method. We show on simulated data that the algorithm will get an almost perfect solution if the number of individuals being SNP arrayed is large enough, and that the accuracy of the algorithm grows with the number of individuals being genotyped.
Hardouin, Jean-Benoit; Blanchin, Myriam; Feddag, Mohand-Larbi; Le Néel, Tanguy; Perrot, Bastien; Sébille, Véronique
2015-07-20
The analysis of patient-reported outcomes or other psychological traits can be realized using the Rasch measurement model. When the objective of a study is to compare groups of individuals, it is important, before the study, to define a sample size such that the group comparison test will attain a given power. The Raschpower procedure (RP) allows doing so with dichotomous items. Here, the RP is extended to polytomous items. Several computational issues were identified, and adaptations have been proposed. The performance of this new version of RP is assessed using simulations. This adaptation of RP allows obtaining a good estimate of the expected power of a test to compare groups of patients in a large number of practical situations. A Stata module, as well as its implementation online, is proposed to perform the RP. Two versions of the RP for polytomous items are proposed (deterministic and stochastic versions). These two versions produce similar results in all of the tested cases. We recommend the use of the deterministic version when the measure is obtained using small questionnaires or items with a small number of response categories, and the stochastic version otherwise, so as to optimize computing time. PMID:25787270
Power and Sample Size Calculation for Log-rank Test with a Time Lag in Treatment Effect
Zhang, Daowen; Quan, Hui
2009-01-01
Summary The log-rank test is the most powerful nonparametric test for detecting a proportional hazards alternative and thus is the most commonly used testing procedure for comparing time-to-event distributions between different treatments in clinical trials. When the log-rank test is used for the primary data analysis, the sample size calculation should also be based on the test to ensure the desired power for the study. In some clinical trials, the treatment effect may not manifest itself right after patients receive the treatment. Therefore, the proportional hazards assumption may not hold. Furthermore, patients may discontinue the study treatment prematurely and thus may have diluted treatment effect after treatment discontinuation. If a patient’s treatment termination time is independent of his/her time-to-event of interest, the termination time can be treated as a censoring time in the final data analysis. Alternatively, we may keep collecting time-to-event data until study termination from those patients who discontinued the treatment and conduct an intent-to-treat (ITT) analysis by including them in the original treatment groups. We derive formulas necessary to calculate the asymptotic power of the log-rank test under this non-proportional hazards alternative for the two data analysis strategies. Simulation studies indicate that the formulas provide accurate power for a variety of trial settings. A clinical trial example is used to illustrate the application of the proposed methods. PMID:19152230
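The formulas derived in the paper (for the non-proportional-hazards, time-lag setting) are not given in the abstract. For orientation, the standard proportional-hazards starting point that such derivations extend is Schoenfeld's approximation, sketched below. This is a generic textbook calculation, not the authors' formula; the hazard ratio, power, and event probability are illustrative values.

```python
from math import ceil, log
from statistics import NormalDist

def logrank_sample_size(hr, power=0.8, alpha=0.05, p_event=1.0):
    """Required events and total n for a two-arm, 1:1 trial under
    Schoenfeld's approximation:
        d = 4 * (z_{1-alpha/2} + z_{power})**2 / ln(hr)**2
    Total n divides the event count by the expected event probability."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    events = 4 * (z_a + z_b) ** 2 / log(hr) ** 2
    return ceil(events), ceil(events / p_event)

# hazard ratio 0.7, 80% power, ~70% of patients expected to have an event
events, n = logrank_sample_size(0.7, power=0.8, p_event=0.7)  # 247 events, n = 353
```

A delayed treatment effect, as studied in the paper, dilutes the effective hazard ratio toward 1 and so pushes the required event count above this proportional-hazards baseline.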
Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard
2013-04-01
The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice. PMID:23365140
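The fix described here, Laney's p'-chart, keeps the classic p-chart limits but inflates them by a factor sigma_z that captures between-subgroup variation. A minimal sketch follows, assuming the usual moving-range estimate of sigma_z (divisor d2 = 1.128 for ranges of two); the counts and denominators are made up for illustration.

```python
from math import sqrt

def laney_p_prime_limits(counts, sizes):
    """Laney p'-chart: classic p-chart limits inflated by sigma_z,
    an estimate of between-subgroup variation taken from the average
    moving range of the subgroup z-scores (d2 = 1.128)."""
    pbar = sum(counts) / sum(sizes)
    sig = [sqrt(pbar * (1 - pbar) / n) for n in sizes]
    z = [(c / n - pbar) / s for c, n, s in zip(counts, sizes, sig)]
    amr = sum(abs(a - b) for a, b in zip(z[1:], z)) / (len(z) - 1)
    sigma_z = amr / 1.128
    return [(pbar - 3 * s * sigma_z, pbar + 3 * s * sigma_z) for s in sig]

# monthly event counts out of large denominators (illustrative numbers)
limits = laney_p_prime_limits([10, 20, 15, 30], [1000, 1000, 1000, 1000])
```

With sigma_z > 1 the limits widen relative to the traditional chart, which is exactly what prevents the "most points out of control" artefact described above for very large samples.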
NASA Astrophysics Data System (ADS)
Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.
2015-11-01
Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and subsequently how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to particulate samples. Particles were characterized for their chemical composition, including ions, element and PAH compounds, and transmission electron microscopy (TEM) was used to take images of the PM samples. Chemical composition and the induced toxicological responses of the size segregated PM samples showed considerable size dependent differences as well as day to night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM
Thompson, J K; Spana, R E
1991-08-01
The relationship between visuospatial ability and size accuracy in perception was assessed in 69 normal college females. In general, correlations indicated small associations between visuospatial defects and size overestimation and little relationship between visuospatial ability and level of bulimic disturbance. Implications for research on the size overestimation of body image are addressed. PMID:1945715
Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin
2014-10-01
Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min⁻¹ for the medium- and 4.4, 10, and 11.2 l min⁻¹ for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods
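The linear model quoted at the end of this abstract can be applied directly to translate a pulsation measured with the real-world sampling train into its EN-method equivalent. The coefficients below are those reported in the abstract; the function name and the example input are illustrative only.

```python
def pp_en_from_niosh(pp_niosh):
    """Linear model reported in the abstract (adjusted R^2 = 0.871)
    relating pump pulsation measured with a real-world sampling train
    (Lee et al., 2014a) to the EN resistor-based measurement."""
    return 0.014 + 0.375 * pp_niosh

# e.g. a pump showing 10.9% pulsation by the published (sampling-train) method
pp_en = pp_en_from_niosh(0.109)  # about 0.055, i.e. ~5.5% by the EN methods
```

Note the model only holds within the tested range of pumps, flow rates, and back pressures; extrapolating beyond those conditions is not supported by the fit.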
Adequate histologic sectioning of prostate needle biopsies.
Bostwick, David G; Kahane, Hillel
2013-08-01
No standard method exists for sampling prostate needle biopsies, although most reports claim to embed 3 cores per block and obtain 3 slices from each block. This study was undertaken to determine the extent of histologic sectioning necessary for optimal examination of prostate biopsies. We prospectively compared the impact on cancer yield of submitting 1 biopsy core per cassette (biopsies from January 2010) with 3 cores per cassette (biopsies from August 2010) from a large national reference laboratory. Between 6 and 12 slices were obtained with the former 1-core method, resulting in 3 to 6 slices being placed on each of 2 slides; for the latter 3-core method, a limit of 6 slices was obtained, resulting in 3 slices being placed on each of 2 slides. A total of 6708 sets of 12 to 18 core biopsies were studied, including 3509 biopsy sets from the 1-biopsy-core-per-cassette group (January 2010) and 3199 biopsy sets from the 3-biopsy-cores-per-cassette group (August 2010). The yield of diagnoses was classified as benign, atypical small acinar proliferation, high-grade prostatic intraepithelial neoplasia, and cancer and was similar between the 2 methods: 46.2%, 8.2%, 4.5%, and 41.1% and 46.7%, 6.3%, 4.4%, and 42.6%, respectively (P = .02). Submission of 1 core or 3 cores per cassette had no effect on the yield of atypical small acinar proliferation, prostatic intraepithelial neoplasia, or cancer in prostate needle biopsies. Consequently, we recommend submission of 3 cores per cassette to minimize the labor and cost of processing. PMID:23764163
ERIC Educational Resources Information Center
George, Goldy C.; Hoelscher, Deanna M.; Nicklas, Theresa A.; Kelder, Steven H.
2009-01-01
Objective: To examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. Design: Cross-sectional data from the School Physical Activity and Nutrition study, a probability-based sample of schoolchildren. Children completed a questionnaire that assessed…
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-04-15
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same
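The general tradeoff the authors describe (sample size grows as the required precision or confidence rises, or as the anticipated ED falls) can be illustrated with the textbook normal-approximation formula for estimating a mean to a given relative precision. This is a simplified stand-in, not the paper's constrained Lagrange-multiplier scheme, and the coefficient of variation used below is a made-up input:

```python
from math import ceil
from statistics import NormalDist

def n_for_relative_precision(cv: float, precision: float,
                             confidence: float = 0.95) -> int:
    """Textbook sample size for estimating a mean to relative precision r
    with the stated confidence: n >= (z * CV / r)^2, where z is the
    two-sided standard-normal quantile. A sketch only; the paper's scheme
    additionally models MOSFET calibration and between-scan variation."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return ceil((z * cv / precision) ** 2)

# Tighter precision demands more scans (CV = 12% is illustrative).
print(n_for_relative_precision(cv=0.12, precision=0.05))  # 5% precision
print(n_for_relative_precision(cv=0.12, precision=0.10))  # 10% precision
```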
21 CFR 201.5 - Drugs; adequate directions for use.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Drugs; adequate directions for use. 201.5 Section...) DRUGS: GENERAL LABELING General Labeling Provisions § 201.5 Drugs; adequate directions for use. Adequate directions for use means directions under which the layman can use a drug safely and for the purposes...
21 CFR 201.5 - Drugs; adequate directions for use.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 4 2011-04-01 2011-04-01 false Drugs; adequate directions for use. 201.5 Section...) DRUGS: GENERAL LABELING General Labeling Provisions § 201.5 Drugs; adequate directions for use. Adequate directions for use means directions under which the layman can use a drug safely and for the purposes...
4 CFR 200.14 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 4 Accounts 1 2010-01-01 2010-01-01 false Responsibility for maintaining adequate safeguards. 200.14 Section 200.14 Accounts RECOVERY ACCOUNTABILITY AND TRANSPARENCY BOARD PRIVACY ACT OF 1974 § 200.14 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and security...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and security...
4 CFR 200.14 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 4 Accounts 1 2011-01-01 2011-01-01 false Responsibility for maintaining adequate safeguards. 200.14 Section 200.14 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining adequate technical, physical, and security safeguards to prevent unauthorized disclosure...
Al-Kabab, FA; Ghoname, NA; Banabilh, SM
2014-01-01
Objective: The aim was to formulate prediction regression equations for Yemeni children and to compare them with Moyer's method for predicting the size of the un-erupted permanent canines and premolars. Subjects and Methods: Measurements of the mesio-distal width of the four permanent mandibular incisors, as well as the canines and premolars in both arches, were obtained from a sample of 400 school children aged 12-14 years (mean age 13.80 ± 0.42 years) using an electronic digital calliper. The data were subjected to statistical and linear regression analysis and then compared with Moyer's prediction tables. Results: The mean mesio-distal tooth widths of the canines and premolars in the maxillary arch were significantly larger in boys than in girls (P < 0.001), while in the mandibular arch only the lateral incisors and canines were significantly larger in boys than in girls (P < 0.001). Regression equations for the maxillary arch (boys, Y = 13.55 + 0.29X; girls, Y = 14.04 + 0.25X) and the mandibular arch (boys, Y = 9.97 + 0.40X; girls, Y = 9.56 + 0.41X) were formulated and used to develop new probability tables following Moyer's method. Significant differences (P < 0.05) were found between the widths predicted in the present study and Moyer's tables at almost all percentile levels, including the recommended 50% and 75% levels. Conclusions: Moyer's probability tables significantly overestimate the mesio-distal widths of the un-erupted permanent canines and premolars of Yemeni children at almost all percentile levels, including the commonly used 50% and 75% levels. It is therefore suggested, with caution, that the prediction regression equations and tables developed in the present study could be considered an alternative and more precise method for mixed dentition space analysis in Yemeni children. PMID:25143930
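The reported equations can be applied mechanically. A sketch with a hypothetical helper; the arch/sex lookup keys and the example incisor sum are illustrative, not values from the paper:

```python
# Fitted intercept/slope pairs from the abstract: Y = a + b * X, where X is
# the summed mesio-distal width (mm) of the four mandibular incisors and Y
# the predicted combined canine + premolar width (mm) for one quadrant.
EQUATIONS = {
    ("maxilla", "boys"): (13.55, 0.29),
    ("maxilla", "girls"): (14.04, 0.25),
    ("mandible", "boys"): (9.97, 0.40),
    ("mandible", "girls"): (9.56, 0.41),
}

def predict_width(arch: str, sex: str, incisor_sum_mm: float) -> float:
    """Predicted un-erupted canine + premolar width (mm) for the given arch
    and sex, using the abstract's regression coefficients."""
    a, b = EQUATIONS[(arch, sex)]
    return a + b * incisor_sum_mm

# Hypothetical patient: incisor sum of 22.0 mm.
print(round(predict_width("mandible", "girls", 22.0), 2))
```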
Neumann, Christoph; Taub, Margaret A.; Younkin, Samuel G.; Beaty, Terri H.; Ruczinski, Ingo; Schwender, Holger
2014-01-01
Case-parent trio studies considering genotype data from children affected by a disease and from their parents are frequently used to detect single nucleotide polymorphisms (SNPs) associated with disease. The most popular statistical tests in this study design are transmission/disequilibrium tests (TDTs). Several types of these tests have been developed, e.g., procedures based on alleles or genotypes. It is therefore of great interest to examine which of these tests have the highest statistical power to detect SNPs associated with disease. Comparisons of the allelic and the genotypic TDT for individual SNPs have so far been conducted by simulation, since the test statistic of the genotypic TDT was determined numerically. Recently, however, it has been shown that this test statistic can be presented in closed form. In this article, we employ this analytic solution to derive equations for calculating the statistical power and the required sample size for different types of the genotypic TDT. The power of this test is then compared with that of the corresponding score test assuming the same mode of inheritance, as well as with the allelic TDT based on a multiplicative mode of inheritance, which is equivalent to the score test assuming an additive mode of inheritance. This is thus the first time that the power of these tests has been compared based on equations, yielding instant results and omitting the need for time-consuming simulation studies. This comparison reveals that the tests have almost the same power, with the score test being slightly more powerful. PMID:25123830
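Power comparisons of this kind reduce, under the usual normal approximation, to evaluating tail probabilities at a noncentrality parameter. A generic sketch of that step; this is the standard two-sided z-test power formula, not the paper's exact genotypic-TDT expressions, and the example noncentrality value is arbitrary:

```python
from statistics import NormalDist

def power_normal_approx(ncp: float, alpha: float = 0.05) -> float:
    """Power of a two-sided z-test with noncentrality ncp under the normal
    approximation: P(Z > z_{1-a/2} - ncp) + P(Z < -z_{1-a/2} - ncp).
    The test-specific ncp (mode of inheritance, allele frequency, sample
    size) is what the paper derives in closed form."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(ncp - z) + nd.cdf(-ncp - z)

# The familiar benchmark: ncp ~ 2.8 gives roughly 80% power at alpha = 0.05.
print(round(power_normal_approx(2.8), 2))
```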
Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife
2016-01-01
Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured, and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (10⁵/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment, and 0.036 mm/yr for two CPF tank bottom sludges, which had high numbers of APB (10⁶/ml) and SRB (10⁸/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads, and injected with 10 ml/day of SRB-containing medium for 256 days gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows
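Weight-loss corrosion rates of the kind reported can be computed with the standard ASTM G1-style conversion. A sketch in which the bead surface area (treated as a sphere of Ø 0.238 cm) and the example weight loss and exposure time are assumptions, not data from the study:

```python
from math import pi

def corrosion_rate_mm_per_yr(weight_loss_mg: float, area_cm2: float,
                             hours: float,
                             density_g_cm3: float = 7.85) -> float:
    """Weight-loss corrosion rate, ASTM G1-style:
    CR [mm/yr] = 87.6 * W[mg] / (A[cm^2] * t[h] * rho[g/cm^3]).
    Default density is that of carbon steel."""
    return 87.6 * weight_loss_mg / (area_cm2 * hours * density_g_cm3)

# Assumed surface area of one bead: sphere of diameter 0.238 cm.
bead_area_cm2 = pi * 0.238 ** 2

# Hypothetical example: 0.5 mg lost from one bead over a 60-day incubation.
print(round(corrosion_rate_mm_per_yr(0.5, bead_area_cm2, 60 * 24), 3))
```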
Mehta, Cyrus; Liu, Lingyun
2016-02-10
Over the past 25 years, adaptive designs have gradually gained acceptance and are being used with increasing frequency in confirmatory clinical trials. Recent surveys of submissions to the regulatory agencies reveal that the most popular type of adaptation is unblinded sample size re-estimation. Concerns have nevertheless been raised that this type of adaptation is inefficient. We intend to show in our discussion that such concerns are greatly exaggerated in any practical setting and that the advantages of adaptive sample size re-estimation usually outweigh any minor loss of efficiency. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26757953
NASA Astrophysics Data System (ADS)
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-07-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
In large-scale studies, it is often neither feasible nor necessary to obtain the large samples of 400 particles advocated by many geomorphologists to adequately quantify streambed surface particle-size distributions. Synoptic surveys such as U.S. Environmental Protection Agency...
Technology Transfer Automated Retrieval System (TEKTRAN)
The objective of this research was to examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. The research design consisted of cross-sectional data from the School Physical Activity and Nutrition study, ...
Technology Transfer Automated Retrieval System (TEKTRAN)
Particle size distributions (PSD) have long been used to more accurately estimate the PM10 fraction of total particulate matter (PM) stack samples taken from agricultural sources. These PSD analyses were typically conducted using a Coulter Counter with 50 micrometer aperture tube. With recent increa...
ERIC Educational Resources Information Center
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
ERIC Educational Resources Information Center
Misanchuk, Earl R.
Multiple matrix sampling of three subscales of the California Psychological Inventory was used to investigate the effects of four variables on error estimates of the mean (EEM) and variance (EEV). The four variables were examinee population size (600, 450, 300, 150, 100, and 75); number of subtests, (2, 3, 4, 5, 6, and 7), hence the number of…
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a “summary statistical representation” over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4. PMID:27242622
Delgado-Saborit, Juana Maria; Stark, Christopher; Harrison, Roy M
2014-01-01
The design and performance of a multiparallel plate denuder able to operate at low and high flow rates (3-30 L/min) for the collection of polycyclic aromatic hydrocarbon (PAH) vapor are described. The denuder, in combination with a micro orifice uniform deposit impactor (MOUDI), was used to assess processes of artifact formation in MOUDIs used with and without an upstream denuder. Duplicate sampling trains with an upstream denuder showed good repeatability of the measured gas- and particle-phase concentrations and low breakthrough in the denuder (3.5-15%). The PAH size distributions within undenuded and denuded MOUDIs were studied. Use of the denuder shifted the measured size distribution of PAHs toward smaller sizes, but both denuded and undenuded systems are subject to sampling artifacts. PMID:24279283
Technology Transfer Automated Retrieval System (TEKTRAN)
During the regeneration of cross-pollinating accessions, genetic contamination from foreign pollen and reduction of the effective population size can be a hindrance to maintaining the genetic diversity in the temperate grass collection at the Western Regional Plant Introduction Station (WRPIS). The...
ERIC Educational Resources Information Center
Anstey, Kaarin J.; Mack, Holly A.; Christensen, Helen; Li, Shu-Chen; Reglade-Meslin, Chantal; Maller, Jerome; Kumar, Rajeev; Dear, Keith; Easteal, Simon; Sachdev, Perminder
2007-01-01
Intra-individual variability in reaction time increases with age and with neurological disorders, but the neural correlates of this increased variability remain uncertain. We hypothesized that both faster mean reaction time (RT) and less intra-individual RT variability would be associated with larger corpus callosum (CC) size in older adults, and…
Jalava, Pasi I. (E-mail: Pasi.Jalava@ktl.fi); Salonen, Raimo O.; Haelinen, Arja I.; Penttinen, Piia; Pennanen, Arto S.; Sillanpaeae, Markus; Sandell, Erik; Hillamo, Risto; Hirvonen, Maija-Riitta
2006-09-15
The impact of long-range transport (LRT) episodes of wildfire smoke on the inflammogenic and cytotoxic activity of urban air particles was investigated in mouse RAW 264.7 macrophages. The particles were sampled in four size ranges using a modified Harvard high-volume cascade impactor, and the samples were chemically characterized to identify different emission sources. The particulate mass concentration in the accumulation size range (PM1-0.2) was highly increased during two LRT episodes, but the contents of total and genotoxic polycyclic aromatic hydrocarbons (PAH) in the collected particulate samples were only 10-25% of those in the seasonal average sample. The ability of coarse (PM10-2.5), intermodal size range (PM2.5-1), PM1-0.2, and ultrafine (PM0.2) particles to cause cytokine production (TNF-α, IL-6, MIP-2) decreased with smaller particle size, but the size range had a much smaller impact on induced nitric oxide (NO) production and on cytotoxicity or apoptosis. The aerosol particles collected during LRT episodes had a substantially lower activity in cytokine production than the corresponding particles of the seasonal average period, which is suggested to be due to chemical transformation of the organic fraction during aging. However, the episode events were associated with enhanced inflammogenic and cytotoxic activities per inhaled cubic meter of air due to the greatly increased particulate mass concentration in the accumulation size range, which may have public health implications.
7 CFR 4290.200 - Adequate capital for RBICs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 15 2011-01-01 2011-01-01 false Adequate capital for RBICs. 4290.200 Section 4290.200 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND... Qualifications for the RBIC Program Capitalizing A Rbic § 4290.200 Adequate capital for RBICs. You must meet...
13 CFR 107.200 - Adequate capital for Licensees.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Adequate capital for Licensees... INVESTMENT COMPANIES Qualifying for an SBIC License Capitalizing An Sbic § 107.200 Adequate capital for... Licensee, and to receive Leverage. (a) You must have enough Regulatory Capital to provide...
13 CFR 107.200 - Adequate capital for Licensees.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Adequate capital for Licensees... INVESTMENT COMPANIES Qualifying for an SBIC License Capitalizing An Sbic § 107.200 Adequate capital for... Licensee, and to receive Leverage. (a) You must have enough Regulatory Capital to provide...
7 CFR 4290.200 - Adequate capital for RBICs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Adequate capital for RBICs. 4290.200 Section 4290.200 Agriculture Regulations of the Department of Agriculture (Continued) RURAL BUSINESS-COOPERATIVE SERVICE AND... Qualifications for the RBIC Program Capitalizing A Rbic § 4290.200 Adequate capital for RBICs. You must meet...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Adequate file search. 716.25 Section 716.25 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 2 2011-07-01 2011-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 2 2012-07-01 2012-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 2 2014-07-01 2014-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 51.354 - Adequate tools and resources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 2 2013-07-01 2013-07-01 false Adequate tools and resources. 51.354... Requirements § 51.354 Adequate tools and resources. (a) Administrative resources. The program shall maintain the administrative resources necessary to perform all of the program functions including...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
40 CFR 716.25 - Adequate file search.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Adequate file search. 716.25 Section... ACT HEALTH AND SAFETY DATA REPORTING General Provisions § 716.25 Adequate file search. The scope of a person's responsibility to search records is limited to records in the location(s) where the...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...
10 CFR 1304.114 - Responsibility for maintaining adequate safeguards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Responsibility for maintaining adequate safeguards. 1304.114 Section 1304.114 Energy NUCLEAR WASTE TECHNICAL REVIEW BOARD PRIVACY ACT OF 1974 § 1304.114 Responsibility for maintaining adequate safeguards. The Board has the responsibility for maintaining...
10 CFR 503.35 - Inability to obtain adequate capital.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Inability to obtain adequate capital. 503.35 Section 503.35 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS NEW FACILITIES Permanent Exemptions for New Facilities § 503.35 Inability to obtain adequate capital. (a) Eligibility. Section 212(a)(1)(D)...
10 CFR 503.35 - Inability to obtain adequate capital.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Inability to obtain adequate capital. 503.35 Section 503.35 Energy DEPARTMENT OF ENERGY (CONTINUED) ALTERNATE FUELS NEW FACILITIES Permanent Exemptions for New Facilities § 503.35 Inability to obtain adequate capital. (a) Eligibility. Section 212(a)(1)(D)...
15 CFR 970.404 - Adequate exploration plan.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 3 2011-01-01 2011-01-01 false Adequate exploration plan.... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Certification of Applications § 970.404 Adequate exploration plan. Before he may certify an application, the Administrator must find...
15 CFR 970.404 - Adequate exploration plan.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Adequate exploration plan.... ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES Certification of Applications § 970.404 Adequate exploration plan. Before he may certify an application, the Administrator must find...
"Something Adequate"? In Memoriam Seamus Heaney, Sister Quinlan, Nirbhaya
ERIC Educational Resources Information Center
Parker, Jan
2014-01-01
Seamus Heaney talked of poetry's responsibility to represent the "bloody miracle", the "terrible beauty" of atrocity; to create "something adequate". This article asks, what is adequate to the burning and eating of a nun and the murderous gang rape and evisceration of a medical student? It considers Njabulo…
Fong, Erika J.; Huang, Chao; Hamilton, Julie; Benett, William J.; Bora, Mihail; Burklund, Alison; Metz, Thomas R.; Shusteff, Maxim
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
Fong, Erika J.; Huang, Chao; Hamilton, Julie; Benett, William J.; Bora, Mihail; Burklund, Alison; Metz, Thomas R.; Shusteff, Maxim
2015-01-01
A major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully-automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization, performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing. PMID:26651055
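The closed-loop, feedback-driven sample collection described in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, signal source, and threshold are hypothetical, and the sketch only shows the general idea of routing eluate to a collection vial while a real-time detector signal indicates the target particle population is present.

```python
# Minimal sketch of closed-loop collection gating, assuming a hypothetical
# normalized detector signal (0.0-1.0) sampled once per fraction.
# All names and thresholds here are illustrative, not from the protocol.

def collect_fraction(signal_reading: float, threshold: float = 0.8) -> bool:
    """Return True when the eluate should be routed to the collection vial."""
    return signal_reading >= threshold

def run_collection(readings: list[float], threshold: float = 0.8) -> list[bool]:
    """Gate a sequence of fractions; only above-threshold fractions are kept,
    which conserves and concentrates the sample as described in the abstract."""
    return [collect_fraction(r, threshold) for r in readings]

if __name__ == "__main__":
    # Simulated detector trace: signal rises as the target population elutes.
    trace = [0.2, 0.5, 0.9, 0.95, 0.7]
    print(run_collection(trace))  # → [False, False, True, True, False]
```

Only fractions with a detector signal at or above the (assumed) threshold are collected, so waste volume is discarded and the collected sample stays concentrated.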
NASA Astrophysics Data System (ADS)
Steven, E.; Jobiliong, E.; Eugenio, P. M.; Brooks, J. S.
2012-04-01
A procedure is described for fabricating adhesive stamp electrodes, based on gold-coated adhesive tape, used to measure the electronic transport properties of supra-micron samples with lateral dimensions in the range 10-100 μm and thickness >1 μm. The electrodes can be patterned with a ~4 μm separation by metal deposition through a mask using Nephila clavipes spider dragline silk fibers. Ohmic contact is made by adhesive lamination of a sample onto the patterned electrodes. The performance of the electrodes with temperature and magnetic field is demonstrated for the quasi-one-dimensional organic conductor (TMTSF)2PF6 and single crystal graphite, respectively.